Clever particle filters, sequential importance sampling and the optimal proposal
NASA Astrophysics Data System (ADS)
Snyder, Chris
2014-05-01
Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk+1 over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, although they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
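The degeneracy mechanism described in this abstract is easy to reproduce numerically. The sketch below (illustrative only; the function name and the i.i.d. standard-normal log-weight model are my assumptions, not from the abstract) shows the average maximum normalized importance weight approaching unity as the number of i.i.d. degrees of freedom grows:

```python
import numpy as np

rng = np.random.default_rng(0)


def max_weight(dim, n_particles=1000, n_trials=50):
    """Average maximum normalized importance weight when each particle's
    log-weight is a sum of `dim` i.i.d. standard-normal contributions."""
    maxima = []
    for _ in range(n_trials):
        log_w = rng.standard_normal((n_particles, dim)).sum(axis=1)
        w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
        maxima.append((w / w.sum()).max())
    return float(np.mean(maxima))


for dim in (1, 10, 100):
    print(dim, max_weight(dim))
```

With 1000 particles the maximum weight is small for dim = 1 but is close to 1 by dim = 100, illustrating why the required ensemble size grows exponentially with this quantity.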
Neuromuscular fiber segmentation through particle filtering and discrete optimization
NASA Astrophysics Data System (ADS)
Dietenbeck, Thomas; Varray, François; Kybic, Jan; Basset, Olivier; Cachard, Christian
2014-03-01
We present an algorithm to segment a set of parallel, intertwined and bifurcating fibers from 3D images, targeted for the identification of neuronal fibers in very large sets of 3D confocal microscopy images. The method consists of preprocessing, local calculation of fiber probabilities, seed detection, tracking by particle filtering, global supervised seed clustering and final voxel segmentation. The preprocessing uses a novel random local probability filtering (RLPF). The fiber probabilities computation is performed by means of an SVM using steerable filters and the RLPF outputs as features. The global segmentation is solved by discrete optimization. The combination of global and local approaches makes the segmentation robust, yet the individual data blocks can be processed sequentially, limiting memory consumption. The method is automatic, but efficient manual interactions are possible if needed. The method is validated on the Neuromuscular Projection Fibers dataset from the Diadem Challenge. On the first 15 blocks present, our method has a 99.4% detection rate. We also compare our segmentation results to a state-of-the-art method. On average, the performance of our method is higher than or equivalent to that of the state-of-the-art method, but fewer user interactions are needed in our approach.
Optimizing Fourier filtering for digital holographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Ooms, Thomas; Koek, Wouter; Braat, Joseph; Westerweel, Jerry
2006-02-01
In digital holographic particle image velocimetry, the particle image depth-of-focus and the inaccuracy of the measured particle position along the optical axis are relatively large in comparison to the characteristic transverse dimension of the reconstructed particle images. This is the result of a low optical numerical aperture (NA), which is limited by the relatively large pixel size of the CCD camera. Additionally, the anisotropic light scattering behaviour of the seeding particles further reduces the effective numerical aperture of the optical system and substantially increases the particle image depth-of-focus. Introducing an appropriate Fourier filter can significantly suppress this additional reduction of the NA. Experimental results illustrate that an improved Fourier filter reduces the particle image depth-of-focus. For the system described in this paper, this improvement is nearly a factor of 5. Using the improved Fourier filter comes with an acceptable reduction of the hologram intensity, so an extended exposure time is needed to maintain the exposure level.
Bounds on the performance of particle filters
NASA Astrophysics Data System (ADS)
Snyder, C.; Bengtsson, T.
2014-12-01
Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time tk depend only on the observations at tk and the particles and weights at tk-1), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at tk, but also over draws from the joint proposal for tk-1 and tk. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction will require very large numbers of particles.
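For linear observations and Gaussian noise, the optimal proposal discussed above has a closed form. A minimal sketch, assuming linear dynamics M, observation operator H, and noise covariances Q and R (the function name and interface are mine, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(1)


def optimal_proposal_step(particles, y, M, Q, H, R):
    """One assimilation step drawing from the optimal proposal
    p(x_k | x_{k-1}, y_k) for x_k = M x_{k-1} + w, y_k = H x_k + v."""
    Qi = np.linalg.inv(Q)
    Ri = np.linalg.inv(R)
    P = np.linalg.inv(Qi + H.T @ Ri @ H)   # proposal covariance
    S = H @ Q @ H.T + R                    # innovation covariance for the weight
    L = np.linalg.cholesky(P)
    new_particles, log_w = [], []
    for x in particles:
        mu_prior = M @ x
        m = P @ (Qi @ mu_prior + H.T @ Ri @ y)       # proposal mean
        new_particles.append(m + L @ rng.standard_normal(m.shape))
        innov = y - H @ mu_prior
        log_w.append(-0.5 * innov @ np.linalg.solve(S, innov))
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    return np.array(new_particles), w / w.sum()
```

The incremental weight depends only on x_{k-1} through the predicted observation H M x_{k-1}, which is exactly why no other single-step proposal can achieve lower weight variance.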
NASA Astrophysics Data System (ADS)
Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús
2006-05-01
Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.
Particle Kalman Filtering: A Nonlinear Framework for Ensemble Kalman Filters
NASA Astrophysics Data System (ADS)
Hoteit, Ibrahim; Luo, Xiaodong; Pham, Dinh-Tuan; Moroz, Irene M.
2010-09-01
Optimal nonlinear filtering consists of sequentially determining the conditional probability distribution functions (pdf) of the system state, given the information of the dynamical and measurement processes and the previous measurements. Once the pdfs are obtained, one can determine different estimates, for instance, the minimum variance estimate, or the maximum a posteriori estimate, of the system state. It can be shown that many filters, including the Kalman filter (KF) and the particle filter (PF), can be derived based on this sequential Bayesian estimation framework. In this contribution, we present a Gaussian mixture-based framework, called the particle Kalman filter (PKF), and discuss how the different ensemble Kalman filter (EnKF) methods can be derived as simplified variants of the PKF. We also discuss approaches to reducing the computational burden of the PKF in order to make it suitable for complex geosciences applications. We use the strongly nonlinear Lorenz-96 model to illustrate the performance of the PKF.
Variational Particle Filter for Imperfect Models
NASA Astrophysics Data System (ADS)
Baehr, C.
2012-12-01
Whereas classical data processing techniques work with perfect models, geophysical sciences have to deal with imperfect models with spatially structured errors. For the perfect-model case, in terms of mean-field Markovian processes, the optimal filter is known: the Kalman estimator is the answer to the linear-Gaussian problem, and in the general case particle approximations are the empirical solutions to the optimal estimator. We will present another way to decompose the Bayes rule, using a one-step-ahead observation. This method is well adapted to strongly nonlinear or chaotic systems. Then, in order to deal with imperfect models, we suggest in this presentation to learn the (large-scale) model errors using a variational correction before the resampling step of the nonlinear filtering. This procedure replaces the a-priori Markovian transition by a kernel conditioned on the observations. This supplementary step may be read as the use of a variational particle approximation. For the numerical applications, we have chosen to show the impact of our method, first on a simple marked Poisson process with Gaussian observation noises (the time-exponential jumps are considered as model errors) and then on a 2D shallow-water experiment in a closed basin, with falling droplets as model errors. Figure captions: marked Poisson process with Gaussian observation noise filtered by four methods (classical Kalman filter, genetic particle filter, trajectorial particle filter and Kalman-particle filter), all using only 10 particles; 2D shallow-water simulation with droplet errors, showing results of a classical 3DVAR and of our VarPF (10 particles).
Optimal filtering and filter stability of linear stochastic delay systems
NASA Technical Reports Server (NTRS)
Kwong, R. H.-S.; Willsky, A. S.
1977-01-01
Optimal filtering equations are obtained for very general linear stochastic delay systems. Stability of the optimal filter is studied in the case where there are no delays in the observations. Using the duality between linear filtering and control, asymptotic stability of the optimal filter is proved. Finally, the cascade of the optimal filter and the deterministic optimal quadratic control system is shown to be asymptotically stable as well.
Optimization of integrated polarization filters.
Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J
2014-10-01
This study reports on the design of small-footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs not only allow for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert
1998-04-30
Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies.
A plan for the Option I, bench-scale test program has also been developed based on the issues identified. The two advanced barrier filter systems have been found to have the potential to be significantly more reliable and less expensive to operate than standard ceramic candle filter system designs. Their key development requirements are the assessment of the design and manufacturing feasibility of the ceramic filter elements, and the small-scale demonstration of their conceptual reliability and availability merits.
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
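The basic particle swarm update behind the method discussed above fits in a few lines. A minimal sketch (standard inertia/cognitive/social form with generic parameter values; not the improved variant the paper proposes):

```python
import numpy as np

rng = np.random.default_rng(2)


def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box bounds with a basic particle swarm optimizer."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better] = x[better]
        pbest_f[better] = fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For example, `pso(lambda p: float((p**2).sum()), [(-5, 5)] * 3)` drives the 3-D sphere function close to its minimum at the origin.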
Particle flow for nonlinear filters with log-homotopy
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2008-04-01
We describe a new nonlinear filter that is vastly superior to the classic particle filter. In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems with dimension greater than 2 or 3. We consider nonlinear estimation problems with dimensions varying from 1 to 20 that are smooth and fully coupled (i.e. dense not sparse). The new filter implements Bayes' rule using particle flow rather than with a pointwise multiplication of two functions; this avoids one of the fundamental and well known problems in particle filters, namely "particle collapse" as a result of Bayes' rule. We use a log-homotopy to derive the ODE that describes particle flow. This paper was written for normal engineers, who do not have homotopy for breakfast.
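In the linear-Gaussian case the particle-flow ODE has a closed form, which makes the idea concrete. A sketch using the exact-flow coefficients A(lambda) and b(lambda) reported in the particle-flow literature for that case, integrated with simple Euler steps (the function name, the ensemble-based estimation of the prior moments, and the step count are my choices, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(6)


def exact_flow_update(particles, z, H, R, n_steps=200):
    """Move prior particles toward the posterior by integrating
    dx/dlambda = A(lambda) x + b(lambda) from lambda = 0 to 1."""
    x = particles.copy()
    xbar = x.mean(axis=0)                 # prior mean, held fixed
    P = np.atleast_2d(np.cov(x.T))        # prior covariance, held fixed
    I = np.eye(x.shape[1])
    dlam = 1.0 / n_steps
    for step in range(n_steps):
        lam = step * dlam
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2 * lam * A) @ (
            (I + lam * A) @ P @ H.T @ np.linalg.inv(R) @ z + A @ xbar)
        x = x + dlam * (x @ A.T + b)      # Euler step of the flow
    return x
```

Because Bayes' rule is implemented by moving particles rather than reweighting them, no weight normalization (and hence no particle collapse) occurs; for a standard-normal prior, H = R = 1 and z = 2, the flowed ensemble matches the Kalman posterior (mean 1, variance 0.5).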
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwin evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to hopefully provide an improved solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. 
These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
Towards robust particle filters for high-dimensional systems
NASA Astrophysics Data System (ADS)
van Leeuwen, Peter Jan
2015-04-01
In recent years particle filters have matured and several variants are now available that are not degenerate for high-dimensional systems. Often they are based on ad-hoc combinations with Ensemble Kalman Filters. Unfortunately it is unclear what approximations are made when these hybrids are used. The proper way to derive particle filters for high-dimensional systems is exploring the freedom in the proposal density. It is well known that using an Ensemble Kalman Filter as proposal density (the so-called Weighted Ensemble Kalman Filter) does not work for high-dimensional systems. However, much better results are obtained when weak-constraint 4Dvar is used as proposal, leading to the implicit particle filter. Still this filter is degenerate when the number of independent observations is large. The Equivalent-Weights Particle Filter is a filter that works well in systems of arbitrary dimensions, but it contains a few tuning parameters that have to be chosen well to avoid biases. In this paper we discuss ways to derive more robust particle filters for high-dimensional systems. Using ideas from large-deviation theory and optimal transportation particle filters will be generated that are robust and work well in these systems. It will be shown that all successful filters can be derived from one general framework. Also, the performance of the filters will be tested on simple but high-dimensional systems, and, if time permits, on a high-dimensional highly nonlinear barotropic vorticity equation model.
Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization
Pei, Fujun; Wu, Mei; Zhang, Simin
2014-01-01
The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with the centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed by mixing the innovation method and the effective number of particles method, combining the advantages of these two methods. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improved the ability of the DPF-SLAM system to isolate faults and enabled the system to have better tolerance and robustness. PMID:24883362
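The "effective number of particles" criterion mentioned above is a standard diagnostic: N_eff = 1 / sum(w_i^2), which is N for uniform weights and approaches 1 as the weights degenerate. A minimal sketch of the diagnostic together with systematic resampling (function names are mine; this is the generic mechanism, not the paper's fusion method):

```python
import numpy as np

rng = np.random.default_rng(3)


def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)


def systematic_resample(weights):
    """Systematic resampling: one uniform offset, n evenly spaced positions.
    Returns the indices of the surviving particles."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)
```

A common rule is to resample only when N_eff drops below a threshold such as N/2, which limits the impoverishment that unconditional resampling causes.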
Westinghouse Advanced Particle Filter System
Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.; Bachovchin, D.M.
1996-12-31
Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial, power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper reports on the development and status of testing of the Westinghouse Advanced Hot Gas Particle Filter (W-APF) including: W-APF integrated operation with the American Electric Power, 70 MW PFBC clean coal facility--approximately 6000 test hours completed; approximately 2500 hours of testing at the Hans Ahlstrom 10 MW PCFB facility located in Karhula, Finland; over 700 hours of operation at the Foster Wheeler 2 MW 2nd generation PFBC facility located in Livingston, New Jersey; status of Westinghouse HGF supply for the DOE Southern Company Services Power System Development Facility (PSDF) located in Wilsonville, Alabama; the status of the Westinghouse development and testing of HGFs for Biomass Power Generation; and the status of the design and supply of the HGF unit for the 95 MW Pinon Pine IGCC Clean Coal Demonstration.
System and Apparatus for Filtering Particles
NASA Technical Reports Server (NTRS)
Agui, Juan H. (Inventor); Vijayakumar, Rajagopal (Inventor)
2015-01-01
A modular pre-filtration apparatus may be beneficial to extend the life of a filter. The apparatus may include an impactor that can collect a first set of particles in the air, and a scroll filter that can collect a second set of particles in the air. A filter may follow the pre-filtration apparatus, thus causing the life of the filter to be increased.
Fully optimal filter for ALLEGRO
NASA Astrophysics Data System (ADS)
Santostasi, Giovanni
2004-03-01
The FAST and SLOW filters are compared when applied to data from one-mode and two-mode resonant gravitational wave detectors. There is no substantial difference between the performance of the two filters in the case of the one-mode detector. A notable reduction of the noise temperature is achieved for a two-mode detector when filtering the data with the FAST filter. We explain the principal reason for the better performance of the FAST filter with respect to the SLOW filter. We also observed that the performance of the FAST filter depends on the ratio Γ between the thermal narrow-band noise and the SQUID amplifier white noise.
Particle filter tracking for the banana problem
NASA Astrophysics Data System (ADS)
Romeo, Kevin; Willett, Peter; Bar-Shalom, Yaakov
2013-09-01
In this paper we present an approach for tracking with a high-bandwidth active sensor in very long range scenarios. We show that in these scenarios the extended Kalman filter is not desirable, as it suffers from major consistency problems, and most flavors of particle filter suffer from a loss of diversity among particles after resampling. This leads to sample impoverishment and divergence of the filter. In the scenarios studied, this loss of diversity can be attributed to the very low process noise. However, a regularized particle filter is shown to avoid this diversity problem while producing consistent results. The regularization is accomplished using a modified version of the Epanechnikov kernel.
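The regularized particle filter restores diversity by jittering each resampled copy with a draw from a kernel, classically the Epanechnikov kernel. A minimal sketch of the generic mechanism (not the paper's modified kernel; the bandwidth rule and function names are my assumptions), using the fact that the median of three U(-1, 1) variables has exactly the Epanechnikov density 3/4 (1 - x^2):

```python
import numpy as np

rng = np.random.default_rng(4)


def epanechnikov(size):
    """Sample the Epanechnikov kernel as the median of three U(-1, 1) draws."""
    u = rng.uniform(-1.0, 1.0, (3,) + tuple(size))
    return np.median(u, axis=0)


def regularized_resample(particles, weights):
    """Resample, then jitter each copy with a scaled Epanechnikov kernel
    so that duplicated particles regain diversity."""
    n, dim = particles.shape
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    resampled = particles[idx]
    h = (4.0 / (n * (dim + 2))) ** (1.0 / (dim + 4))   # rule-of-thumb bandwidth
    L = np.linalg.cholesky(np.cov(particles.T) + 1e-12 * np.eye(dim))
    return resampled + h * epanechnikov((n, dim)) @ L.T
```

Scaling the kernel by the ensemble covariance keeps the jitter consistent with the spread of the filtered distribution, which matters precisely in the low-process-noise regime the paper studies.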
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar
2002-06-30
Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 C (1000 F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.
Adaptive particle swarm optimization.
Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Chung, Henry Shu-Hung
2009-12-01
An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify one of the following four defined evolutionary states, including exploration, exploitation, convergence, and jumping out in each generation. It enables the automatic control of inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as convergence state. The strategy will act on the globally best particle to jump out of the likely local optima. The APSO has comprehensively been evaluated on 12 unimodal and multimodal benchmark functions. The effects of parameter adaptation and elitist learning will be studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces two new parameters to the PSO paradigm only, it does not introduce an additional design or implementation complexity. PMID:19362911
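The core of the adaptation described above is a scalar "evolutionary factor" computed from the population distribution, mapped to the inertia weight. The sketch below is my reconstruction under stated assumptions (the sigmoid mapping 1/(1 + 1.5 e^(-2.6 f)), which confines the inertia to roughly [0.4, 0.9], is recalled from the APSO paper but should be checked against it; function names are mine):

```python
import numpy as np


def evolutionary_factor(positions, gbest_index):
    """Evolutionary factor f in [0, 1]: the globally best particle's mean
    distance to the others, rescaled by the swarm's min/max mean distances."""
    n = len(positions)
    d = np.array([
        np.mean([np.linalg.norm(positions[i] - positions[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    dg = d[gbest_index]
    return float((dg - d.min()) / (d.max() - d.min() + 1e-12))


def adaptive_inertia(f):
    """Sigmoid mapping of the evolutionary factor to the inertia weight."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```

Small f (best particle inside a tight cluster) indicates convergence and yields low inertia for exploitation; large f (best particle far from the crowd) indicates jumping out and yields high inertia for exploration.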
Merging particle filter for sequential data assimilation
NASA Astrophysics Data System (ADS)
Nakano, S.; Ueno, G.; Higuchi, T.
2007-07-01
A new filtering technique for sequential data assimilation, the merging particle filter (MPF), is proposed. The MPF is devised to avoid the degeneration problem, which is inevitable in the particle filter (PF), without prohibitive computational cost. In addition, it is applicable to cases in which a nonlinear relationship exists between a state and observed data where the application of the ensemble Kalman filter (EnKF) is not effectual. In the MPF, the filtering procedure is performed based on sampling of a forecast ensemble as in the PF. However, unlike the PF, each member of a filtered ensemble is generated by merging multiple samples from the forecast ensemble such that the mean and covariance of the filtered distribution are approximately preserved. This merging of multiple samples allows the degeneration problem to be avoided. In the present study, the newly proposed MPF technique is introduced, and its performance is demonstrated experimentally.
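The merging step described above can be sketched directly: each filtered particle is a fixed linear combination of three independently resampled particles, with coefficients satisfying sum(a) = 1 and sum(a^2) = 1 so that the mean and covariance of the filtered distribution are approximately preserved. The specific coefficient values below are those I associate with Nakano et al. for three merged particles (any triple satisfying both constraints works; function names are mine):

```python
import numpy as np

rng = np.random.default_rng(5)

# Coefficients with sum(a) = 1 and sum(a^2) = 1.
A = np.array([3.0 / 4.0,
              (np.sqrt(13.0) + 1.0) / 8.0,
              -(np.sqrt(13.0) - 1.0) / 8.0])


def merging_filter_update(particles, weights):
    """Merging particle filter update: draw three independent resampled
    ensembles and combine them linearly with coefficients A."""
    n = len(particles)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    draws = [particles[rng.choice(n, size=n, p=p)] for _ in range(3)]
    return sum(a * d for a, d in zip(A, draws))
```

Because the three resampled ensembles are independent draws from the same filtered distribution, the merged ensemble has mean sum(a_j) * mu = mu and covariance sum(a_j^2) * P = P, while each merged particle is a genuinely new point, which is how the degeneration problem is avoided.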
Particle Kalman Filtering: A Nonlinear Bayesian Framework for Ensemble Kalman Filters
NASA Astrophysics Data System (ADS)
Hoteit, I.; Luo, X.; Pham, D.
2012-12-01
This contribution discusses a discrete scheme of the optimal nonlinear Bayesian filter based on the Gaussian mixture representation of the state probability distribution function. The resulting filter is similar to the particle filter, but differs from it in that the standard weight-type correction in the particle filter is complemented by a Kalman-type correction with the associated covariance matrices in the Gaussian mixture. It is therefore referred to as the particle Kalman filter (PKF). In the PKF, the solution of a nonlinear filtering problem is expressed as the weighted average of an "ensemble of Kalman filters" operating in parallel. Running an ensemble of Kalman filters is, however, computationally prohibitive for realistic atmospheric and oceanic data assimilation problems. The PKF is then implemented through an "ensemble" of ensemble Kalman filters (EnKFs), and we refer to this implementation as the particle EnKF (PEnKF). We also discuss how the different types of the EnKFs can be considered as special cases of the PEnKF. Numerical experiments with the strongly nonlinear Lorenz-96 model will be presented.
Adaptive Mallow's optimization for weighted median filters
NASA Astrophysics Data System (ADS)
Rachuri, Raghu; Rao, Sathyanarayana S.
2002-05-01
This work extends the idea of spectral optimization for the design of weighted median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of the filter's detail-preserving ability and give non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass, as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
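The core operation, a weighted median that admits negative weights through the standard sign-coupling construction, can be sketched as follows. This is illustrative Python; the example window and weights are assumptions, not the paper's designs.

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median allowing negative weights (sign-coupling sketch).

    A negative weight is applied by negating the sample and using the
    weight's magnitude; the output is then the smallest sample value at
    which the cumulative weight reaches half the total.
    """
    x = np.asarray(x, float)
    w = np.asarray(w, float)
    x = np.where(w < 0, -x, x)       # sign-coupling for negative weights
    w = np.abs(w)
    order = np.argsort(x)
    x, w = x[order], w[order]
    cum = np.cumsum(w)
    return x[np.searchsorted(cum, 0.5 * cum[-1])]

# A centre-heavy WM smoother rejects an impulse that a linear FIR filter
# with the same weights would smear across the window.
out = weighted_median([1.0, 1.0, 100.0, 1.0, 1.0], [1, 2, 3, 2, 1])  # -> 1.0
```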
Westinghouse advanced particle filter system
Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.
1994-10-01
Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial, power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper updates the assessment of the Westinghouse hot gas filter design based on ongoing testing and analysis. Results are summarized from recent computational fluid dynamics modeling of the plenum flow during back pulse, analysis of candle stressing under cleaning and process transient conditions and testing and analysis to evaluate potential flow induced candle vibration.
Buyel, Johannes F; Gruchow, Hannah M; Fischer, Rainer
2015-01-01
The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m(-2) when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m(-2) with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037
Distance estimation using RSSI and particle filter.
Svečko, Janja; Malajner, Marko; Gleich, Dušan
2015-03-01
This paper presents a particle filter algorithm for distance estimation using multiple antennas on the receiver's side and only one transmitter, where a received signal strength indicator (RSSI) of radio frequency was used. Two different placements of antennas were considered (parallel and circular). The physical layer of IEEE standard 802.15.4 was used for communication between transmitter and receiver. The distance was estimated as the hidden state of a stochastic system, and therefore a particle filter was implemented. The RSSI acquisitions were used for the computation of importance weights within the particle filter algorithm. The weighted particles were re-sampled in order to ensure proper distribution and density. Log-normal and ground reflection propagation models were used for the modeling of a prior distribution within a Bayesian inference. PMID:25457044
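The idea can be sketched with a single-antenna toy version (not the authors' algorithm): distance is the hidden state of a random-walk model, and each RSSI sample weights the particles through a log-distance path loss model rssi = P0 - 10*n*log10(d) plus Gaussian noise in dB. The reference power P0, path-loss exponent, noise level, and motion noise below are illustrative assumptions.

```python
import numpy as np

P0, PLE, SIGMA = -40.0, 2.0, 2.0     # dBm at 1 m, path-loss exponent, dB noise

def rssi_model(d):
    return P0 - 10.0 * PLE * np.log10(d)

rng = np.random.default_rng(7)
true_d = 4.0
n_p = 5000
particles = rng.uniform(0.5, 20.0, n_p)   # prior over distance [m]
weights = np.full(n_p, 1.0 / n_p)

for _ in range(50):                        # 50 simulated RSSI acquisitions
    z = rssi_model(true_d) + rng.normal(0.0, SIGMA)
    particles = np.abs(particles + rng.normal(0.0, 0.05, n_p))  # random walk
    weights *= np.exp(-0.5 * ((z - rssi_model(particles)) / SIGMA) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_p / 2:   # resample when ESS degenerates
        idx = rng.choice(n_p, n_p, p=weights)
        particles, weights = particles[idx], np.full(n_p, 1.0 / n_p)

est = np.sum(weights * particles)          # posterior-mean distance estimate
```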
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective scale data assimilation applications. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
Optimal filtering of constant velocity torque data.
Murray, D A
1986-12-01
The purpose of this investigation was to implement an optimal filtering strategy for processing in vivo dynamometric data. The validity of employing commonly accepted analog smoothing methods was also appraised. An inert gravitational model was used to assess the filtering requirements of two Cybex II constant velocity dynamometers at 10 pre-set speeds with three selected loads. Speed settings were recorded as percentages of the servomechanism's maximum tachometer feedback voltage (10 to 100% Vfb max). Spectral analyses of unsmoothed torque and associated angular displacement curves, followed by optimized low-pass digital filtering, revealed the presence of two superimposed contaminating influences: a damped oscillation, representing successive sudden braking and releasing of the servomechanism control system; a relatively stationary oscillatory series, which was attributed to the Cybex motor. The optimal cutoff frequency for any data set was principally a positive function of % Vfb max. This association was represented for each machine by a different, but reliable, third order least-squares polynomial, which could be used to accurately predict the correct smoothing required for any speed setting. Unacceptable errors may be induced, especially when measuring peak torques, if data are inappropriately filtered. Over-smoothing disguises inertial artefacts. The use of Cybex recorder damping settings should be discouraged. Optimal filtering is a minimal requirement of valid data processing. PMID:3784873
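The paper's key practical result is that the optimal low-pass cutoff is a reliable third-order polynomial function of the speed setting (% Vfb max). The fit-and-predict step can be sketched as follows; the calibration pairs below are invented illustrative values, not the study's data.

```python
import numpy as np

# Illustrative calibration data: speed setting (% Vfb max) vs. the
# optimal low-pass cutoff (Hz) found for each setting. These numbers
# are assumptions for the sketch, not measurements from the paper.
speed = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], float)
cutoff = np.array([3.1, 3.9, 4.8, 5.9, 7.2, 8.6, 10.3, 12.1, 14.2, 16.5])

coeffs = np.polyfit(speed, cutoff, deg=3)   # third-order least-squares fit
predict = np.poly1d(coeffs)

fc = predict(55.0)   # cutoff frequency to use for a 55% Vfb max trial
```

Each dynamometer would get its own fitted polynomial, since the paper reports the association differs between machines.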
Optimal design of active EMC filters
NASA Astrophysics Data System (ADS)
Chand, B.; Kut, T.; Dickmann, S.
2013-07-01
A recent trend in the automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore, the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Computing," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.
Computationally efficient angles-only tracking with particle flow filters
NASA Astrophysics Data System (ADS)
Costa, Russell; Wettergren, Thomas A.
2015-05-01
Particle filters represent the current state of the art in nonlinear, non-Gaussian filtering. They are easy to implement and have been applied in numerous domains. That being said, particle filters can be impractical for problems with state dimensions greater than four if other problem-specific efficiencies cannot be identified. This "curse of dimensionality" makes particle filters a computationally burdensome approach, and the associated re-sampling makes parallel processing difficult. In the past several years an alternative to particle filters, dubbed particle flow, has emerged as a (potentially) much more efficient method for solving nonlinear, non-Gaussian problems. Particle flow filtering (unlike particle filtering) is a deterministic approach; however, its implementation entails solving an under-determined system of partial differential equations which has infinitely many potential solutions. In this work we apply the filters to angles-only target motion analysis problems in order to quantify the computational gains (if any) over standard particle filtering approaches. In particular we focus on the simplest form of particle flow filter, known as the exact particle flow filter. This form assumes a Gaussian prior and likelihood function of the unknown target states and is then linearized, as is standard practice for extended Kalman filters. We implement both particle filters and particle flows and perform numerous numerical experiments for comparison.
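The exact particle flow filter can be illustrated in the scalar linear-Gaussian case it assumes after linearization. This is a Python sketch with assumed values: particles drawn from the prior N(m0, P) are migrated through pseudo-time lambda in [0, 1] by dx/dlam = A(lam)*x + b(lam), using the Daum-Hwang exact Gaussian flow coefficients, and end up distributed as the Kalman posterior. Simple Euler integration is used here.

```python
import numpy as np

# Scalar linear-Gaussian problem (illustrative values):
# prior N(m0, P), observation z = H*x + v with v ~ N(0, R).
m0, P, H, R, z = 0.0, 1.0, 1.0, 1.0, 2.0

rng = np.random.default_rng(0)
x = rng.normal(m0, np.sqrt(P), 20_000)   # particles from the prior
n_steps = 1000
dlam = 1.0 / n_steps
for k in range(n_steps):
    lam = k * dlam
    # Exact-flow coefficients for the Gaussian case:
    #   A = -0.5 * P*H * (lam*H*P*H + R)^-1 * H
    #   b = (1 + 2*lam*A) * ((1 + lam*A) * P*H/R * z + A*m0)
    A = -0.5 * P * H / (lam * H * P * H + R) * H
    b = (1.0 + 2.0 * lam * A) * ((1.0 + lam * A) * P * H / R * z + A * m0)
    x += dlam * (A * x + b)              # Euler step of dx/dlam = A*x + b

# Kalman posterior for comparison.
K = P * H / (H * P * H + R)
post_mean = m0 + K * (z - H * m0)
post_var = (1.0 - K * H) * P
```

No resampling is needed: the flow moves every particle deterministically, which is the source of the claimed efficiency.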
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
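The flavor of the separable recursive implementation can be sketched with a first-order IIR smoother run forward and then backward (giving a symmetric exponential kernel), applied along rows and then columns. This is a generic recursive smoother for illustration, not the authors' optimal edge detection filter; the pole value a is an assumption. The point is the constant cost per pixel regardless of the effective filter width.

```python
import numpy as np

def smooth_1d(x, a):
    """Forward-backward first-order IIR smoothing (symmetric overall kernel)."""
    y = np.empty_like(x, dtype=float)
    acc = x[0]                      # causal (forward) pass
    for i, v in enumerate(x):
        acc = a * acc + (1.0 - a) * v
        y[i] = acc
    acc = y[-1]                     # anti-causal (backward) pass
    for i in range(len(x) - 1, -1, -1):
        acc = a * acc + (1.0 - a) * y[i]
        y[i] = acc
    return y

def smooth_2d(img, a=0.7):
    # Separable: filter every row, then every column of the result.
    rows = np.apply_along_axis(smooth_1d, 1, img, a)
    return np.apply_along_axis(smooth_1d, 0, rows, a)

impulse = np.zeros((21, 21))
impulse[10, 10] = 1.0
response = smooth_2d(impulse)       # 2-D impulse response, peaked at the centre
```

A derivative along one axis combined with smoothing along the other would give the edge detection operator; only the smoothing half is shown here.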
Detecting Separations of Moving Objects for Particle Filter
NASA Astrophysics Data System (ADS)
Takechi, Keisuke; Kurahashi, Wataru; Fukui, Shinji; Iwahori, Yuji
This paper treats the case in which a group of objects is tracked by a group of particles of the particle filter. When the object group separates, the particle filter may fail in tracking, or some objects may be left untracked by the filter. This paper proposes a new method for detecting separations of objects tracked by the particle filter. After the detection, a group of particles is rearranged to each object group so that all objects can be tracked by the particle filter. Results are demonstrated by experiments using real videos.
Particle filter-based track before detect algorithms
NASA Astrophysics Data System (ADS)
Boers, Yvo; Driessen, Hans
2004-01-01
In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique where tracks are produced directly on the basis of raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, which leads to better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and the detection aspects of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the solution is implemented by means of a particle filter.
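The detection idea can be illustrated with a toy 1-D sketch (assumed parameters, and a deliberately strong target for brevity): at each scan the mean of the *unnormalized* importance weights estimates the likelihood of the raw scan under the target-present hypothesis, so its running log-sum serves as the likelihood-ratio test statistic. This is an illustration of that principle, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(3)
n_p, sigma_z, amp = 2000, 1.0, 2.0   # particles, noise std, target amplitude
cells = np.arange(20.0)              # positions of the raw measurement cells

def scan_lr(z, pos):
    """Per-particle likelihood ratio of one raw scan: target at pos vs. noise only."""
    template = amp * np.exp(-0.5 * (cells[None, :] - pos[:, None]) ** 2)
    return np.exp(np.sum(z * template - 0.5 * template ** 2, axis=1) / sigma_z ** 2)

particles = rng.uniform(0.0, 20.0, n_p)   # prior: target position unknown
true_pos, log_lr = 8.0, 0.0
for _ in range(10):
    true_pos += 0.2                        # target drifts slowly
    z = rng.normal(0.0, sigma_z, cells.size)
    z += amp * np.exp(-0.5 * (cells - true_pos) ** 2)    # target is present
    particles += 0.2 + rng.normal(0.0, 0.3, n_p)         # motion model
    w = scan_lr(z, particles)              # unnormalized importance weights
    log_lr += np.log(w.mean())             # accumulate the LR test statistic
    idx = rng.choice(n_p, n_p, p=w / w.sum())            # resample
    particles = particles[idx]
# log_lr > 0 favors "target present" over "noise only".
```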
Optimal digital filtering for tremor suppression.
Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R
2000-05-01
Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com. PMID:10851810
GNSS data filtering optimization for ionospheric observation
NASA Astrophysics Data System (ADS)
D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.
2015-12-01
In the last years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS data, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, usually, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under or overestimate the actual impact of multipath due to local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim to tackle the non-ionospheric effects induced by multipath at ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aiming at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained by the GNSS signal in a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. 
The results of the exercise show that, in the considered case of a noisy site under quiet ionospheric conditions, the SOLIDIFY optimization maximizes the quality, instead of the quantity, of the data.
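The general idea behind a quality-parameter filter, as opposed to a blanket elevation cut, can be sketched as follows. This is an illustrative sketch only, not the SOLIDIFY technique itself: samples whose signal-quality parameter (here a synthetic C/N0 in dB-Hz, an assumed quantity) is a robust statistical outlier relative to the station's own distribution are flagged, using the median and the median absolute deviation (MAD).

```python
import numpy as np

def mad_outliers(values, k=3.0):
    """Flag robust outliers beyond k robust standard deviations."""
    med = np.median(values)
    mad = 1.4826 * np.median(np.abs(values - med))  # ~sigma for Gaussian data
    return np.abs(values - med) > k * mad

rng = np.random.default_rng(5)
cn0 = rng.normal(45.0, 1.5, 1000)    # synthetic nominal C/N0 samples
cn0[:20] -= 15.0                      # synthetic multipath-degraded samples
keep = ~mad_outliers(cn0)             # samples retained for scintillation analysis
```

Unlike a fixed elevation mask, such a data-driven cut keeps clean low-elevation samples and rejects degraded ones at any elevation.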
Program Computes SLM Inputs To Implement Optimal Filters
NASA Technical Reports Server (NTRS)
Barton, R. Shane; Juday, Richard D.; Alvarez, Jennifer L.
1995-01-01
Minimum Euclidean Distance Optimal Filter (MEDOF) program generates filters for use in optical correlators. Analytically optimizes filters on arbitrary spatial light modulators (SLMs) of such types as coupled, binary, fully complex, and fractional-2pi-phase. Written in C language.
Optimal edge filters explain human blur detection.
McIlhagga, William H; May, Keith A
2012-01-01
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N1, and N3+) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222
Implementation and performance of FPGA-accelerated particle flow filter
NASA Astrophysics Data System (ADS)
Charalampidis, Dimitrios; Jilkov, Vesselin P.; Wu, Jiande
2015-09-01
The particle flow filters, proposed by Daum & Hwang, provide a powerful means for density-based nonlinear filtering but their computation is intense and may be prohibitive for real-time applications. This paper proposes a design for superfast implementation of the exact particle flow filter using a field-programmable gate array (FPGA) as a parallel environment to speedup computation. Simulation results from a nonlinear filtering example are presented to demonstrate that using FPGA can dramatically accelerate particle flow filters through parallelization at the expense of a tolerable loss in accuracy as compared to nonparallel implementation.
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.
2012-12-01
Hydrologic data assimilation techniques provide a means to improve river discharge forecasts through updating hydrologic model states and correcting the atmospheric forcing data via optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation techniques used and the amount of uncertainty in the data sets. To investigate these effects, we comparatively evaluate three data assimilation techniques, the ensemble Kalman filter (EnKF), the particle filter (PF), and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km2, in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique for hydrologic applications.
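The first of the three techniques, the stochastic (perturbed-observation) EnKF analysis step, can be sketched as follows. The toy state, ensemble size, and observation operator are illustrative assumptions, not the SAC-SMA configuration.

```python
import numpy as np

def enkf_analysis(X, y, H, r_var, rng):
    """Stochastic EnKF analysis.

    X: (n_ens, n_state) forecast ensemble; y: observation vector;
    H: observation operator; r_var: observation error variance.
    """
    n_ens = X.shape[0]
    Xp = X - X.mean(axis=0)                      # ensemble perturbations
    Pf = Xp.T @ Xp / (n_ens - 1)                 # sample forecast covariance
    S = H @ Pf @ H.T + r_var * np.eye(len(y))    # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)              # Kalman gain
    # Perturbed observations, one realization per ensemble member.
    Y = y + rng.normal(0.0, np.sqrt(r_var), (n_ens, len(y)))
    return X + (Y - X @ H.T) @ K.T               # analysis ensemble

rng = np.random.default_rng(2)
X_f = rng.normal(0.0, 1.0, (500, 3))             # toy forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                  # observe first state component
X_a = enkf_analysis(X_f, np.array([1.0]), H, r_var=0.25, rng=rng)
```

The PF would instead reweight members by the observation likelihood, and VAR would minimize a cost function; the EnKF update above assumes Gaussian errors, which is the limitation motivating the comparison.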
Metal finishing wastewater pressure filter optimization
Norford, S.W.; Diener, G.A.; Martin, H.L.
1992-01-01
The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated from the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal; 85% of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.
Human-Manipulator Interface Using Particle Filter
Wang, Xueqian
2014-01-01
This paper presents a human-robot interface system that combines a particle filter (PF) with adaptive multispace transformation (AMT) to track the pose of the human hand for controlling a robot manipulator. The system employs a 3D camera (Kinect) to determine the orientation and translation of the human hand. The Camshift algorithm is used to track the hand, and a PF is used to estimate its translation. Although the PF estimates the translation, the translation error grows within a short period of time when the sensors fail to detect the hand motion, so a methodology for correcting the translation error is required. Moreover, because of perceptive and motor limitations, it is hard for a human operator to carry out high-precision operations. This paper therefore proposes an adaptive multispace transformation (AMT) method to help the operator improve the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430
Multispectral image denoising with optimized vector bilateral filter.
Peng, Honghong; Rao, Raghuveer; Dianat, Sohail A
2014-01-01
Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios (SNRs). Typical vector bilateral filtering described in the literature does not use parameters satisfying optimality criteria. We introduce an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing Stein's unbiased risk estimate of this nonlinear estimator. Along the way, we provide a plausibility argument, through an analytical example, as to why vector bilateral filtering outperforms bandwise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared with several other approaches. PMID:24184727
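As a point of reference for the bandwise baseline the abstract mentions, the scalar bilateral filter can be sketched in a few lines: each output sample is a weighted average of its neighbors, with weights that decay both with spatial distance and with intensity difference, so edges are preserved. This is a minimal 1-D illustration with assumed parameters, not the authors' optimized vector filter.

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.3, radius=5):
    """Bandwise (scalar) bilateral filter on a 1-D signal: weights fall
    off in both space (sigma_s) and range/intensity (sigma_r)."""
    n = signal.size
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        nb = signal[lo:hi]
        d = np.arange(lo, hi) - i
        w = (np.exp(-d ** 2 / (2 * sigma_s ** 2))
             * np.exp(-(nb - signal[i]) ** 2 / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * nb) / np.sum(w)
    return out

rng = np.random.default_rng(8)
step = np.where(np.arange(100) < 50, 0.0, 1.0)   # ideal edge
noisy = step + 0.1 * rng.normal(size=100)
smoothed = bilateral_1d(noisy)
```

On this toy signal the noise variance on the flat regions drops while the step edge stays sharp, which is the tradeoff the optimized vector filter tunes via its parameters.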
Motion-compensated speckle tracking via particle filtering
NASA Astrophysics Data System (ADS)
Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu
2015-07-01
Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption is still a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle throughout a region of interest (ROI) superposed with global motion. The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
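The traversal (exhaustive-search) SAD matching that the particle-filter approach aims to replace can be sketched as follows; block size, search radius and the synthetic frames are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sad_block_match(ref, cur, top, left, bsize=8, search=4):
    """Find the displacement of the block at (top, left) in `ref` that
    minimizes the sum of absolute differences (SAD) in `cur`, by
    exhaustive (traversal) search over a +/- `search` pixel window."""
    block = ref[top:top + bsize, left:left + bsize]
    best, best_dxdy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > cur.shape[0] or x + bsize > cur.shape[1]:
                continue
            sad = np.abs(block - cur[y:y + bsize, x:x + bsize]).sum()
            if sad < best:
                best, best_dxdy = sad, (dy, dx)
    return best_dxdy

# Synthetic check: shift a random "speckle" frame by (2, -1) and recover it.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(2, -1), axis=(0, 1))
motion = sad_block_match(frame1, frame2, top=10, left=10)
```

The nested search loop is what makes this baseline expensive; a particle filter replaces it by evaluating SAD only at sampled candidate displacements.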
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents how well the cat fits the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats, and CSO keeps the best solution until the end of the iterations. The results of the proposed CSO-based approach have been compared with those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven superior to that obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters. PMID:23958491
Application of particle filtering algorithm in image reconstruction of EMT
NASA Astrophysics Data System (ADS)
Wang, Jingwen; Wang, Xu
2015-07-01
To improve the image quality of electromagnetic tomography (EMT), a new image reconstruction method of EMT based on a particle filtering algorithm is presented. Firstly, the principle of image reconstruction of EMT is analyzed. Then the search process for the optimal solution for image reconstruction of EMT is described as a system state estimation process, and the state space model is established. Secondly, to obtain the minimum variance estimation of image reconstruction, the optimal weights of random samples obtained from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results have shown that the average image error of reconstruction results obtained by the method mentioned in this paper is 42.61%, and the average correlation coefficient with the original image is 0.8706, which are much better than corresponding indicators obtained by LBP, Landweber and Kalman Filter algorithms. So, this EMT image reconstruction method has high efficiency and accuracy, and provides a new method and means for EMT research.
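The weighting step described above, in which the optimal weights of random samples from the state space are computed from measured information and combined into a minimum-variance estimate, can be sketched on a toy scalar state; the uniform state space, Gaussian likelihood and noise level are illustrative assumptions, not the paper's EMT forward model.

```python
import numpy as np

# Particles are candidate states drawn from the state space; weights come
# from the likelihood of the measured data; the minimum-variance estimate
# is the weighted (posterior-mean) combination of the particles.
rng = np.random.default_rng(1)
true_state = 0.7
particles = rng.uniform(0.0, 1.0, size=500)        # samples from the state space
sigma = 0.05                                       # assumed measurement noise
measurement = true_state + rng.normal(0.0, sigma)  # noisy sensor datum

log_w = -0.5 * ((measurement - particles) / sigma) ** 2  # Gaussian log-likelihood
w = np.exp(log_w - log_w.max())                    # subtract max for stability
w /= w.sum()                                       # normalized importance weights
estimate = np.sum(w * particles)                   # minimum-variance estimate
```

In the EMT setting the scalar `measurement` would be the vector of coil measurements and the likelihood would involve the electromagnetic forward model, but the weighting and averaging steps have the same shape.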
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. Its main parameter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
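The core Goldstein step, weighting each patch's spectrum by its magnitude raised to the power alpha, can be sketched as follows; the patch size, noise level and lack of spectral smoothing are simplifying assumptions for illustration.

```python
import numpy as np

def goldstein_patch(patch, alpha=0.5):
    """One patch of the Goldstein interferogram filter: weight the
    spectrum by its magnitude raised to the power alpha.
    alpha = 0 leaves the patch unchanged; larger alpha filters harder."""
    Z = np.fft.fft2(patch)
    S = np.abs(Z)
    S /= S.max() if S.max() > 0 else 1.0  # normalize so alpha acts as a pure exponent
    return np.fft.ifft2(Z * S ** alpha)

rng = np.random.default_rng(2)
y, x = np.mgrid[0:32, 0:32]
clean = np.exp(1j * 0.3 * x)              # smooth interferometric phase ramp
noise = 0.5 * (rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32)))
noisy = clean + noise
filtered = goldstein_patch(noisy, alpha=1.0)

err_noisy = np.abs(np.angle(noisy * clean.conj())).mean()
err_filt = np.abs(np.angle(filtered * clean.conj())).mean()
```

Because the ramp concentrates its energy in a few spectral bins while the noise spreads across all of them, the magnitude weighting suppresses the noise and the mean phase error drops, which is exactly the behavior the choice of alpha trades off against over-filtering.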
Optimal design of AC filter circuits in HVDC converter stations
Saied, M.M.; Khader, S.A.
1995-12-31
This paper investigates the reactive power as well as the harmonic conditions on both the valve and the AC-network sides of a HVDC converter station. The effect of the AC filter circuits is accurately modeled. The program is then augmented by adding an optimization routine. It can identify the optimal filter configuration, yielding the minimum current distortion factor at the AC network terminals for a prespecified fundamental reactive power to be provided by the filter. Several parameter studies were also conducted to illustrate the effect of accidental or intentional deletion of one of the filter branches.
Optimal PHD filter for single-target detection and tracking
NASA Astrophysics Data System (ADS)
Mahler, Ronald
2007-09-01
The PHD filter has attracted much international interest since its introduction in 2000. It is based on two approximations. First, it is a first-order approximation of the multitarget Bayes filter. Second, to achieve closed-form formulas for the Bayes data-update step, the predicted multitarget probability distribution must be assumed Poisson. In this paper we show how to derive an optimal PHD (OPHD) filter, given that the target number does not exceed one. (That is, we restrict ourselves to the single-target detection and tracking problem.) We further show that, assuming no more than a single target, the following are identical: (1) the multitarget Bayes filter; (2) the OPHD filter; (3) the CPHD filter; and (4) the multi-hypothesis correlation (MHC) filter. We also note that all of these are generalizations of the integrated probabilistic data association (IPDA) filter of Musicki, Evans, and Stankovic.
Optimal Filter Systems for Photometric Redshift Estimation
NASA Astrophysics Data System (ADS)
Benítez, N.; Moles, M.; Aguerri, J. A. L.; Alfaro, E.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; Fernández-Soto, A.; González Delgado, R. M.; Infante, L.; Márquez, I.; Martínez, V. J.; Masegosa, J.; Del Olmo, A.; Perea, J.; Prada, F.; Quintana, J. M.; Sánchez, S. F.
2009-02-01
In the coming years, several cosmological surveys will rely on imaging data to estimate the redshift of galaxies, using traditional filter systems with 4-5 optical broad bands; narrower filters improve the spectral resolution, but strongly reduce the total system throughput. We explore how photometric redshift performance depends on the number of filters n_f, characterizing the survey depth by the fraction of galaxies with unambiguous redshift estimates. For a combination of total exposure time and telescope imaging area of 270 hr m², 4-5 filter systems perform significantly worse, both in completeness depth and precision, than systems with n_f ≳ 8 filters. Our results suggest that for low n_f the color-redshift degeneracies overwhelm the improvements in photometric depth, and that even at higher n_f the effective photometric redshift depth decreases much more slowly with filter width than naively expected from the reduction in the signal-to-noise ratio. Adding near-IR observations improves the performance of low-n_f systems, but still the system which maximizes the photometric redshift completeness is formed by nine filters with logarithmically increasing bandwidth (constant resolution) and half-band overlap, reaching ~0.7 mag deeper, with 10% better redshift precision, than 4-5 filter systems. A system with 20 constant-width, nonoverlapping filters reaches only ~0.1 mag shallower than 4-5 filter systems, but has a precision almost three times better, δz = 0.014(1 + z) versus δz = 0.042(1 + z). We briefly discuss a practical implementation of such a photometric system: the ALHAMBRA Survey.
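A filter system with logarithmically increasing bandwidth (constant resolution) and half-band overlap, as described above, can be generated in a few lines. The wavelength range and n_f = 9 here are illustrative assumptions, not the ALHAMBRA design values.

```python
import numpy as np

def log_filter_system(lam_min, lam_max, n_filters):
    """Central wavelengths and bandwidths for a constant-resolution filter
    system: bandwidth is a fixed fraction of the central wavelength, and
    adjacent filters are spaced by half a bandwidth (half-band overlap)."""
    centers = np.geomspace(lam_min, lam_max, n_filters)  # log-spaced centers
    resolution = centers[1] / centers[0] - 1.0           # constant fractional step
    widths = 2.0 * resolution * centers                  # spacing = width / 2
    return centers, widths

# Assumed optical range in Angstroms, 9 filters as in the abstract.
centers, widths = log_filter_system(3500.0, 9700.0, 9)
```

With this construction the ratio width/center is the same for every filter (constant spectral resolution), and each filter's center sits half a bandwidth from its neighbor, giving the half-band overlap.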
Optimal filter bandwidth for pulse oximetry
NASA Astrophysics Data System (ADS)
Stuban, Norbert; Niwayama, Masatsugu
2012-10-01
Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
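The central finding, that a low-pass corner at the fundamental preserves the pulse information while removing the harmonics, can be illustrated with a toy pulse-like waveform; the heart rate, harmonic amplitudes and first-order filter are illustrative assumptions, not the authors' hardware.

```python
import numpy as np

fs = 100.0                       # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)
f0 = 1.2                         # fundamental ~72 bpm
# Pulse-like signal: fundamental plus decaying harmonics.
signal = (np.cos(2 * np.pi * f0 * t)
          + 0.5 * np.cos(2 * np.pi * 2 * f0 * t)
          + 0.25 * np.cos(2 * np.pi * 3 * f0 * t))

def first_order_lowpass(x, fc, fs):
    """Discrete first-order low-pass (exponential smoothing) with corner fc."""
    a = 1.0 - np.exp(-2 * np.pi * fc / fs)
    y = np.zeros_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += a * (v - acc)
        y[i] = acc
    return y

# Corner frequency set at the fundamental, the lowest value the study allows.
filtered = first_order_lowpass(signal, fc=f0, fs=fs)
spec = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f0))]
h3 = spec[np.argmin(np.abs(freqs - 3 * f0))]
```

After filtering, the fundamental dominates the residual harmonics by a much larger margin than in the input, consistent with the claim that the harmonics can be discarded without degrading the R-ratio measurement.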
Symmetric Phase-Only Filtering in Particle-Image Velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2008-01-01
Symmetrical phase-only filtering (SPOF) can be exploited to obtain substantial improvements in the results of data processing in particle-image velocimetry (PIV). In comparison with traditional PIV data processing, SPOF PIV data processing yields narrower and larger amplitude correlation peaks, thereby providing more-accurate velocity estimates. The higher signal-to-noise ratios associated with the higher amplitude correlation peaks afford greater robustness and reliability of processing. SPOF also affords superior performance in the presence of surface flare light and/or background light. SPOF algorithms can readily be incorporated into pre-existing algorithms used to process digitized image data in PIV, without significantly increasing processing times. A summary of PIV and traditional PIV data processing is prerequisite to a meaningful description of SPOF PIV processing. In PIV, a pulsed laser is used to illuminate a substantially planar region of a flowing fluid in which particles are entrained. An electronic camera records digital images of the particles at two instants of time. The components of velocity of the fluid in the illuminated plane can be obtained by determining the displacements of particles between the two illumination pulses. The objective in PIV data processing is to compute the particle displacements from the digital image data. In traditional PIV data processing, to which the present innovation applies, the two images are divided into a grid of subregions and the displacements determined from cross-correlations between the corresponding sub-regions in the first and second images. The cross-correlation process begins with the calculation of the Fourier transforms (or fast Fourier transforms) of the subregion portions of the images. The Fourier transforms from the corresponding subregions are multiplied, and this product is inverse Fourier transformed, yielding the cross-correlation intensity distribution. 
The average displacement of the particles across a subregion results in a displacement of the correlation peak from the center of the correlation plane. The velocity is then computed from the displacement of the correlation peak and the time between the recording of the two images. The process as described thus far is performed for all the subregions. The resulting set of velocities in grid cells amounts to a velocity vector map of the flow field recorded on the image plane. In traditional PIV processing, surface flare light and bright background light give rise to a large, broad correlation peak, at the center of the correlation plane, that can overwhelm the true particle- displacement correlation peak. This has made it necessary to resort to tedious image-masking and background-subtraction procedures to recover the relatively small amplitude particle-displacement correlation peak. SPOF is a variant of phase-only filtering (POF), which, in turn, is a variant of matched spatial filtering (MSF). In MSF, one projects a first image (denoted the input image) onto a second image (denoted the filter) as part of a computation to determine how much and what part of the filter is present in the input image. MSF is equivalent to cross-correlation. In POF, the frequency-domain content of the MSF filter is modified to produce a unitamplitude (phase-only) object. POF is implemented by normalizing the Fourier transform of the filter by its magnitude. The advantage of POFs is that they yield correlation peaks that are sharper and have higher signal-to-noise ratios than those obtained through traditional MSF. In the SPOF, these benefits of POF can be extended to PIV data processing. The SPOF yields even better performance than the POF approach, which is uniquely applicable to PIV type image data. In SPOF as now applied to PIV data processing, a subregion of the first image is treated as the input image and the corresponding subregion of the second image is treated as the filter. 
The Fourier transforms from the first- and second-image subregions are normalized by the square roots of their respective magnitudes. This scheme yields optimal performance because the amounts of normalization applied to the spatial-frequency contents of the input and filter scenes are just enough to enhance their high-spatial-frequency content while reducing spurious low-spatial-frequency content. As a result, in SPOF PIV processing, particle-displacement correlation peaks can readily be detected above spurious background peaks, without the need for masking or background subtraction.
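The SPOF cross-correlation described above can be sketched directly: normalize both subregion spectra by the square roots of their magnitudes, multiply, and inverse transform. The synthetic subregions and the pure integer shift are illustrative assumptions.

```python
import numpy as np

def spof_correlate(a, b, eps=1e-12):
    """Cross-correlate two PIV subregions with symmetric phase-only
    filtering: each spectrum is divided by the square root of its own
    magnitude before the product, sharpening the displacement peak."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    A /= np.sqrt(np.abs(A)) + eps
    B /= np.sqrt(np.abs(B)) + eps
    corr = np.fft.ifft2(A * np.conj(B)).real
    return np.fft.fftshift(corr)       # put zero displacement at the center

rng = np.random.default_rng(3)
img1 = rng.random((64, 64))                        # first-exposure subregion
img2 = np.roll(img1, shift=(5, -3), axis=(0, 1))   # known particle displacement
corr = spof_correlate(img2, img1)
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = peak[0] - 32, peak[1] - 32
```

The displacement of the correlation peak from the center of the correlation plane recovers the imposed shift, exactly the quantity converted to velocity in PIV processing.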
Simultaneous Eye Tracking and Blink Detection with Interactive Particle Filters
NASA Astrophysics Data System (ADS)
Wu, Junwen; Trivedi, Mohan M.
2007-12-01
We present a system that simultaneously tracks eyes and detects eye blinks. Two interactive particle filters are used for this purpose, one for the closed eyes and the other one for the open eyes. Each particle filter is used to track the eye locations as well as the scales of the eye subjects. The set of particles that gives higher confidence is defined as the primary set and the other one is defined as the secondary set. The eye location is estimated by the primary particle filter, and whether the eye status is open or closed is also decided by the label of the primary particle filter. When a new frame comes, the secondary particle filter is reinitialized according to the estimates from the primary particle filter. We use autoregression models for describing the state transition and a classification-based model for measuring the observation. Tensor subspace analysis is used for feature extraction which is followed by a logistic regression model to give the posterior estimation. The performance is carefully evaluated from two aspects: the blink detection rate and the tracking accuracy. The blink detection rate is evaluated using videos from varying scenarios, and the tracking accuracy is given by comparing with the benchmark data obtained using the Vicon motion capturing system. The setup for obtaining benchmark data for tracking accuracy evaluation is presented and experimental results are shown. Extensive experimental evaluations validate the capability of the algorithm.
Ballistic target tracking algorithm based on improved particle filtering
NASA Astrophysics Data System (ADS)
Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang
2015-10-01
Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in a nonlinear, non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs better when estimating the states and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method can effectively mitigate the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, obtaining better particles for the estimation. Meanwhile, the CMPF improves the state estimation precision and convergence speed compared with the EKF, the UKF and the ordinary particle filter.
Method of concurrently filtering particles and collecting gases
Mitchell, Mark A; Meike, Annemarie; Anderson, Brian L
2015-04-28
A system for concurrently filtering particles and collecting gases. Materials can be added (e.g., via coating the ceramic substrate, use of loose powder(s), or other means) to a HEPA filter (ceramic, metal, or otherwise) to collect gases (e.g., radioactive gases such as iodine). The gases could be radioactive, hazardous, or valuable.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by particle swarm optimization. The particle filter can effectively perform lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimum of the system status rather than the global optimum. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since particle swarm optimization is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-searching behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
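The global refinement stage described above can be sketched with a minimal particle swarm optimization loop. The 1-D multimodal objective here is a stand-in for the paper's lane model, and all swarm parameters are assumed values.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=4):
    """Minimal 1-D particle swarm optimization: each particle is pulled
    toward its personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest

# Multimodal stand-in objective: local minima near k*pi, global minimum at 0.
objective = lambda x: 0.1 * x ** 2 + np.sin(x) ** 2
best = pso_minimize(objective, bounds=(-4.0, 4.0))
```

Because every particle shares the global best, the swarm escapes the local minima that a purely local tracker (like the particle filter alone) would settle into, which is the division of labor the paper exploits.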
Contrasting Particle Clogging in Soils and Granular Media Filters
NASA Astrophysics Data System (ADS)
Mays, D. C.
2005-12-01
Deposition of colloidal particles leads to permeability reduction (or clogging) in the soil geomembrane, which reduces fluxes, alters flow patterns, and limits both colloid-associated contaminant transport and delivery of colloidal reactants for purposes of remediation. Comparison of experimental results for soils and granular media filters reveals qualitatively different clogging phenomena with regard to (1) particle stabilization, (2) fluid velocity, and (3) the fractal dimension of particle deposits. These differences have important implications for contaminant hydrology, because the classical approach for understanding particles in natural environments is taken from the filtration literature, which is based on clean granular media. Accordingly, many of the relevant experiments have been performed with granular filters using media such as glass beads or quartz sand. In such filters, clogging is associated with destabilized particles, slower fluid velocity and deposits with smaller fractal dimensions. In contrast, in soils clogging is associated with stabilized particles, faster fluid velocity and deposits with larger fractal dimensions. With regard to these variables, soils are opposite to filters but identical to cake filtration. Numerous examples will be presented from the filtration literature and the soil science literature to illustrate these differing viewpoints. This analysis demonstrates that experiments on clean granular media filters should not be expected to predict particle clogging in soils, sandstones or other natural porous materials containing more than a few percent fines.
Particle filter for long range radar in RUV
NASA Astrophysics Data System (ADS)
Romeo, Kevin; Willett, Peter; Bar-Shalom, Yaakov
2014-06-01
In this paper we present an approach for tracking with a high-bandwidth active radar in long range scenarios with 3-D measurements in r-u-v coordinates. The 3-D low-process-noise scenarios considered are much more difficult than the ones we have previously investigated where measurements were in 2-D (i.e., polar coordinates). We show that in these 3-D scenarios the extended Kalman filter and its variants are not desirable as they suffer from either major consistency problems or degraded range accuracy, and most flavors of particle filter suffer from a loss of diversity among particles after resampling. This leads to sample impoverishment and divergence of the filter. In the scenarios studied, this loss of diversity can be attributed to the very low process noise. However, a regularized particle filter is shown to avoid this diversity problem while producing consistent results. The regularization is accomplished using a modified version of the Epanechnikov kernel.
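The regularization step that restores diversity after resampling can be sketched as follows: each resampled particle is jittered with noise drawn from a scaled Epanechnikov kernel, so duplicated particles separate again. The scalar state, bandwidth and rejection sampler are illustrative assumptions, not the authors' tracker.

```python
import numpy as np

def epanechnikov_sample(n, rng):
    """Draw n samples from the Epanechnikov kernel K(u) = 0.75 (1 - u^2)
    on [-1, 1] by rejection sampling."""
    out = np.empty(0)
    while out.size < n:
        u = rng.uniform(-1, 1, 2 * n)
        keep = rng.random(2 * n) < (1 - u ** 2)
        out = np.concatenate([out, u[keep]])
    return out[:n]

def regularized_resample(particles, weights, bandwidth, rng):
    """Multinomial resampling followed by Epanechnikov jitter, the core of
    a regularized particle filter: duplicates regain diversity."""
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx] + bandwidth * epanechnikov_sample(particles.size, rng)

rng = np.random.default_rng(5)
particles = np.zeros(1000)             # fully degenerate cloud: all identical
weights = np.full(1000, 1.0 / 1000)
new = regularized_resample(particles, weights, bandwidth=0.2, rng=rng)
```

Plain resampling of this degenerate cloud would return 1000 identical copies; with the kernel jitter the particles spread over a bounded neighborhood (here within the bandwidth of 0.2), which is what keeps the filter alive under very low process noise.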
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
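The combination step of an information filter can be sketched on a toy 2-D state: each sub-model contributes an information matrix (inverse covariance) and information vector, informations add, and the fused estimate is recovered at the end. The five synthetic "models" and their covariances are illustrative assumptions, not the geomagnetic field models of the study.

```python
import numpy as np

rng = np.random.default_rng(6)
truth = np.array([2.0, -1.0])          # unknown state the models estimate

models = []
for _ in range(5):                     # five independent sub-models
    cov = np.diag(rng.uniform(0.1, 0.5, size=2))   # each model's error covariance
    est = truth + rng.multivariate_normal(np.zeros(2), cov)
    info = np.linalg.inv(cov)          # information matrix Lambda_i
    models.append((info, info @ est))  # store (Lambda_i, Lambda_i @ x_i)

# Information-filter fusion: informations and information vectors simply add.
Lambda = sum(m[0] for m in models)
eta = sum(m[1] for m in models)
fused = np.linalg.solve(Lambda, eta)   # combined estimate (Lambda^-1 eta)
```

Working with information matrices makes combining many independent estimates a pure accumulation, which is why the study's five five-year models could be merged recursively.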
Optimization of 2D median filtering algorithm for VLIW architecture
NASA Astrophysics Data System (ADS)
Choo, Chang Y.; Tang, Ming
1999-12-01
Recently, several commercial DSP processors with VLIW (Very Long Instruction Word) architecture were introduced. VLIW architectures offer high performance over a wide range of multimedia applications that require parallel processing. In this paper, we implement an efficient 2D median filter for a VLIW architecture, particularly the Texas Instruments C62x VLIW architecture. The median filter is widely used for filtering impulse noise while preserving edges in still images and video. Efficient median filtering requires fast sorting. The sorting algorithms were optimized using software pipelining and loop unrolling to maximize the use of the available functional units while meeting the data dependency constraints. The paper describes and lists the optimized source code for the 3 × 3 median filter using an enhanced selection sort algorithm.
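The selection-sort trick behind a fast 3 × 3 median can be sketched in plain Python: to find the median of nine values, only the first five selection passes are needed, since the fifth-smallest value is the median. This is an illustrative sketch, not the paper's pipelined C62x assembly.

```python
import numpy as np

def median9(window):
    """Median of 9 values via a partial selection sort: five selection
    passes are enough to place the 5th-smallest (the median) value."""
    v = list(window)
    for i in range(5):                     # only (9 + 1) // 2 passes needed
        m = i + int(np.argmin(v[i:]))
        v[i], v[m] = v[m], v[i]
    return v[4]

def median_filter_3x3(img):
    """2-D median filter over interior pixels; borders left unchanged."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = median9(img[y - 1:y + 2, x - 1:x + 2].ravel())
    return out

# Impulse ("salt") noise on a constant image is removed completely.
img = np.full((8, 8), 10.0)
img[3, 4] = 255.0
clean = median_filter_3x3(img)
```

Stopping the selection sort at the median index roughly halves the comparison count versus a full sort, and the fixed-length inner loops are what software pipelining and loop unrolling then exploit on the VLIW functional units.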
Particle-filter-based phase estimation in digital holographic interferometry.
Waghmare, Rahul G; Ram Sukumar, P; Subrahmanyam, G R K S; Singh, Rakesh Kumar; Mishra, Deepak
2016-03-01
In this paper, we propose a particle-filter-based technique for the analysis of a reconstructed interference field. The particle filter and its variants are well proven as tracking filters in non-Gaussian and nonlinear situations. We propose to apply the particle filter for direct estimation of phase and its derivatives from digital holographic interferometric fringes via a signal-tracking approach on a Taylor series expanded state model and a polar-to-Cartesian-conversion-based measurement model. Computation of sample weights through a non-Gaussian likelihood forms the major contribution of the proposed particle-filter-based approach compared with the existing unscented-Kalman-filter-based approach. It is observed that the proposed approach is highly robust to noise and outperforms the state of the art, especially at very low signal-to-noise ratios (in the range of -5 to 20 dB). The proposed approach, to the best of our knowledge, is the only method available for phase estimation from severely noisy fringe patterns even when the underlying phase pattern is rapidly varying and has a larger dynamic range. Simulation results and experimental data demonstrate that the proposed approach is a better choice for direct phase estimation. PMID:26974901
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
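As background for the sharpening idea, the classical Kaiser-Hamming sharpening polynomial p(H) = 3H² − 2H³ applied to a comb magnitude response illustrates how a polynomial in H flattens the passband droop; the decimation factor, passband edge and polynomial are illustrative, not the optimized compensated design of the paper.

```python
import numpy as np

M = 16                                    # assumed decimation factor
f = np.linspace(1e-6, 0.5, 4000)          # normalized frequency (avoid f = 0)

# Magnitude response of the M-tap comb (CIC) filter.
H = np.abs(np.sin(np.pi * M * f) / (M * np.sin(np.pi * f)))

# Classical Kaiser-Hamming sharpening: p(H) = 3H^2 - 2H^3.
H_sharp = 3 * H ** 2 - 2 * H ** 3

passband = f < 1.0 / (4 * M)              # assumed passband edge
droop = 1 - H[passband].min()             # passband droop of the plain comb
droop_sharp = 1 - H_sharp[passband].min() # droop after sharpening
```

Since p(1) = 1 and p'(1) = 0, values of H near unity are pushed closer to unity, so the passband droop shrinks; the paper's contribution is that adding a cheap compensator first lets a lower-degree polynomial meet the same specification.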
Implementation of the particle filter and the merging particle filter in parallel computing systems
NASA Astrophysics Data System (ADS)
Nakano, S.; Ueno, G.; Nakamura, K.; Higuchi, T.
2009-04-01
The particle filter (PF) is an ensemble-based algorithm which is applicable to general data assimilation problems with nonlinear dynamical system models. The PF is a rather simple algorithm which provides an approximation of a posterior probability density function by resampling a forecast ensemble. However, the PF requires a large ensemble size in order to achieve sufficient estimation accuracy, especially for high-dimensional models. This means that we need to run a simulation model an extraordinarily large number of times. Parallel computing offers one potential solution which could enable us to use a sufficiently large ensemble size. When we implement the PF in a parallel computing system, especially in a distributed computing system consisting of multiple nodes, one major problem is that the resampling procedure requires substantial network traffic between nodes. This network traffic can be crucial, especially when we use high-dimensional models. In this study, we assume that intra-node communication is much faster than inter-node communication. We then consider a bi-level scheme which reduces inter-node network traffic. In this scheme, resampling is performed locally in each node and inter-node communication is handled separately. An ensemble subset assigned to each node offers an approximation of a probability density function with a smaller ensemble size. If we assign a weight to each node according to the average of the likelihoods of the members of its ensemble subset, the whole ensemble also offers an approximation of the probability density function. Inter-node communication is determined by comparing the weights among the multiple nodes. We performed experiments in which this bi-level scheme was applied to the 40-dimensional Lorenz-96 model, and we discuss the efficiency of this scheme in comparison with the normal scheme.
The network traffic problem is also crucial when we use the merging particle filter (MPF), which allows us to save computational cost in comparison with the PF. We also consider a similar bi-level approach on the basis of the MPF and discuss its efficiency.
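The local-resampling half of the bi-level scheme described above can be sketched as follows. This is an illustrative Python reconstruction under stated assumptions (node weight equal to the subset's mean likelihood, local multinomial resampling); the function name and interfaces are hypothetical, and the inter-node exchange step driven by the node weights is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def bilevel_resample(particles, likelihoods, n_nodes):
    """Resample locally within each node's ensemble subset, then weight whole
    nodes by their mean likelihood; inter-node traffic then needs only the
    scalar node weights rather than full particle states."""
    subsets = np.array_split(particles, n_nodes)
    liks = np.array_split(likelihoods, n_nodes)
    node_weight = np.empty(n_nodes)
    resampled = []
    for i, (p, l) in enumerate(zip(subsets, liks)):
        w = l / l.sum()                              # local importance weights
        idx = rng.choice(len(p), size=len(p), p=w)   # resampling stays inside the node
        resampled.append(p[idx])
        node_weight[i] = l.mean()                    # node weight = mean subset likelihood
    node_weight /= node_weight.sum()
    return resampled, node_weight
```

A full implementation would compare the normalized node weights and exchange particles between over- and under-weighted nodes.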
COMPUTATIONS ON THE PERFORMANCE OF PARTICLE FILTERS AND ELECTRONIC AIR CLEANERS
The paper discusses computations on the performance of particle filters and electronic air cleaners (EACs). The collection efficiency of particle filters and EACs is calculable if certain factors can be assumed or calibrated. For fibrous particulate filters, measurement of colle...
NASA Astrophysics Data System (ADS)
Sambaer, Wannes; Zatloukal, Martin; Kimmer, Dusan
2013-04-01
A realistic SEM-image-based 3D filter model, considering the transition/free molecular flow regime, Brownian diffusion, aerodynamic slip, and particle-fiber and particle-particle interactions, together with a novel Euclidean-distance-map-based methodology for the pressure drop calculation, has been applied to a polyurethane nanofiber filter prepared via electrospinning in order to understand more deeply the effect of the particle-fiber friction coefficient on filter clogging and basic filter characteristics. The theoretical analysis reveals that an increase in the particle-fiber friction coefficient causes, firstly, weaker particle penetration into the filter, creation of dense top layers, and a higher pressure drop (surface filtration), in comparison with a lower particle-fiber friction coefficient, for which deeper particle penetration takes place (depth filtration); secondly, higher filtration efficiency; thirdly, a higher quality factor; and finally, higher sensitivity of the quality factor to the collected particle mass. Moreover, it has been revealed that even when the particle-fiber friction coefficient differs, the cake morphology is very similar.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters: alpha, beta, and gamma. The feedback for the gradient descent method was a composite of two performance measures: correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, where the true-positive rate increased for the same average number of false positives per image.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is applied around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
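For orientation, a minimal global-best PSO of the standard form that such variants build on (not the proposed algorithm; this sketch omits the neighbor information, abandonment mechanism, and chaotic search) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Standard global-best PSO minimizing f over [-bound, bound]^dim."""
    x = rng.uniform(-bound, bound, (n_particles, dim))   # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull toward pbest + social pull toward gbest.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, best_val = pso(lambda z: np.sum(z**2), dim=5)  # Sphere test function
print(best_val)  # close to 0
```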
Source optimization using particle swarm optimization algorithm in photolithography
NASA Astrophysics Data System (ADS)
Wang, Lei; Li, Sikun; Wang, Xiangzhao; Yan, Guanyong; Yang, Chaoxing
2015-03-01
In recent years, with the availability of freeform sources, source optimization has emerged as one of the key techniques for achieving higher resolution without increasing the complexity of mask design. In this paper, an efficient source optimization approach using particle swarm optimization algorithm is proposed. The sources are represented by pixels and encoded into particles. The pattern fidelity is adopted as the fitness function to evaluate these particles. The source optimization approach is implemented by updating the velocities and positions of these particles. The approach is demonstrated by using two typical mask patterns, including a periodic array of contact holes and a vertical line/space design. The pattern errors are reduced by 66.1% and 39.3% respectively. Compared with the source optimization approach using genetic algorithm, the proposed approach leads to faster convergence while improving the image quality at the same time. The robustness of the proposed approach to initial sources is also verified.
An Improved Particle Filter for Service Robot Self-Localization
NASA Astrophysics Data System (ADS)
Cen, Guanghui; Matsuhira, Nobuto; Hirokawa, Junko; Ogawa, Hideki; Hagiwara, Ichiro
Mobile robot localization is the problem of determining a robot's pose in an environment, and it is one of the most basic problems in mobile robot applications. Recently, the particle filter has become the most popular approach in mobile robot localization and has been applied with great success to a variety of state estimation problems. In this paper, a particle filter is applied to position tracking and global localization for the authors' service robot. Moreover, the posterior distribution of the robot pose in global localization is usually multi-modal due to the symmetry of the environment and ambiguous detected features. Considering these characteristics, we propose an improved cluster particle filter to increase the robustness and accuracy of global localization. A detailed analysis of coordinate errors and algorithm efficiency based on on-line experiments is given. The on-line experimental results also show the efficiency and robustness of the approach on the authors' service robot platform ApriAlpha™.
Effects of particle size and velocity on burial depth of airborne particles in glass fiber filters
Higby, D.P.
1984-11-01
Air sampling for particulate radioactive material involves collecting airborne particles on a filter and then determining the amount of radioactivity collected per unit volume of air drawn through the filter. The amount of radioactivity collected is frequently determined by directly measuring the radiation emitted from the particles collected on the filter. Counting losses caused by the particle becoming buried in the filter matrix may cause concentrations of airborne particulate radioactive materials to be underestimated by as much as 50%. Furthermore, the dose calculation for inhaled radionuclides will also be affected. The present study was designed to evaluate the extent to which particle size and sampling velocity influence burial depth in glass-fiber filters. Aerosols of high-fired ²³⁹PuO₂ were collected at various sampling velocities on glass-fiber filters. The fraction of alpha counts lost due to burial was determined as the ratio of activity detected by direct alpha count to the quantity determined by photon spectrometry. The results show that burial of airborne particles collected on glass-fiber filters appears to be a weak function of sampling velocity and particle size. Counting losses ranged from 0 to 25%. A correction that assumes losses of 10 to 15% would ensure that the concentration of airborne alpha-emitting radionuclides would not be underestimated when glass-fiber filters are used. 32 references, 21 figures, 11 tables.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
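One plausible sketch of the idea: whiten the forecast-error covariance in observation space by the observation-noise covariance (equivalent to a generalized eigenproblem) and keep the leading directions. The selection criterion, function name, and interface below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def informative_subspace(HPHt, R, k):
    """Rank measurement directions by the generalized eigenproblem
    HPH^T v = lambda R v, solved by whitening with chol(R); the k largest
    eigenvalues mark directions where forecast uncertainty dominates noise."""
    L = np.linalg.cholesky(R)
    Linv = np.linalg.inv(L)
    S = Linv @ HPHt @ Linv.T          # whitened forecast-error covariance in obs space
    lam, V = np.linalg.eigh(S)        # eigenvalues in ascending order
    order = np.argsort(lam)[::-1][:k]
    U = Linv.T @ V[:, order]          # columns span the k most informative directions
    return lam[order], U
```

Assimilating only the projection of the observations onto these directions would trade estimation accuracy for a smaller effective measurement dimension.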
Design of optimal correlation filters for hybrid vision systems
NASA Astrophysics Data System (ADS)
Rajan, Periasamy K.
1990-12-01
Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications, such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited to computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency-plane correlation filters. Furthermore, research was also conducted on designing correlation filters that are optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
Wei, Xiao-peng; Zhang, Jian-xia; Zhou, Dong-sheng; Zhang, Qiang
2015-01-01
We propose an improved algorithm, for a multiswarm particle swarm optimization with transfer of the best particle called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm algorithm (PSO) in order to balance exploration and exploitation, as well as enhancing the capacity for global search to solve nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, which is explained based on the Sphere function. Finally, we tested the performance of the proposed algorithm with six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better when applied to the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems. PMID:26345200
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation. PMID:24453865
Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase-shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g., near higher-frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum-phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parameterization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engine space launch vehicle.
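The z-plane parameterization idea can be illustrated with a toy second-order case: constraining the pole radius to r < 1 makes every candidate in the search space stable by construction. The pole placement, zero placement, and normalization below are illustrative, not the paper's design:

```python
import numpy as np

def biquad_lowpass(r, theta):
    """Second-order recursive filter with a complex pole pair r*exp(±j*theta).
    Restricting 0 <= r < 1 keeps every candidate stable; zeros fixed at z = -1
    give attenuation at high frequency; the gain is normalized to unity at DC."""
    a = np.array([1.0, -2 * r * np.cos(theta), r * r])   # denominator coefficients
    b = np.array([1.0, 2.0, 1.0])                        # numerator: double zero at z = -1
    b = b * (a.sum() / b.sum())                          # unit gain at z = 1 (DC)
    return b, a

def mag_response(b, a, f):
    """Magnitude response at normalized frequency f (cycles/sample)."""
    z = np.exp(2j * np.pi * f)
    num = sum(bk * z**(-k) for k, bk in enumerate(b))
    den = sum(ak * z**(-k) for k, ak in enumerate(a))
    return np.abs(num / den)

b, a = biquad_lowpass(r=0.9, theta=0.2 * np.pi)
print(mag_response(b, a, 0.0))   # ~1 at DC
print(mag_response(b, a, 0.5))   # ~0 at Nyquist (zeros at z = -1)
```

An optimizer searching over (r, theta) would evaluate phase and attenuation constraints at the bending, slosh, and parasitic-mode frequencies.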
Estimating the full posterior pdf with particle filters
NASA Astrophysics Data System (ADS)
Ades, Melanie; van Leeuwen, Peter Jan
2013-04-01
The majority of data assimilation schemes rely on linearity assumptions. However as the resolution and complexity of both the numerical models and observations increases, these linearity assumptions become less appropriate. A need is arising for fully non-linear data assimilation schemes, such as particle filters. Recently, new particle filter schemes have been generated that explore the freedom in proposal densities and that are quite effective in estimating the mean of the posterior probability density function (pdf), even in very high dimensional systems. However, in non-linear data assimilation the solution to the data assimilation problem is the full posterior pdf. At the same time we can only afford a limited number of particles. Here we concentrate on the equivalent weights particle filter in conjunction with a 65,000 dimensional Barotropic Vorticity model. Specifically we test the ability of the scheme to represent the posterior in three important areas. In many actual geophysical applications, observations will be sparse and may well be unevenly distributed. We discuss the effect of changing the frequency, number and distribution of the observed variables on the ensemble representation of the posterior pdf. Specifically we show that the filter has remarkably good convergence in marginal and joint pdfs with ensemble size, and the rank histograms are quite flat, even with low observation numbers and low observation frequencies. Only when the observation frequency is much larger than the typical decorrelation time scale of the system do we see underdispersive ensembles when using 32 particles. The second area attempts to replicate the realistic situation of using a geophysical model designed without a full understanding of the error statistics of the truth. This is done by using deliberately erroneous error statistics in the ensemble equations compared to those used to generate the truth. 
Specifically we consider changes in the correlation length-scales and variances in the model error statistics. Again the filter is remarkably successful in generating correct posterior pdfs, although rank histograms tend to point to under- or overdispersive ensembles. One of the interesting results is that when we overestimate the model error amplitude the ensemble is underdispersive. We present an explanation for this counter-intuitive phenomenon. Finally we show that the computational effort involved in the equivalent-weights particle filter is comparable to running a simple resampling particle filter with the same number of particles.
Nonlinear Statistical Signal Processing: A Particle Filtering Approach
Candy, J
2007-09-19
An introduction to particle filtering is presented, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g., Kalman filters) are no longer a restriction to performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of particle filter designs with classical nonlinear filter implementations.
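The sequential importance sampling/resampling (SIR) recursion described above reduces to a few lines for a toy scalar model. A minimal sketch, with the state model, noise levels, observations, and seed chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_step(particles, y, propagate, likelihood):
    """One SIR cycle: predict with the state model (proposal = prior),
    weight by the measurement likelihood, then resample so that particles
    concentrate in high-probability regions."""
    particles = propagate(particles)                  # prediction step
    w = likelihood(y, particles)                      # importance weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                             # equally weighted after resampling

# Toy scalar model: x_k = 0.9 x_{k-1} + process noise, y_k = x_k + unit-variance noise.
propagate = lambda x: 0.9 * x + rng.normal(0, 0.5, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)

particles = rng.normal(0, 2, size=1000)
for y in [1.0, 0.8, 1.1]:
    particles = sir_step(particles, y, propagate, likelihood)
print(particles.mean())  # posterior mean estimate, pulled toward the observations
```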
A local particle filter for high dimensional geophysical systems
NASA Astrophysics Data System (ADS)
Penny, S. G.; Miyoshi, T.
2015-12-01
A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard Sampling Importance Resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each gridpoint. The deterministic resampling approach of Kitagawa is adapted for application locally and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the Local Ensemble Transform Kalman Filter (LETKF) using the 40-variable Lorenz-96 model. The results show that: (1) the accuracy of LPF surpasses LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of LPF is significantly lower than LETKF as the ensemble size increases, and (3) LPF prevents filter divergence experienced by LETKF in cases with non-Gaussian observation error distributions.
Lubricant wear particle analysis by filter patch extraction
Smart, C.L.
1996-07-01
Lubricating Oil Analysis (LOA) has become an important part of a comprehensive Reliability Centered Maintenance (RCM) program. However, knowing the condition of the lubricant alone does not provide a complete description of equipment reliability. Condition monitoring for equipment can be accomplished through Wear Particle Analysis (WPA). This usually involves separating suspended materials and wear products from the lubricant by magnetic (ferrographic) means. This paper will present a simple, low-cost, alternate method of particle acquisition called Filter Patch Extraction (FPE). This method removes solids, regardless of their composition, from the lubricant by vacuum filtration and deposits them onto a filter for microscopic examination similar to that of analytical ferrography. A large filter pore size retains suspended materials and permits rapid filtration of large volumes of lubricant thereby increasing the accuracy of the wear and cleanliness profile that can be established for a given machine. Qualitative trending of equipment wear and lubricant system cleanliness are easily performed with FPE. Equipment condition is determined by then characterizing the metal particles which are recovered. Examined filters are easily archived in filter holders for future reference. Equipment for FPE is inexpensive and readily available. The technique is field-portable, allowing WPA to be performed on-site, eliminating delays with remote laboratories while building customer participation and support. There are numerous advantages for using FPE in a machine condition monitoring program.
Optimization of filtering schemes for broadband astro-combs.
Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X
2012-10-22
To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error. PMID:23187265
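The line-spacing multiplication performed by an FP cavity can be illustrated with the standard Airy transmission function: matching the cavity FSR to m times the source-comb repetition rate passes every m-th line and suppresses the rest. A single-cavity sketch with illustrative numbers (the paper analyzes multi-cavity schemes and nonlinear-phase effects beyond this):

```python
import numpy as np

def fp_transmission(f, fsr, finesse):
    """Airy transmission of an idealized lossless Fabry-Perot cavity."""
    return 1.0 / (1.0 + (2 * finesse / np.pi) ** 2 * np.sin(np.pi * f / fsr) ** 2)

f_rep = 1.0                 # source-comb line spacing (arbitrary units)
m = 10                      # desired line-spacing multiplication factor
fsr = m * f_rep             # cavity FSR matched to every m-th comb line
finesse = 200.0

lines = np.arange(50) * f_rep
T = fp_transmission(lines, fsr, finesse)
passed = T[::m]                                   # resonant lines, near-unity transmission
suppressed = np.delete(T, np.arange(0, 50, m))    # intermediate lines, strongly attenuated
print(passed.min(), suppressed.max())
```

The residual transmission of the suppressed neighbors is what subsequent amplification and nonlinear broadening can re-amplify, motivating the two-cavity filtering schemes compared in the paper.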
An Improved Particle Filter for Target Tracking in Sensor Systems
Wang, Xue; Wang, Sheng; Ma, Jun-Jie
2007-01-01
Sensor systems are not always equipped with the ability to track targets. Sudden maneuvers of a target can have a great impact on the sensor system, which will increase the miss rate and the rate of false target detection. The use of the generic particle filter (PF) algorithm is well known for target tracking, but it cannot overcome the degeneracy of particles and the accumulation of estimation errors. In this paper, we propose an improved PF algorithm called PF-RBF. This algorithm uses a radial basis function network (RBFN) in the sampling step to dynamically construct the process model from observations and update the value of each particle. With the RBFN sampling step, PF-RBF can give an accurate proposal distribution and maintain the convergence of a sensor system. Simulation results verify that PF-RBF performs better than the Unscented Kalman Filter (UKF), PF, and Unscented Particle Filter (UPF) in both robustness and accuracy, whether the observation model used for the sensor system is linear or nonlinear. Moreover, the intrinsic property of PF-RBF determines that, when the particle number exceeds a certain amount, the execution time of PF-RBF is less than that of UPF. This makes PF-RBF a better candidate for sensor systems which need many particles for target tracking.
Fast, parallel implementation of particle filtering on the GPU architecture
NASA Astrophysics Data System (ADS)
Gelencsér-Horváth, Anna; Tornai, Gábor János; Horváth, András; Cserey, György
2013-12-01
In this paper, we introduce a modified cellular particle filter (CPF) which we mapped onto a graphics processing unit (GPU) architecture. We developed this filter adaptation using a state-of-the-art CPF technique. Mapping this filter realization onto a highly parallel architecture entailed a shift in the logical representation of the particles: the original two-dimensional organization is reordered as a one-dimensional ring topology. We present a proof-of-concept measurement on two models with an NVIDIA Fermi architecture GPU. This design achieved a 411-μs kernel time per state and a 77-ms global running time for all states for 16,384 particles with a 256 neighbourhood size on a sequence of 24 states for a bearing-only tracking model. For a commonly used benchmark model at the same configuration, we achieved a 266-μs kernel time per state and a 124-ms global running time for all 100 states. Kernel time includes random number generation on the GPU with curand. These results attest to the effective and fast use of the particle filter in high-dimensional, real-time applications.
Localization using omnivision-based manifold particle filters
NASA Astrophysics Data System (ADS)
Wong, Adelia; Yousefhussien, Mohammed; Ptucha, Raymond
2015-01-01
Developing precise and low-cost spatial localization algorithms is an essential component for autonomous navigation systems. Data collection must be of sufficient detail to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensions and excessive bandwidth. This paper presents a novel approach to indoor localization using a low cost video camera and spherical mirror. Omnidirectional captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images along with indoor maps are fed into a semi-supervised linear extension of graph embedding manifold learning algorithm to learn a low dimensional surface which represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to a low dimensional surface, and then localized via an adaptive particle filter algorithm. These particles are temporally filtered for the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.
PFLib: an object oriented MATLAB toolbox for particle filtering
NASA Astrophysics Data System (ADS)
Chen, Lingji; Lee, Chihoon; Budhiraja, Amarjit; Mehra, Raman K.
2007-04-01
Under a United States Army Small Business Technology Transfer (STTR) project, we have developed a MATLAB toolbox called PFLib to facilitate the exploration, learning and use of Particle Filters by a general user. This paper describes its object oriented design and programming interface. The software is available under a GNU GPL license.
Model Adaptation for Prognostics in a Particle Filtering Framework
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
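The state-augmentation idea described above can be sketched in a few lines: each particle carries both the state and an unknown model parameter, which is estimated jointly with the state by an ordinary SIR filter. The scalar decay model, noise levels, and particle count below are illustrative assumptions, not the battery model used in the paper.

```python
import math
import random

random.seed(0)
TRUE_A = 0.95      # hypothetical decay parameter the filter must learn
OBS_STD = 0.02     # observation noise level (illustrative)

def simulate(steps=40, x0=1.0):
    """Generate noisy observations of a geometrically decaying state."""
    ys, x = [], x0
    for _ in range(steps):
        x = TRUE_A * x
        ys.append(x + random.gauss(0.0, OBS_STD))
    return ys

def augmented_pf(ys, n=400):
    """SIR particle filter whose particles carry (state x, parameter a)."""
    parts = [[1.0, random.uniform(0.8, 1.0)] for _ in range(n)]
    for y in ys:
        for p in parts:
            p[1] += random.gauss(0.0, 0.002)           # artificial parameter dynamics
            p[0] = p[1] * p[0] + random.gauss(0.0, 0.01)
        # Weight by observation likelihood, then resample.
        w = [math.exp(-0.5 * ((y - p[0]) / OBS_STD) ** 2) for p in parts]
        parts = [list(p) for p in random.choices(parts, weights=w, k=n)]
    return sum(p[1] for p in parts) / n                # posterior mean of the parameter

a_hat = augmented_pf(simulate())
```

Because the parameter rides along in the state vector, the same resampling step that tracks the state also tunes the model, which is the adaptation property the abstract refers to.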
Optimal filtering in multipulse sequences for nuclear quadrupole resonance detection
NASA Astrophysics Data System (ADS)
Osokin, D. Ya.; Khusnutdinov, R. R.; Mozzhukhin, G. V.; Rameev, B. Z.
2014-05-01
The application of multipulse sequences in nuclear quadrupole resonance (NQR) detection of explosive and narcotic substances has been studied. Various approaches to increasing the signal-to-noise ratio (SNR) of signal detection are considered. We discuss two modifications of the phase-alternated multiple-pulse sequence (PAMS): the 180° pulse sequence with a preparatory pulse and the 90° pulse sequence. The advantages of optimal filtering for NQR detection in the case of coherent steady-state precession are analyzed. It is shown that this technique is effective in filtering out high-frequency and low-frequency noise and in increasing the reliability of NQR detection. Our analysis also shows that the PAMS with 180° pulses is more effective than the PSL sequence from the point of view of applying the optimal filtering procedure to the steady-state NQR signal.
Optimal Correlation Filters for Images with Signal-Dependent Noise
NASA Technical Reports Server (NTRS)
Downie, John D.; Walkup, John F.
1994-01-01
We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
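Approach (2) above, transforming the signal-dependent noise to additive noise before applying a classical matched filter, can be sketched for multiplicative speckle-like noise: a log transform makes the noise approximately additive, after which cross-correlation with a log-domain template locates the target. The scene, template shape, and gamma-distributed noise below are illustrative choices, not the film-grain or speckle models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 512
template = np.exp(-0.5 * ((np.arange(31) - 15) / 3.0) ** 2)  # target shape
scene = np.ones(n)
scene[200:231] += 4.0 * template                             # target placed at index 200
speckle = rng.gamma(shape=10.0, scale=0.1, size=n)           # multiplicative, mean-1 noise
observed = scene * speckle

# Log transform: multiplicative noise becomes (approximately) additive,
# so a classical zero-mean matched filter applies in the log domain.
log_obs = np.log(observed)
log_tmpl = np.log(1.0 + 4.0 * template)
log_tmpl -= log_tmpl.mean()
corr = np.correlate(log_obs - log_obs.mean(), log_tmpl, mode="valid")
peak = int(np.argmax(corr))    # estimated target location
```

The correlation peak lands at the true target position even though the raw noise amplitude scales with the signal, illustrating why the transform-then-match route can outperform a filter matched directly to the noisy signal.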
Na-Faraday rotation filtering: The optimal point
Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja
2014-01-01
Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can outperform any commercial filter by far in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251
Transmit filter design methods for magnetic particle imaging
NASA Astrophysics Data System (ADS)
Zheng, Bo; Goodwill, Patrick; Conolly, Steven
2011-03-01
Magnetic particle imaging (MPI) has emerged as a new imaging modality that uses the nonlinear magnetization behavior of superparamagnetic particles. Due to the need to avoid contamination of particle signals with the simultaneous excitation signal, MPI transmit systems require different design considerations from those in MRI, where excitation and detection are temporally decoupled. Specifically, higher order harmonic distortion in the transmit spectrum can feed through to and contaminate the received signal spectrum. In a prototype MPI scanner, this distortion needs to be attenuated by 90 dB at all frequencies. In this paper, we describe two methods of filtering out harmonic distortion in the transmit spectrum. The first method uses a Butterworth topology while the second a cascaded Butterworth-elliptic topology. We show that whereas the Butterworth filter alone achieves around 16 and 32 dB attenuation at the second and third harmonics, the cascaded filter can achieve around 65 and 73 dB at these harmonics. Finally, we discuss how notch placement in the stopband can also be applied to design highpass filters for MPI detection systems.
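The Butterworth-only attenuation figures quoted above can be checked against the closed-form magnitude response of an ideal lowpass Butterworth filter. The drive frequency and cutoff below are illustrative assumptions rather than the prototype scanner's actual values.

```python
import math

def butter_atten_db(f, fc, order):
    """Attenuation (dB) of an ideal lowpass Butterworth magnitude response."""
    return 10.0 * math.log10(1.0 + (f / fc) ** (2 * order))

f0 = 25e3          # hypothetical excitation frequency (assumption)
fc = 1.2 * f0      # cutoff placed just above the fundamental
order = 4

loss_fund = butter_atten_db(f0, fc, order)        # insertion loss at the drive tone
atten_2nd = butter_atten_db(2 * f0, fc, order)    # 2nd-harmonic attenuation
atten_3rd = butter_atten_db(3 * f0, fc, order)    # 3rd-harmonic attenuation
```

With these assumed values, a fourth-order response gives roughly 18 dB and 32 dB at the second and third harmonics while losing under 1 dB at the fundamental, which is the same order as the Butterworth-only figures in the abstract and shows why a cascaded elliptic section is needed to approach the 90-dB target.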
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that approximate the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error are 76.04% and 71.25%, respectively. PMID:26391486
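A minimal cuckoo-search loop conveys the mechanics the paper relies on: Lévy flights biased around the best nest plus abandonment of the worst nests. A simple sphere objective stands in for the WLS filter-design fitness, and the step size, population, and iteration settings are illustrative, not the paper's.

```python
import math
import random

random.seed(2)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-flight step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(obj, dim=4, nests=15, iters=300, pa=0.25):
    pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(nests)]
    best = min(pop, key=obj)
    for _ in range(iters):
        for i in range(nests):
            # Levy flight scaled by the distance to the current best nest.
            cand = [pop[i][d] + 0.01 * levy_step() * (pop[i][d] - best[d])
                    for d in range(dim)]
            if obj(cand) < obj(pop[i]):   # greedy replacement
                pop[i] = cand
        pop.sort(key=obj)
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        for i in range(int((1 - pa) * nests), nests):
            pop[i] = [random.uniform(-2, 2) for _ in range(dim)]
        best = min(pop + [best], key=obj)
    return best

best_x = cuckoo_search(lambda x: sum(xi * xi for xi in x))
best_fit = sum(xi * xi for xi in best_x)
```

In the paper's setting the objective would instead score a candidate coefficient vector by the WLS mismatch between the FD-IIR filter's response and the ideal fractional-delay response.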
Cultural-based multiobjective particle swarm optimization.
Daneshyari, Moayed; Yen, Gary G
2011-04-01
Multiobjective particle swarm optimization (MOPSO) algorithms have been widely used to solve multiobjective optimization problems. Most MOPSOs use fixed momentum and acceleration for all particles throughout the evolutionary process. In this paper, we introduce a cultural framework to adapt the personalized flight parameters of the mutated particles in a MOPSO, namely momentum and personal and global accelerations, for each individual particle based upon various types of knowledge in "belief space," specifically situational, normative, and topographical knowledge. A comprehensive comparison of the proposed algorithm with chosen state-of-the-art MOPSOs on benchmark test functions shows that the movement of the individual particle using the adapted parameters assists the MOPSO to perform efficiently and effectively in exploring solutions close to the true Pareto front while exploiting a local search to attain diverse solutions. PMID:20837447
Selectively-informed particle swarm optimization
Gao, Yang; Du, Wenbo; Yan, Gang
2015-01-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
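The selective learning rule can be illustrated with a compact sketch: particles live on a random interaction graph, a densely connected particle averages all of its neighbours' personal bests (fully informed), and a sparsely connected particle follows only its single best neighbour. The graph construction, degree threshold, coefficients, and the sphere benchmark are illustrative choices, not the exact settings of the paper.

```python
import random

random.seed(1)
DIM, N, HUB_DEG = 5, 30, 6   # illustrative sizes, not the paper's settings

def sphere(x):
    """Benchmark objective to minimize (optimum 0 at the origin)."""
    return sum(xi * xi for xi in x)

# Random interaction graph: each particle links to three random others.
nbrs = [set() for _ in range(N)]
for i in range(N):
    for j in random.sample([k for k in range(N) if k != i], 3):
        nbrs[i].add(j)
        nbrs[j].add(i)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]

for _ in range(200):
    for i in range(N):
        if len(nbrs[i]) >= HUB_DEG:
            # Hub: fully informed -- mean of all neighbours' personal bests.
            guide = [sum(pbest[j][d] for j in nbrs[i]) / len(nbrs[i])
                     for d in range(DIM)]
        else:
            # Non-hub: follow the single best-performing neighbour.
            guide = pbest[min(nbrs[i], key=lambda j: sphere(pbest[j]))]
        for d in range(DIM):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (guide[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]

best_val = sphere(min(pbest, key=sphere))
```

The hub/non-hub split mirrors the paper's observation: fully informed hubs drive convergence while sparsely informed particles preserve diversity.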
Degeneracy, frequency response and filtering in IMRT optimization.
Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D; Promberger, Claus
2004-07-01
This paper attempts to provide answers to some questions on basic issues related to intensity modulated radiation therapy (IMRT) that remain either poorly understood or not well documented in the literature. The questions examined are: the relationship between degeneracy and frequency response of optimizations, the effects of initial beamlet fluence assignment and stopping point, what filtering of an optimized beamlet map actually does, and how image analysis could help to obtain better optimizations. Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used, and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed, and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques. PMID:15285252
Optimal color image restoration: Wiener filter and quaternion Fourier transform
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.
2015-03-01
In this paper, we consider the model of quaternion signal degradation in which the signal is convolved and additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is defined with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative. The quaternion Fourier transform does not map convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise and the optimal filtration of signals and images in the frequency domain and in the quaternion space.
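For the real-valued (non-quaternion) baseline, the frequency-domain Wiener solution mentioned above can be sketched directly with the FFT. The blur kernel, noise level, and the assumption that the signal power spectrum is known are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.sin(2 * np.pi * 4 * np.arange(n) / n)      # clean signal
h = np.zeros(n)
h[:8] = 1.0 / 8.0                                 # circular moving-average blur
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
y += rng.normal(0.0, 0.05, n)                     # additive independent noise

H = np.fft.fft(h)
S = np.abs(np.fft.fft(x)) ** 2                    # signal power spectrum (assumed known)
N0 = 0.05 ** 2 * n                                # white-noise power per FFT bin
# Wiener filter: W = H* S / (|H|^2 S + N), the mean-square-error optimum.
W = np.conj(H) * S / (np.abs(H) ** 2 * S + N0)
x_hat = np.real(np.fft.ifft(W * np.fft.fft(y)))

mse_raw = float(np.mean((y - x) ** 2))
mse_wiener = float(np.mean((x_hat - x) ** 2))
```

Because the FFT diagonalizes real convolution, the filter is a per-frequency gain; it is exactly this convolution-to-multiplication property that fails for the quaternion Fourier transform, which is the complication the paper addresses.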
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ΔE error metric and a qualitative assessment. PMID:20519156
Optimized Beam Sculpting with Generalized Fringe-rate Filters
NASA Astrophysics Data System (ADS)
Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina
2016-03-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer's fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data. PMID:26849867
Optimal matched filter design for ultrasonic NDE of coarse grain materials
NASA Astrophysics Data System (ADS)
Li, Minghui; Hayward, Gordon
2016-02-01
Coarse grain materials are widely used in a variety of key industrial sectors such as energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructure, which is time-invariant and exhibits spectral characteristics similar to those of flaw signals. As a result, reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters of the filter response with the objective of maximising the output signal-to-noise ratio (SNR). Experiments with a 128-element, 5-MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB if the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real time with low extra computational cost.
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2010-12-01
Optimization of fixture layout (locator and clamp locations) is critical to reduce geometric error of the workpiece during machining process. In this paper, the application of particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO based approach is developed to optimize fixture layout through integrating ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. Particle library approach is used to decrease the total computation time. The computational experiment of 2D case shows that the numbers of function evaluations are decreased about 96%. Case study illustrates the effectiveness and efficiency of the PSO based optimization approach.
Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
Marginalized Particle Filter for Blind Signal Detection with Analog Imperfections
NASA Astrophysics Data System (ADS)
Yoshida, Yuki; Hayashi, Kazunori; Sakai, Hideaki; Bocquet, Wladimir
Recently, the marginalized particle filter (MPF) has been applied to blind symbol detection problems over selective fading channels. The MPF can ease the computational burden of the standard particle filter (PF) while offering better estimates compared with the standard PF. In this paper, we investigate the application of the blind MPF detector to more realistic situations where the systems suffer from analog imperfections which are non-linear signal distortion due to the inaccurate analog circuits in wireless devices. By reformulating the system model using the widely linear representation and employing the auxiliary variable resampling (AVR) technique for estimation of the imperfections, the blind MPF detector is successfully modified to cope with the analog imperfections. The effectiveness of the proposed MPF detector is demonstrated via computer simulations.
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.
2011-12-01
We present results of assimilating ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km²) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
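The two update rules being compared can be contrasted on a toy scalar state: the EnKF shifts every ensemble member using a gain built from the ensemble variance, while the PF keeps the members and reweights them by their observation likelihood. All numbers below are illustrative; the actual study assimilates discharge and soil moisture into SACSMA.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100                                  # ensemble size (illustrative)
truth = 2.0
obs_err = 0.3
ens = rng.normal(0.0, 1.0, N)            # prior ensemble, biased away from the truth
y = truth + rng.normal(0.0, obs_err)     # one noisy observation of the state

# EnKF: Kalman gain from ensemble variance; perturbed-observation update.
P = np.var(ens, ddof=1)
K = P / (P + obs_err ** 2)
ens_a = ens + K * (y + rng.normal(0.0, obs_err, N) - ens)

# PF: keep the members, adjust their weights by the observation likelihood.
w = np.exp(-0.5 * ((y - ens) / obs_err) ** 2)
w /= w.sum()
pf_mean = float(np.sum(w * ens))
```

The EnKF moves the whole ensemble toward the observation and shrinks its spread, whereas the PF posterior is only as good as the members that happen to lie near the observation, which is why the two filters can respond differently to model bias.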
Improving the LPJ-GUESS modelled carbon balance with a particle filter data assimilation technique
NASA Astrophysics Data System (ADS)
McRobert, Andrew; Scholze, Marko; Kemp, Sarah; Smith, Ben
2015-04-01
The recent increases in anthropogenic carbon dioxide (CO2) emissions have disrupted the equilibrium of the global carbon cycle, with the ocean and terrestrial pools increasing their respective storage to accommodate roughly half of the anthropogenic increase. Dynamic global vegetation models (DGVMs) have been developed to quantify these modern carbon cycle changes. In this study, a particle filter data assimilation technique has been used to calibrate the process parameters of the DGVM LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator). LPJ-GUESS simulates individual plant functional types (PFTs) in competitive balance within high-resolution forest patches. Thirty process parameters have been optimized twice, using both a sequential and an iterative particle filter method. The iterative method runs the model for the full time period of thirteen years, evaluates the cost function from the mismatch between observations and model results, and then adjusts the parameters and repeats the full time period. The sequential method runs the model and particle filter for each year of the time series in order, adjusting the parameters between years, then loops back to the beginning of the series to repeat. For each particle, the model output of NEP (Net Ecosystem Productivity) is compared to eddy flux measurements from ICOS flux towers to minimize the cost function. A high-resolution regional carbon balance has been simulated for central Sweden using a network of several ICOS flux towers.
A multi-dimensional procedure for BNCT filter optimization
Lille, R.A.
1998-02-01
An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is very straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A local optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model, and an improvement of 50% in the dose ratio was obtained.
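The density-update rules above amount to projected gradient ascent over the box constraints and the summed-fraction constraint. A small sketch with a made-up smooth surrogate for the dose ratio (not a transport calculation) shows the mechanics, including a solution that ends on the sum-of-fractions boundary with one density at its lower bound.

```python
def project(x):
    """Euclidean projection onto {x_i in [0, 1], sum(x) <= 1}."""
    x = [min(1.0, max(0.0, xi)) for xi in x]   # per-nuclide density bounds
    if sum(x) <= 1.0:
        return x
    # Standard sort-based projection onto the simplex sum(x) = 1.
    u = sorted(x, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        if ui - (css - 1.0) / i > 0:
            theta = (css - 1.0) / i
    return [max(0.0, xi - theta) for xi in x]

def ascend(grad, x, lr=0.05, iters=300):
    """Projected gradient ascent, mirroring the gradient-driven density updates."""
    for _ in range(iters):
        x = project([xi + lr * gi for xi, gi in zip(x, grad(x))])
    return x

# Hypothetical concave surrogate f(x) = 3*x0 + 2*x1 + x2 - |x|^2 for the dose ratio;
# its constrained maximum is at (0.75, 0.25, 0) on the sum-of-fractions boundary.
grad = lambda x: [3 - 2 * x[0], 2 - 2 * x[1], 1 - 2 * x[2]]
x_opt = ascend(grad, [0.2, 0.2, 0.2])
```

The converged point satisfies exactly the stopping conditions the abstract describes: the fractions sum to 1.0, one density sits at its lower bound, and the remaining gradient components cannot improve the objective within the constraints.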
Ridge filter design for a particle therapy line
NASA Astrophysics Data System (ADS)
Kim, Chang Hyeuk; Han, Garam; Lee, Hwa-Ryun; Kim, Hyunyong; Jang, Hong Suk; Kim, Jeong Hwan; Park, Dong Wook; Jang, Sea Duk; Hwang, Won Taek; Kim, Geun-Beom; Yang, Tae-Keun
2014-05-01
The beam irradiation system for particle therapy can use a passive or an active beam irradiation method. In the case of active beam irradiation, a ridge filter is appropriate for generating a spread-out Bragg peak (SOBP) over a large scanning area. For this study, a ridge filter was designed as an energy modulation device for a prototype active scanning system at MC-50 in the Korea Institute of Radiological And Medical Science (KIRAMS). The ridge filter was designed to create a 10-mm SOBP for a 45-MeV proton beam. To reduce the distal penumbra and the initial dose, we determined the weighting factor for each Bragg peak by applying an in-house iteration code and the MINUIT fit package of ROOT. A single ridge bar shape and its corresponding thickness were obtained through 21 weighting factors. The ridge filter was fabricated from polymethyl methacrylate (PMMA) to cover a large scanning area (300 × 300 mm²). The fabricated ridge filter was tested at the prototype active beamline of MC-50. The SOBP and the incident beam distribution were obtained by using HD-810 GafChromic film placed at a right triangle to the PMMA block. The depth dose profile of the SOBP can be obtained precisely by using a flat-field correction and measuring the 2-dimensional distribution of the incoming beam. After the flat-field correction, the experimental results show that the SOBP region matches the design requirement well, with 0.62% uniformity.
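The weighting-factor determination can be sketched as a small least-squares problem: idealized pristine Bragg peaks (Gaussians here, standing in for measured depth-dose curves) are shifted in depth, and their 21 weights are solved so that the summed dose is flat across the target region. All depths and peak shapes are illustrative assumptions; the paper used an in-house iteration code with the MINUIT fitter.

```python
import numpy as np

depth = np.linspace(0.0, 35.0, 351)           # depth grid (arbitrary units)
ranges = np.linspace(20.0, 30.0, 21)          # 21 shifted peaks, as in the text
# Idealized pristine Bragg peaks: Gaussians stand in for measured depth-dose curves.
peaks = np.exp(-0.5 * ((depth[:, None] - ranges[None, :]) / 0.8) ** 2)

region = (depth >= 20.0) & (depth <= 30.0)    # desired SOBP extent
target = np.ones(int(region.sum()))
# Mild truncated-SVD regularization (rcond) keeps the weights well-behaved.
w, *_ = np.linalg.lstsq(peaks[region], target, rcond=1e-6)
sobp = peaks @ w

inner = (depth >= 21.0) & (depth <= 29.0)
uniformity = float(sobp[inner].std() / sobp[inner].mean())
```

Each weight corresponds to the thickness fraction of one step of the ridge bar, so flattening the weighted sum in depth is what shapes the physical ridge profile.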
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, each containing the view of a single moving object in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure that can effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. Motion aggregation on freely moving cameras, on the other hand, is still a largely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: the ego-motion of the camera, and object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. In this, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution that remains homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
Tracking low SNR targets using particle filter with flow control
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2014-06-01
In this work we study the problem of detecting and tracking challenging targets that exhibit low signal-to-noise ratios (SNR). We have developed a particle filter-based track-before-detect (TBD) algorithm for tracking such dim targets. The approach incorporates the most recent state estimates to control the particle flow accounting for target dynamics. The flow control enables accumulation of signal information over time to compensate for target motion. The performance of this approach is evaluated using a sensitivity analysis based on varying target speed and SNR values. This analysis was conducted using high-fidelity sensor and target modeling in realistic scenarios. Our results show that the proposed TBD algorithm is capable of tracking targets in cluttered images with SNR values much less than one.
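The flow-control idea described above can be sketched as a particle filter whose particles carry their own velocity estimates, so that likelihood evidence from successive frames accumulates along the target's motion. A minimal 1-D illustration, assuming an exponential intensity likelihood and a [position, velocity] state; both the state model and the likelihood are assumptions for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def tbd_step(particles, weights, frame, sigma=1.0, dt=1.0):
    # One track-before-detect update on a 1-D pixel line. State is [x, vx].
    # Particles drift with their own velocity estimate (the flow control),
    # then weights are multiplied by an intensity likelihood so that weak
    # signal accumulates over frames. Illustrative sketch only.
    particles[:, 0] += particles[:, 1] * dt
    particles[:, 1] += rng.normal(0.0, 0.05, len(particles))  # process noise
    idx = np.clip(particles[:, 0].round().astype(int), 0, len(frame) - 1)
    weights = weights * np.exp(frame[idx] / sigma**2)  # accumulate evidence
    return particles, weights / weights.sum()
```

After a few frames the weight mass concentrates on particles whose assumed motion stays on the dim target, which is the mechanism that lets detection work below an SNR of one.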
Loss of fine particle ammonium from denuded nylon filters
NASA Astrophysics Data System (ADS)
Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William; Collett, Jeffrey L.
Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, IL (February 2003), San Gorgonio, CA (April 2003 and July 2004), Grand Canyon NP, AZ (May 2003), Brigantine, NJ (November 2003), and Great Smoky Mountains National Park (NP), TN (July-August 2004). Samples were collected over 24 h periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, IL to 28% in San Gorgonio, CA in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(-III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.
Loss of Fine Particle Ammonium from Denuded Nylon Filters
Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William C.; Collett, Jeffrey L.
2006-08-01
Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, Illinois (February 2003), San Gorgonio, California (April 2003 and July 2004), Grand Canyon National Park, Arizona (May 2003), Brigantine, New Jersey (November 2003), and Great Smoky Mountains National Park (NP), Tennessee (July-August 2004). Samples were collected over 24-hr periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, Illinois to 28% in San Gorgonio, California in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(-III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.
Emergent system identification using particle swarm optimization
NASA Astrophysics Data System (ADS)
Voss, Mark S.; Feng, Xin
2001-10-01
Complex Adaptive Structures can be viewed as a combination of Complex Adaptive Systems and fully integrated autonomous Smart Structures. Traditionally, when designing a structure, one combines rules of thumb with theoretical results to develop an acceptable solution. This methodology will have to be extended for Complex Adaptive Structures, since they, by definition, will participate in their own design. In this paper we introduce a new methodology for Emergent System Identification that is concerned with combining the methodologies of self-organizing functional networks (GMDH - Alexey G. Ivakhnenko), Particle Swarm Optimization (PSO - James Kennedy and Russell C. Eberhart) and Genetic Programming (GP - John Koza). This paper concentrates on the utilization of Particle Swarm Optimization in this effort and discusses how Particle Swarm Optimization relates to our ultimate goal of emergent self-organizing functional networks that can be used to identify overlapping internal structural models. The ability of Complex Adaptive Structures to identify emerging internal models will be a key component of their success.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
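The weight-computation simplification mentioned above (a piecewise linear function replacing the exponential) can be prototyped in software before committing it to hardware. A sketch assuming evenly spaced knots for exp(-x) on [0, 4]; the knot placement is an illustrative choice, not the paper's:

```python
import numpy as np

# Knot placement is an illustrative choice, not taken from the paper.
KNOTS = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def exp_pwl(x):
    # Piecewise-linear, hardware-friendly stand-in for exp(-x) on [0, 4]:
    # exact at the knots, linear interpolation in between. In hardware this
    # becomes a small lookup table plus one multiply-add per evaluation.
    return np.interp(x, KNOTS, np.exp(-KNOTS))
```

At the knots the approximation is exact; between knots the worst-case error with this spacing is below 0.08, which is why such a substitution need not degrade the filter's results.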
Nonlinear EEG Decoding Based on a Particle Filter Model
Hong, Jun
2014-01-01
While the world is stepping into the aging society, rehabilitation robots play an increasingly important role in both rehabilitation treatment and nursing of patients with neurological diseases. Benefiting from its abundant movement information, electroencephalography (EEG) has become a promising information source for rehabilitation robot control. Although the multiple linear regression model has been used as the decoding model of EEG signals in some studies, it is considered unable to reflect the nonlinear components of EEG signals. To overcome this shortcoming, we propose a nonlinear decoding model, the particle filter model. Two- and three-dimensional decoding experiments were performed to test the validity of this model. In decoding accuracy, the results are comparable to those of the multiple linear regression model and previous EEG studies. In addition, the particle filter model uses less training data and more frequency information than the multiple linear regression model, which shows the potential of nonlinear decoding models. Overall, the findings hold promise for the furtherance of EEG-based rehabilitation robots. PMID:24949420
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can deal with nonlinear and non-Gaussian problems and is therefore widely used. Various proposal densities in the particle filter algorithm keep it valid when target occlusion occurs, or allow tracking to recover from failure, but in order to capture changes of the state space, a certain number of particles is needed to ensure sufficient samples, and this number grows exponentially with the dimension, leading to an increased amount of computation. In this paper, the particle filter algorithm and mean shift are combined, aiming at the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum under complex backgrounds. From the two perspectives of "adaptive multiple information fusion" and "combination with the particle filter framework", we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple information fusion. On the basis of an analysis of the infrared characteristics of the target, the algorithm first extracts target gray-level and edge features and guides these two features by the target's motion information, yielding new motion-guided gray-level and motion-guided edge features. A new adaptive fusion mechanism is then proposed to integrate these two new features adaptively into the mean shift tracking framework. Finally, we design an automatic target model updating strategy to further improve tracking performance. Experimental results show that this algorithm compensates for the particle filter's heavy computational load and effectively overcomes the tendency of mean shift to fall into local extrema instead of the global maximum. Moreover, because of the fusion of gray-level and target motion information, this approach also suppresses interference from the background, ultimately improving the stability and real-time performance of target tracking.
Design of a higher-order digital differentiator using a particle swarm optimization approach
NASA Astrophysics Data System (ADS)
Chang, Wei-Der; Chang, Dai-Ming
2008-01-01
This paper applies a novel optimization algorithm, particle swarm optimization (PSO), to design a higher-order differentiator with two different structures for even and odd orders. Four cases of linear-phase finite impulse response (FIR) filters are designed to match the prescribed differentiation frequency response by using the PSO algorithm. The algorithm, with real-valued manipulations, uses the velocity-updating and position-updating formulas to optimally solve for the impulse response of the filter in the digital differentiator design problem. Simulation results reveal that the proposed method provides much better design performance than the well-known McClellan-Parks method.
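A generic PSO of the kind described, applied to a small FIR differentiator fit, can be sketched as follows. The swarm parameters, filter length, frequency grid, and least-squares magnitude cost are all illustrative assumptions, not the paper's design cases:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency grid and magnitude-fit cost for a short FIR differentiator:
# the desired magnitude response is |H(w)| = w. All sizes are illustrative.
wgrid = np.linspace(0.05 * np.pi, 0.95 * np.pi, 40)

def diff_cost(h):
    E = np.exp(-1j * np.outer(wgrid, np.arange(len(h))))  # DTFT matrix
    return float(np.sum((np.abs(E @ h) - wgrid) ** 2))

def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Standard global-best PSO: only the velocity and position updates.
    x = rng.uniform(-1.0, 1.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, float(pcost.min())
```

Because the cost is evaluated directly on the frequency response, the same loop works unchanged for other response templates; only `diff_cost` changes.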
Achieving sub-nanometre particle mapping with energy-filtered TEM.
Lozano-Perez, S; de Castro Bernal, V; Nicholls, R J
2009-09-01
A combination of state-of-the-art instrumentation and optimized data processing has enabled for the first time the chemical mapping of sub-nanometre particles using energy-filtered transmission electron microscopy (EFTEM). Multivariate statistical analysis (MSA) generated reconstructed datasets in which the signal from particles smaller than 1 nm in diameter was successfully isolated from the original noisy background. The technique has been applied to the characterization of oxide dispersion strengthened (ODS) reduced activation FeCr alloys, due to their relevance as structural materials for future fusion reactors. Results revealed that most nanometre-sized particles had a core-shell structure, with a yttrium-chromium-oxygen-rich core and a nano-scaled chromium-oxygen-rich shell. This segregation to the nanoparticles caused a decrease of the chromium dissolved in the matrix, compromising the corrosion resistance of the alloy. PMID:19505762
Numerical analysis of particle distribution on multi-pipe ceramic candle filters
NASA Astrophysics Data System (ADS)
Li, H. X.; Gao, B. G.; Tie, Z. X.; Sun, Z. J.; Wang, F. H.
2010-03-01
The particle distribution on the ceramic filter surface has a great effect on filtration performance. A numerical simulation method is used to analyze the particle distribution near the filter surface under different operating conditions. The gas/solid two-phase flow field in the ceramic filter vessel was simulated using the Eulerian two-fluid model provided by the FLUENT code. A user-defined function was loaded with the FLUENT solver to define the interaction between the particles and the gas near the porous ceramic candle filter. The distribution of the filter cake along the filter length and around the filter circumference was analyzed. The simulation results agree well with experimental data. The simulation model can be used to predict the particle distribution and provide theoretical guidance for the engineering application of porous ceramic filters.
Particle Swarm Optimization with Double Learning Patterns
Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian
2016-01-01
Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the quick loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm and the slave swarm are encouraged to explore the search space to maintain swarm diversity and to learn from the global best particle to refine a promising solution, respectively. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled. This mechanism can help the slave swarm jump out of local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747
Cardiac-phase filtering in intracardiac particle image velocimetry
NASA Astrophysics Data System (ADS)
Jamison, R. Aidan; Fouras, Andreas; Bryson-Richardson, Robert J.
2012-03-01
The ability to accurately measure velocity within the embryonic zebrafish heart, at high spatial and temporal resolution, enables further insight into the effects of hemodynamics on heart development. Unfortunately, currently available techniques are unable to provide the required resolution, both spatial and temporal, for detailed analysis. Advances in imaging hardware are allowing bright field imaging combined with particle image velocimetry to become a viable technique for the broader community at the required spatial and temporal resolutions. While bright field imaging offers the necessary temporal resolution, this approach introduces heart wall artifacts that interfere with accurate velocity measurement. This study presents a technique for cardiac-phase filtering of bright field images to remove the heart wall and improve velocimetry measurements. Velocity measurements were acquired for zebrafish embryos ranging from 3 to 6 days postfertilization. Removal of the heart wall was seen to correct a severe (3-fold) underestimation in velocity measurements obtained without filtering. Additionally, velocimetry measurements were used to quantitatively detect developmental changes in cardiac performance in vivo, investigating both changes in contractile period and maximum velocities present through the ventricular-bulbar valve.
Object tracking with particle filter in UAV video
NASA Astrophysics Data System (ADS)
Yu, Wenshuai; Yin, Xiaodong; Chen, Bing; Xie, Jinhua
2013-10-01
Aerial surveillance is a main functionality of UAVs, realized via a video camera. During operations, mission-assigned targets are usually moving objects, such as people or vehicles. Object tracking is therefore a key technique for the UAV sensor payload. Two difficulties for UAV object tracking are the dynamic background and the unpredictability of the target's motion. To solve these problems, a particle filter is employed in this research. Modeling the target by its characteristics, for instance color features, the filter approximates the probability density of the target state with weighted sample sets, where the state vector contains position, motion vector and region parameters. The experiments demonstrate the effectiveness and robustness of the proposed method in UAV video tracking.
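A minimal sample-importance-resample (SIR) cycle of the kind used for such tracking can be sketched as follows, with the tracker's color-feature likelihood replaced by a simple Gaussian position likelihood for illustration; the state here is position only, and all noise scales are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_step(particles, obs, q=1.0, r=2.0):
    # One sample-importance-resample cycle for a 2-D position tracker.
    # The color-feature likelihood of the UAV tracker is replaced here by
    # a simple Gaussian likelihood around a measurement 'obs'.
    particles = particles + rng.normal(0.0, q, particles.shape)  # predict
    d2 = np.sum((particles - obs) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / r**2)                                 # weight
    w /= w.sum()
    n = len(particles)                                           # resample
    u = (np.arange(n) + rng.random()) / n                        # systematic
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), n - 1)
    return particles[idx]
```

Because the particle cloud is re-seeded around high-likelihood regions every frame, the scheme tolerates both a dynamic background and hard-to-predict target motion, which is the property the abstract relies on.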
Using triaxial magnetic fields to create optimal particle composites.
Martin, James Ellis
2004-05-01
The properties of a particle composite can be controlled by organizing the particles into assemblies. The properties of the composite will depend on the structure of the particle assemblies, and for any given property there is some optimal structure. Through simulation and experiment we show that the application of heterodyned triaxial magnetic or electric fields generates structures that optimize the magnetic and dielectric properties of particle composites. We suggest that optimizing these properties optimizes other properties, such as transport properties, and we give as one example of this optimization the magnetostriction of magnetic particle composites formed in a silicone elastomer.
Numerical simulation of DPF filter for selected regimes with deposited soot particles
NASA Astrophysics Data System (ADS)
Lávička, David; Kovařík, Petr
2012-04-01
Particle filters (referred to as DPF or FAP filters in the automotive industry) are used to accumulate particulate matter from Diesel engine exhaust gas. However, the cost of these filters is quite high, and as the emission limits become stricter, the requirements for PM collection rise accordingly. Particulate matter is very dangerous to human health and invisible to the human eye; it can often cause various diseases of the respiratory tract, and can even cause lung cancer. Numerical simulations were performed to analyze particle filter behavior under various operating modes. The simulations focused especially on selected critical states of the particle filter, when the engine is switched to an emergency regime. The aim was to prevent and avoid critical situations through an understanding of the filter behavior. The numerical simulations were based on experimental analysis of used diesel particle filters.
Human behavior-based particle swarm optimization.
Liu, Hao; Xu, Gang; Ding, Gui-Yan; Sun, Yu-Bo
2014-01-01
Particle swarm optimization (PSO) has attracted many researchers to various optimization problems, owing to its easy implementation, few tuned parameters, and acceptable performance. However, the algorithm easily becomes trapped in local optima because of rapid loss of population diversity. Therefore, improving the performance of PSO and decreasing its dependence on parameters are two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO, endowed with a random weight that obeys the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the parameter sensitivity for solved problems. Experimental results on 28 benchmark functions, consisting of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computational cost. PMID:24883357
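The abstract describes the modified velocity equation only qualitatively; one plausible reading, with unit-interval random weights replacing c1 and c2 and a standard-normal weight on a repulsion from the global worst particle, can be sketched as follows (the exact formula is an assumption, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def hpso_velocity(v, x, pbest, gbest, gworst, w=0.7):
    # One possible form of the HPSO update: uniform random weights on the
    # usual pbest/gbest attraction (no c1, c2 coefficients), plus a term
    # driven by the global worst particle with a standard-normal weight.
    # Hypothetical sketch of the rule the abstract describes.
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    n = rng.standard_normal(v.shape)
    return w * v + r1 * (pbest - x) + r2 * (gbest - x) + n * (x - gworst)
```

Since the normal weight changes sign randomly, the worst-particle term alternates between repulsion and attraction, which is one way such a rule can balance exploration against exploitation.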
Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data
NASA Astrophysics Data System (ADS)
Ditmar, P.; Hashemi Farahani, H.; Klees, R.
2011-12-01
Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites occasionally enter a phase in which their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate the limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of the covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with the atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed.
In addition, both signal and noise covariance matrices depend on the type of mass transport processes under investigation. For instance, processes of hydrological origin occur at short time scales, so that the input time series is typically short (1 month or less), which implies relatively strong noise in the derived model. In contrast, the study of a long-term ice mass depletion requires a long time series of satellite data, which leads to a reduction of noise in the mass transport model. Of course, the spatial patterns (and therefore the signal covariance matrices) of various mass transport processes are also very different. In the presented study, we compare various strategies for building the signal and noise covariance matrices in the context of mass transport modeling. In this way, we demonstrate the benefits of an accurate construction of an optimal filter as outlined above, compared to simplified strategies. Furthermore, we consider both models based on GRACE data alone and combined GRACE/GOCE models. In this way, we shed more light on a potential synergy of the GRACE and GOCE satellite missions. This is important not only for the best possible mass transport modeling on the basis of all available data, but also for the optimal planning of future satellite gravity missions.
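The Wiener filtering step referred to above has a compact closed form: given signal and noise covariance matrices S and N, the filtered solution is S (S + N)^{-1} x. A generic numerical sketch (the covariances here are toy values, not GRACE-derived):

```python
import numpy as np

def wiener_filter(x, S, N):
    # Wiener filtering of an unconstrained solution vector x:
    #   x_filtered = S @ inv(S + N) @ x,
    # where S and N are the signal and noise covariance matrices.
    return S @ np.linalg.solve(S + N, x)
```

With diagonal covariances this reduces to damping each coefficient by S_ii / (S_ii + N_ii): coefficients dominated by noise are suppressed, while well-determined ones pass through nearly unchanged.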
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by the passband error, stopband error, transition band error, peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
NASA Astrophysics Data System (ADS)
Zuccaro, G.; Lapenta, G.; Ferrero, F.; Maizza, G.
2011-02-01
In diesel particulate filter technology, a key aspect is the properties of the particulate matter collected inside the filter structure. The work presented here focuses on the development of an innovative mathematical tool based on the particle-in-cell (PIC) method for the simulation of the soot distribution inside a single channel of a diesel particulate filter. The basic fluid dynamic equations are solved for the gas phase inside the channel using a novel technique based on the solution of the same set of equations everywhere in the system, including the porous medium. This approach is presented as an alternative to the more conventional methods of matching conditions across the boundary of the porous region, where a Darcy-like flow develops. The motion of the soot particles is instead described through a particle-by-particle approach based on Newton's equations of motion. The coupling between the dynamics of the gas and that of the soot particles, i.e. between these two sub-models, is performed through the implementation of the particle-in-cell technique. This model allows the detailed simulation of the deposition and compaction of the soot inside the filter channels and its characterization in terms of density, permeability and thickness. The model thus represents a unique tool for the optimization of the design of diesel particulate filters. The details of the technique's implementation and some paradigmatic examples are shown.
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
Yang, Juan; Stewart, Marc; Maupin, Gary D.; Herling, Darrell R.; Zelenyuk, Alla
2009-04-15
Diesel offers higher fuel efficiency but produces more exhaust particulate matter. Diesel particulate filters are presently the most efficient means to reduce these emissions. These filters typically trap particles in two basic modes: at the beginning of the exposure cycle the particles are captured in the pores of the filter walls, and at longer times the particles form a "cake" on which further particles are trapped. Eventually the "cake" is removed by oxidation and the cycle is repeated. We have investigated the properties and behavior of two commonly used filters, silicon carbide (SiC) and cordierite (DuraTrap® RC), by exposing them to nearly-spherical ammonium sulfate particles. We show that the transition from deep-bed filtration to "cake" filtration can easily be identified by recording the change in pressure across the filters as a function of exposure. We investigated the performance of these filters as a function of flow rate and particle size. The filters trap small and large particles more efficiently than particles that are ~80 to 200 nm in aerodynamic diameter. A comparison between the experimental data and a simulation using an incompressible lattice-Boltzmann model shows very good qualitative agreement, but the model overpredicts the filter's trapping efficiency.
PARTICLE REMOVAL AND HEAD LOSS DEVELOPMENT IN BIOLOGICAL FILTERS
The physical performance of granular media filters was studied under pre-chlorinated, backwash-chlorinated, and nonchlorinated conditions. Overall, biological filtration produced a high-quality water. Although effluent turbidities showed little difference between the perform...
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is a common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
NASA Astrophysics Data System (ADS)
Chen, Jing; Liu, Tundong; Jiang, Hao
2016-01-01
A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimization objective, the proposed method establishes a multi-objective model by taking two design objectives into account: minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating the elitist non-dominated sorting genetic algorithm (NSGA-II) and the technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for candidate solutions in terms of both objectives. The obtained results are provided as a Pareto front. Subsequently, the best compromise solution is determined from the Pareto front by the TOPSIS method according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation while the dispersion spectra of the designed filter are optimized simultaneously.
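The TOPSIS step used to pick the compromise solution from the Pareto front is straightforward to sketch; the three candidate (index modulation, dispersion error) pairs and the equal weights below are hypothetical, for illustration only.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.

    matrix:  alternatives x criteria scores
    weights: importance of each criterion
    benefit: True where larger is better, False where smaller is better
    """
    m = np.asarray(matrix, float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)       # closeness in [0, 1]; larger is better

# Hypothetical Pareto front for an FBG filter: (index modulation, dispersion
# error) pairs, both to be minimized.
front = [[0.8e-4, 12.0], [1.0e-4, 6.0], [1.4e-4, 2.0]]
closeness = topsis(front, weights=[0.5, 0.5], benefit=[False, False])
best = int(np.argmax(closeness))
```

In the paper the weights would encode the decision maker's preference between index modulation and dispersion error.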
Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V
2015-01-01
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690
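The "naive combination" baseline that PIAF is compared against can be sketched for a scalar system: a Kalman filter that trusts the current forward-model estimate, interleaved with a recursive least squares (RLS) update of that estimate from the filtered states. All system values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 0.9            # true forward-model coefficient (unknown to the filter)
q, r = 0.05, 0.5        # process and observation noise variances

# Simulate x_t = a*x_{t-1} + drive + noise, observed in Gaussian noise.
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t-1] + rng.normal(0, np.sqrt(q)) + 1.0  # constant "motor" drive
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

a_hat, P_rls = 0.0, 100.0     # RLS model estimate and its "covariance"
x_hat, P = 0.0, 1.0           # Kalman state estimate and variance
for t in range(1, T):
    # Kalman predict/update using the *estimated* forward model.
    x_pred = a_hat * x_hat + 1.0
    P_pred = a_hat**2 * P + q
    k = P_pred / (P_pred + r)
    x_prev = x_hat
    x_hat = x_pred + k * (y[t] - x_pred)
    P = (1 - k) * P_pred
    # RLS update of a_hat from consecutive filtered states.
    phi = x_prev
    g = P_rls * phi / (1.0 + phi * P_rls * phi)
    a_hat += g * (x_hat - 1.0 - a_hat * phi)
    P_rls -= g * phi * P_rls
```

Because the RLS regressor is itself a filtered estimate, the two loops are coupled; this is exactly the situation in which the paper shows that a Bayes-optimal treatment of model uncertainty avoids the delusional loop and learns faster.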
Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates
NASA Astrophysics Data System (ADS)
Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.
2015-12-01
Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized at the optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N, and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.
Pilz, T.
1995-12-31
For power generation with combined cycles or the production of so-called advanced materials by vapor-phase synthesis, particle separation at high temperatures is of crucial importance. There, systems working with rigid ceramic barrier filters are either of thermodynamic benefit to the process or essential for producing materials with certain properties. A hot gas filter test rig has been installed to investigate the influence of different parameters, e.g. temperature, dust properties, filter media, and filtration and regeneration conditions, on particle separation at high temperatures. These tests were conducted both with commonly used filter candles and with filter discs made of the same material. The filter disc is mounted at one side of the test rig, so that both filters face the same raw gas conditions. The filter disc is traversed by the gas in a cross-flow arrangement. This is based on the conviction that, for a comparison of the filtration characteristics of candles with filter discs or other model filters, the structure of the dust cakes has to be equal. This way of investigating the influence of the above-mentioned parameters on dust separation at high temperatures follows the new standard VDI 3926, which prescribes test procedures for the characterization of filter media at ambient conditions. The paper mainly focuses on the influence of particle properties (e.g. stickiness) upon the filtration and regeneration behavior of fly ashes with rigid ceramic filters.
Ultrafine particle removal by residential heating, ventilating, and air-conditioning filters.
Stephens, B; Siegel, J A
2013-12-01
This work uses an in situ filter test method to measure the size-resolved removal efficiency of indoor-generated ultrafine particles (approximately 7-100 nm) for six new commercially available filters installed in a recirculating heating, ventilating, and air-conditioning (HVAC) system in an unoccupied test house. The fibrous HVAC filters were previously rated by the manufacturers according to ASHRAE Standard 52.2 and ranged from shallow (2.5 cm) fiberglass panel filters (MERV 4) to deep-bed (12.7 cm) electrostatically charged synthetic media filters (MERV 16). Measured removal efficiency ranged from 0 to 10% for most ultrafine particle (UFP) sizes with the lowest rated filters (MERV 4 and 6) to 60-80% for most UFP sizes with the highest rated filter (MERV 16). The deeper bed filters generally achieved higher removal efficiencies than the panel filters, while maintaining a low pressure drop and higher airflow rate in the operating HVAC system. Assuming constant efficiency, a modeling effort using these measured values for new filters and other inputs from real buildings shows that MERV 13-16 filters could reduce the indoor proportion of outdoor UFPs (in the absence of indoor sources) by as much as a factor of 2-3 in a typical single-family residence relative to the lowest efficiency filters, depending in part on particle size. PMID:23590456
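The factor-of-2-3 claim follows from a steady-state single-zone mass balance; a minimal sketch with hypothetical (but typical) infiltration, deposition, and recirculation parameters:

```python
# Steady-state indoor/outdoor (I/O) ratio of outdoor particles for a single
# zone: I/O = penetration*infiltration / (infiltration + deposition + filtration).
# All parameter values are illustrative, not taken from the paper.
def io_ratio(eta_filter, ach=0.5, p_env=0.8, k_dep=0.5, recirc_ach=5.0, runtime=0.5):
    """ach: infiltration air changes per hour; p_env: envelope penetration;
    k_dep: deposition rate (1/h); recirc_ach: HVAC recirculation rate (1/h);
    runtime: HVAC duty fraction; eta_filter: filter removal efficiency."""
    filt = runtime * recirc_ach * eta_filter     # effective filtration rate (1/h)
    return p_env * ach / (ach + k_dep + filt)

low = io_ratio(0.05)    # roughly MERV 4-like UFP efficiency
high = io_ratio(0.70)   # roughly MERV 16-like UFP efficiency
factor = low / high
```

With these illustrative inputs, upgrading from a ~5% to a ~70% efficient filter cuts the indoor proportion of outdoor UFPs by roughly 2.4x, consistent with the reported range.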
Multi-path light extinction approach for high efficiency filtered oil particle measurement
NASA Astrophysics Data System (ADS)
Pengfei, Yin; Jun, Chen; Huinan, Yang; Lili, Liu; Xiaoshu, Cai
2014-04-01
This work presents a multi-path light extinction approach to determine oil mist filter efficiency based on measuring the concentration and size distribution of oil particles. The light extinction spectrum (LES) technique was used to retrieve the oil particle size distribution and concentration. A multi-path measuring cell was designed to measure the low concentrations and fine particles present after filtering. The path length of the measuring cell was calibrated as 200 cm. The results for oil particle size with oil mist filtering were D32 = 0.9 μm and Cv = 1.6×10^-8.
Optimization of the performances of correlation filters by pre-processing the input plane
NASA Astrophysics Data System (ADS)
Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.
2016-01-01
We report findings on the optimization of the performance of correlation filters. First, we propose and validate an optimization of ROC curves adapted to the correlation technique. Our analysis then suggests that a pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face recognition applications. Our results demonstrate that this method is remarkably efficient at increasing the performance of a VanderLugt correlator.
Continuous and Discrete Space Particle Filters for Predictions in Acoustic Positioning
NASA Astrophysics Data System (ADS)
Bauer, Will; Kim, Surrey; Kouritzin, Michael A.
2002-12-01
Predicting the future state of a random dynamic signal based on corrupted, distorted, and partial observations is vital for proper real-time control of a system that includes time delay. Motivated by problems from Acoustic Positioning Research Inc., we consider the continual automated illumination of an object moving within a bounded domain, which requires object location prediction due to inherent mechanical and physical time lags associated with robotic lighting. Quality computational predictions demand high fidelity models for the coupled moving object signal and observation equipment pair. In our current problem, the signal represents the vector position, orientation, and velocity of a stage performer. Acoustic observations are formed by timing ultrasonic waves traveling from four perimeter speakers to a microphone attached to the performer. The goal is to schedule lighting movements that are coordinated with the performer by anticipating his/her future position based upon these observations using filtering theory. Particle system based methods have experienced rapid development and have become an essential technique of contemporary filtering strategies. Hitherto, researchers have largely focused on continuous state particle filters, ranging from traditional weighted particle filters to adaptive refining particle filters, readily able to perform path-space estimation and prediction. Herein, we compare the performance of a state-of-the-art refining particle filter to that of a novel discrete-space particle filter on the acoustic positioning problem. By discrete space particle filter we mean a Markov chain that counts particles in discretized cells of the signal state space in order to form an approximated unnormalized distribution of the signal state. 
For both filters mentioned above, we will examine issues like the mean time to localize a signal, the fidelity of filter estimates at various signal to noise ratios, computational costs, and the effect of signal fading; furthermore, we will provide visual demonstrations of filter performance.
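A continuous-state weighted particle filter of the traditional kind mentioned above can be sketched in a few lines for a toy one-dimensional random-walk signal (the acoustic-positioning state is of course higher-dimensional and includes orientation and velocity).

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap (weighted, resampling) particle filter for a 1D random walk
    observed in Gaussian noise: x_t = x_{t-1} + N(0,q), y_t = x_t + N(0,r)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for obs in y:
        particles = particles + rng.normal(0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (obs - particles) ** 2 / r                        # weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        estimates.append(np.sum(w * particles))                         # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)                 # resample
        particles = particles[idx]
    return np.array(estimates)

# Track a synthetic signal.
rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(0, np.sqrt(0.1), 200))
y = x + rng.normal(0, np.sqrt(0.5), 200)
x_hat = bootstrap_pf(y)
```

A discrete-space particle filter in the sense of the abstract would instead count particles in discretized cells of the state space and evolve those counts as a Markov chain.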
Optimal digital filters for long-latency components of the event-related brain potential.
Farwell, L A; Martinerie, J M; Bashore, T R; Rapp, P E; Goddard, P H
1993-05-01
A fundamentally important problem for cognitive psychophysiologists is the selection of an appropriate off-line digital filter to extract signal from noise in the event-related brain potential (ERP) recorded at the scalp. Investigators in the field typically use a type of finite impulse response (FIR) filter known as a moving-average or boxcar filter to achieve this end. However, this type of filter can produce significant amplitude diminution and distortion of the shape of the ERP waveform. Thus, there is a need to identify more appropriate filters. In this paper, we compare the performance of boxcar filters with that of another type of FIR filter that, unlike the boxcar filter, is designed with an optimizing algorithm that reduces signal distortion and maximizes signal extraction (referred to here as an optimal FIR filter). We applied several different filters of both types to ERP data containing the P300 component. This comparison revealed that boxcar filters reduced the contribution of high-frequency noise to the ERP but in so doing produced a substantial attenuation of P300 amplitude and, in some cases, substantial distortions of the shape of the waveform, resulting in significant errors in latency estimation. In contrast, the optimal FIR filters preserved P300 amplitude, morphology, and latency and also eliminated high-frequency noise more effectively than did the boxcar filters. The implications of these results for data acquisition and analysis are discussed. PMID:8497560
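The boxcar's amplitude attenuation is easy to reproduce numerically. In the sketch below a Gaussian bump stands in for the P300 and a windowed-sinc lowpass stands in for the optimal FIR design (the paper's actual optimizing algorithm is not reproduced here); the sampling rate, window lengths, and cutoff are illustrative.

```python
import numpy as np

fs = 250.0                                             # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
p300 = np.exp(-0.5 * ((t - 1.0) / 0.05) ** 2)          # Gaussian stand-in for P300
noise = 0.5 * np.sin(2 * np.pi * 50 * t)               # high-frequency noise
signal = p300 + noise

# Boxcar (moving-average) filter, 100 ms window.
box = np.ones(25) / 25
box_out = np.convolve(signal, box, mode="same")

# Windowed-sinc lowpass FIR (15 Hz cutoff): flat passband preserves amplitude.
n, fc = 101, 15.0
k = np.arange(n) - (n - 1) / 2
taps = np.sinc(2 * fc / fs * k) * np.hamming(n)
taps /= taps.sum()                                     # unit DC gain
fir_out = np.convolve(signal, taps, mode="same")
```

The 100 ms boxcar nulls the 50 Hz noise but flattens the bump's peak by roughly 15%, while the flat-passband lowpass removes the noise with the peak essentially intact.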
Particle filtering methods for georeferencing panoramic image sequence in complex urban scenes
NASA Astrophysics Data System (ADS)
Ji, Shunping; Shi, Yun; Shan, Jie; Shao, Xiaowei; Shi, Zhongchao; Yuan, Xiuxiao; Yang, Peng; Wu, Wenbin; Tang, Huajun; Shibasaki, Ryosuke
2015-07-01
Georeferencing image sequences is critical for mobile mapping systems. Traditional methods such as bundle adjustment need adequate and well-distributed ground control points (GCPs) when accurate GPS data are not available in complex urban scenes. For large-area applications, automatic extraction of GCPs by matching vehicle-borne image sequences with geo-referenced ortho-images is a better choice than intensive GCP collection by field surveying. However, such image-matching-generated GCPs are highly noisy, especially in complex urban street environments, due to shadows, occlusions and moving objects in the ortho-images. This study presents a probabilistic solution that integrates matching and localization under one framework. First, a probabilistic and global localization model is formulated based on Bayes' rule and a Markov chain. Unlike many conventional methods, our model can accommodate non-Gaussian observations. In the next step, a particle filtering method is applied to determine this model under highly noisy GCPs. Owing to the multiple-hypothesis tracking represented by diverse particles, the method can balance the strength of geometric and radiometric constraints, i.e., drifted motion models and noisy GCPs, and guarantee an approximately optimal trajectory. Tests were carried out with thousands of mobile panoramic images and aerial ortho-images. Compared with conventional extended Kalman filtering and a global registration method, the proposed approach succeeds even with more than 80% gross errors in the GCPs and reaches an accuracy equivalent to traditional bundle adjustment with dense and precise control.
Cosmological parameter estimation using particle swarm optimization
NASA Astrophysics Data System (ADS)
Prasad, Jayanti; Souradeep, Tarun
2012-06-01
Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data, using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option in certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there may be cases in which we are mainly interested in finding the point in the parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (ΛCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435
Stillo, Andrew; Ricketts, Craig I.
2013-07-01
High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements, including specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance, and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity when subjected to severe operating conditions, such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. 
The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK[1]. (authors)
NASA Astrophysics Data System (ADS)
Khuzhayorov, B. Kh.
2011-11-01
Equations of filtration of suspensions forming an incompressible cake of particles on the surface of a filter, with simultaneous passage of a certain share of the particles from the cake into the filter's pore space and onward into the filtrate region, are derived from the principles of the mechanics of multiphase media. The influence of the travel of particles within the cake and the filter on the dynamics of growth of the cake bed is investigated. An analysis of the derived dynamic filtration equations shows that allowing for the travel and accumulation of particles in the cake and the filter causes their total filtration resistance, in particular the resistance in the inertial component of the filtration law, to decrease.
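For orientation, the classical constant-pressure cake filtration model (which the paper generalizes by allowing particle transfer out of the cake) can be integrated directly; all material constants below are illustrative.

```python
# Classical dead-end cake filtration at constant pressure drop: the filtrate
# flow rate is dV/dt = A*dP / (mu*(Rm + alpha*c*V/A)), where the cake
# resistance grows in proportion to the deposited mass. Values illustrative.
mu = 1e-3        # filtrate viscosity (Pa s)
dP = 1e5         # applied pressure drop (Pa)
A = 1e-2         # filter area (m^2)
Rm = 1e10        # clean-medium resistance (1/m)
alpha = 1e11     # specific cake resistance (m/kg)
c = 5.0          # deposited mass per unit filtrate volume (kg/m^3)

# Forward-Euler integration of the filtrate volume V over T seconds.
dt, T = 0.01, 600.0
V = 0.0
for _ in range(int(T / dt)):
    V += dt * A * dP / (mu * (Rm + alpha * c * V / A))

# Analytic check: t = mu/(A*dP) * (Rm*V + alpha*c*V**2/(2*A))
t_analytic = mu / (A * dP) * (Rm * V + alpha * c * V ** 2 / (2 * A))
```

The quadratic analytic relation recovers the integration time, confirming the parabolic growth of cake resistance with filtrate volume; re-entrainment of particles from the cake, as analyzed in the paper, lowers this resistance.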
The Optimal Design of Weighted Order Statistics Filters by Using Support Vector Machines
NASA Astrophysics Data System (ADS)
Yao, Chih-Chia; Yu, Pao-Ta
2006-12-01
Support vector machines (SVMs), a classification algorithm from the machine learning community, have been shown to provide higher performance than traditional learning machines. In this paper, the technique of SVMs is introduced into the design of weighted order statistics (WOS) filters. WOS filters are highly effective in processing digital signals because they have a simple window structure. However, due to threshold decomposition and the stacking property, existing design methods for WOS filters suffer from high design complexity and estimation error. This paper proposes a new design technique which improves the learning speed and reduces the complexity of designing WOS filters. This technique uses a dichotomous approach to reduce the Boolean functions from 255 levels to two levels, which are separated by an optimal hyperplane. Furthermore, the optimal hyperplane is obtained by using the technique of SVMs. Our proposed method approximates the optimal weighted order statistics filters more rapidly than adaptive neural filters.
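The definition of a WOS filter, duplicating each window sample according to its weight and taking a given order statistic, is compact enough to state directly in code; the window size, weights, and rank below are illustrative.

```python
import numpy as np

def wos_filter(x, weights, t):
    """Weighted order statistic filter: each window sample x_i is duplicated
    weights[i] times; the output is the t-th smallest of the duplicated list."""
    w = np.asarray(weights, int)
    n = len(w)
    pad = n // 2
    xp = np.pad(x, pad, mode="edge")          # replicate edges for the window
    out = np.empty(len(x), dtype=x.dtype)
    for i in range(len(x)):
        window = xp[i:i + n]
        dup = np.repeat(window, w)            # duplicate by integer weights
        out[i] = np.sort(dup)[t - 1]          # t-th order statistic
    return out

# Unit weights with the middle rank reduce to a standard median filter.
x = np.array([1.0, 1.0, 9.0, 1.0, 1.0, 2.0, 2.0, 9.0, 2.0])  # impulsive 9s
y = wos_filter(x, weights=[1, 1, 1], t=2)
```

The toy example removes the impulsive 9s while leaving the step between the 1s and 2s intact, which is the behavior the SVM-based design procedure tunes via the weights and the rank.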
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing
2014-12-01
Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
NASA Astrophysics Data System (ADS)
Zaugg, David A.; Samuel, Alphonso A.; Waagen, Donald E.; Schmitt, Harry A.
2004-07-01
Bearings-only tracking is widely used in the defense arena. Its value can be exploited in systems using optical sensors and sonar, among others. Non-linearity and non-Gaussian prior statistics are among the complications of bearings-only tracking. Several filters have been used to overcome these obstacles, including particle filters and multiple hypothesis extended Kalman filters (MHEKF). Particle filters can accommodate a wide range of distributions and do not need to be linearized. Because of this they seem ideally suited for this problem. A MHEKF can only approximate the prior distribution of a bearings-only tracking scenario and needs to be linearized. However, the likelihood distribution maintained for each MHEKF hypothesis demonstrates significant memory and lends stability to the algorithm, potentially enhancing tracking convergence. Also, the MHEKF is insensitive to outliers. For the scenarios under investigation, the sensor platform is tracking a moving and a stationary target. The sensor is allowed to maneuver in an attempt to maximize tracking performance. For these scenarios, we compare and contrast the acquisition time and mean-squared tracking error performance characteristics of particle filters and MHEKF via Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul
2015-03-01
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO is more accurate in estimating the filter parameters than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is thus preferred where accuracy is more essential than convergence speed.
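A minimal TLBO sketch shows why no algorithm-specific parameters are needed beyond population size and iteration count; the two-coefficient quadratic "plant" below is a hypothetical stand-in for the IIR identification problem.

```python
import numpy as np

def tlbo_minimize(f, bounds, n_learners=20, iters=100, seed=0):
    """Minimal teaching-learning-based optimization (teacher + learner phases)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_learners, dim))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        teacher = X[np.argmin(F)]
        mean = X.mean(axis=0)
        for i in range(n_learners):
            # Teacher phase: move toward the teacher, away from the class mean.
            TF = rng.integers(1, 3)              # teaching factor in {1, 2}
            new = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean), lo, hi)
            fn = f(new)
            if fn < F[i]:
                X[i], F[i] = new, fn
            # Learner phase: learn from a randomly chosen classmate.
            j = rng.integers(n_learners)
            if j != i:
                step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
                new = np.clip(X[i] + rng.random(dim) * step, lo, hi)
                fn = f(new)
                if fn < F[i]:
                    X[i], F[i] = new, fn
    return X[np.argmin(F)], F.min()

# Identify two "filter coefficients" by minimizing squared error to a target.
target = np.array([0.4, -0.3])
best, err = tlbo_minimize(lambda p: np.sum((p - target) ** 2),
                          (np.full(2, -1.0), np.full(2, 1.0)))
```

In the paper's setting the objective would instead be the error between the outputs of the candidate IIR filter and the unknown plant for a common input sequence.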
Goodarz Ahmadi
2002-07-01
In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel, and makes use of Lagrangian trajectory analysis for the particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power System Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone that removes a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered. These include a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of the larger particles in the filter cake leads to lower mechanical strength, thus allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold flow condition was designed and fabricated. A laser-based flow visualization technique was used and the gas flow condition in the laboratory-scale vessel was studied experimentally. A computer model for the experimental vessel was also developed and the gas flow and particle transport patterns were evaluated.
A particle filtering approach for spatial arrival time tracking in ocean acoustics.
Jain, Rashi; Michalopoulou, Zoi-Heleni
2011-06-01
The focus of this work is on arrival time and amplitude estimation from acoustic signals recorded at spatially separated hydrophones in the ocean. A particle filtering approach is developed that treats arrival times as "targets" and tracks their "location" across receivers, also modeling arrival time gradient. The method is evaluated via Monte Carlo simulations and is compared to a maximum likelihood estimator, which does not relate arrivals at neighboring receivers. The comparison demonstrates a significant advantage in using the particle filter. It is also shown that posterior probability density functions of times and amplitudes become readily available with particle filtering. PMID:21682358
MCMC-based particle filtering for tracking a variable number of interacting targets.
Khan, Zia; Balch, Tucker; Dellaert, Frank
2005-11-01
We describe a particle filter that effectively deals with interacting targets--targets that are influenced by the proximity and/or behavior of other targets. The particle filter includes a Markov random field (MRF) motion prior that helps maintain the identity of targets throughout an interaction, significantly reducing tracker failures. We show that this MRF prior can be easily implemented by including an additional interaction factor in the importance weights of the particle filter. However, the computational requirements of the resulting multitarget filter render it unusable for large numbers of targets. Consequently, we replace the traditional importance sampling step in the particle filter with a novel Markov chain Monte Carlo (MCMC) sampling step to obtain a more efficient MCMC-based multitarget filter. We also show how to extend this MCMC-based filter to address a variable number of interacting targets. Finally, we present both qualitative and quantitative experimental results, demonstrating that the resulting particle filters deal efficiently and effectively with complicated target interactions. PMID:16285378
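The MRF interaction factor described above enters the importance weights multiplicatively. A minimal, hypothetical 1-D sketch (not the authors' tracker) of such a pairwise penalty and its effect on a two-target particle set:

```python
import math
import random

random.seed(1)

def interaction_factor(positions, strength=2.0, radius=1.0):
    # MRF pairwise prior: penalize joint states whose targets come closer
    # than `radius`, discouraging two tracks from coalescing on one target
    f = 1.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = abs(positions[i] - positions[j])
            if d < radius:
                f *= math.exp(-strength * (radius - d))
    return f

def likelihood(pos, obs, sigma=0.5):
    return math.exp(-0.5 * ((pos - obs) / sigma) ** 2)

obs = [0.0, 0.4]  # two nearby measurements (hypothetical 1-D scene)
particles = [[random.gauss(0.0, 1.0), random.gauss(0.4, 1.0)] for _ in range(500)]
# joint importance weight = product of per-target likelihoods * MRF factor
weights = [likelihood(p[0], obs[0]) * likelihood(p[1], obs[1]) * interaction_factor(p)
           for p in particles]
total = sum(weights)
weights = [w / total for w in weights]
est = [sum(w * p[k] for w, p in zip(weights, particles)) for k in (0, 1)]
print(est)
```

With the penalty active, joint hypotheses in which the two targets overlap are down-weighted, so the posterior means stay separated even though the measurements are close.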
Farrow structure implementation of fractional delay filter optimal in Chebyshev sense
NASA Astrophysics Data System (ADS)
Blok, Marek
2006-03-01
In this paper the problem of variable delay filter implementation based on the Farrow structure is discussed. The idea of such an implementation is to calculate, for each required delay, the coefficients of the fractional delay filter impulse response using delay-independent polynomials. This approach leads to a significant decrease of computational costs in applications which require frequent delay changes. The achieved reduction in computational complexity is especially important for recursive optimal filter design methods. In this paper we demonstrate that the quality and properties of fractional delay filters optimal in the Chebyshev sense can be retained even for low orders of the Farrow structure.
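The core idea can be sketched with the lowest-order case, where the two Farrow branches implement linear interpolation; Chebyshev-optimal designs use more branches with optimized FIR coefficients, but the delay-evaluation structure is identical. All parameters below are illustrative.

```python
import numpy as np

# Farrow structure sketch: each branch is a fixed FIR filter; the output for a
# given fractional delay d is a polynomial in d evaluated over the branch
# outputs (Horner's rule), so changing d requires no filter redesign.
branches = [np.array([1.0, 0.0]),    # c0 branch: passes x[n]
            np.array([-1.0, 1.0])]   # c1 branch: x[n-1] - x[n]

def farrow_delay(x, d):
    outs = [np.convolve(x, c)[:len(x)] for c in branches]
    y = outs[-1].copy()
    for o in reversed(outs[:-1]):    # Horner evaluation in d
        y = y * d + o
    return y                         # = (1-d)*x[n] + d*x[n-1] here

n = np.arange(40)
x = np.sin(2 * np.pi * 0.03 * n)     # slowly varying test signal
y = farrow_delay(x, 0.5)             # delay by half a sample
print(y[:5])
```

The branch filters are computed once at design time; at run time a new delay only changes the scalar d in the Horner evaluation, which is the source of the computational savings mentioned above.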
An optimal modification of a Kalman filter for time scales
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2003-01-01
The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
NASA Astrophysics Data System (ADS)
Fu, Jack; Khoury, Jehad; Cronin-Golomb, Mark; Woods, Charles L.
1995-01-01
Computer simulations of photorefractive thresholding filters for the reduction of artifact or dust noise demonstrate an increase in signal-to-noise ratio (SNR) of 70% to 95%, respectively, of that provided by the Wiener filter for inputs with an SNR of approximately 3. These simple, nearly optimal filters use a spectral thresholding profile that is proportional to the envelope of the noise spectrum. Alternative nonlinear filters with either 1/ν or constant thresholding profiles increase the SNR almost as much as the noise-envelope thresholding filter.
Optease Vena Cava Filter Optimal Indwelling Time and Retrievability
Rimon, Uri; Bensaid, Paul; Golan, Gil; Garniek, Alexander; Khaitovich, Boris; Dotan, Zohar; Konen, Eli
2011-06-15
The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 in patients with venous thromboembolic (VTE) disease. Filter retrieval was attempted in 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years]. They were divided into two groups to compare the change in retrieval policy over the years: group A, 60 patients with filter retrievals performed before December 31, 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days: 18 (range, 7-55) days in group A and 31 (range, 8-122) days in group B. There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of the retrieval failures was 16 (range, 15-18) days in group A and 54 (range, 17-122) days in group B. Mean fluoroscopy time was 3.5 (range, 1-16.6) min for successful retrievals and 25.2 (range, 7.2-62) min for retrieval failures. Attempts to retrieve the Optease filter can be made up to 60 days after insertion, but more failures will be encountered with this approach.
NASA Astrophysics Data System (ADS)
Mattern, Jann Paul; Dowd, Michael; Fennel, Katja
2013-05-01
We assimilate satellite observations of surface chlorophyll into a three-dimensional biological ocean model in order to improve its state estimates using a particle filter referred to as sequential importance resampling (SIR). Particle filters represent an alternative to other, more commonly used ensemble-based state estimation techniques like the ensemble Kalman filter (EnKF). Unlike the EnKF, particle filters do not require normality assumptions about the model error structure and are thus suitable for highly nonlinear applications. However, their application in oceanographic contexts is typically hampered by the high dimensionality of the model's state space. We apply SIR to a high-dimensional model with a small ensemble size (20) and modify the standard SIR procedure to avoid complications posed by the high dimensionality of the model state. Two extensions to SIR are a simple smoother to deal with outliers in the observations, and state augmentation, which provides the SIR with parameter memory. Our goal is to test the feasibility of biological state estimation with SIR for realistic models. For this purpose we compare the SIR results to a model simulation with optimal parameters with respect to the same set of observations. By running replicates of our main experiments, we assess the robustness of our SIR implementation. We show that SIR is suitable for satellite data assimilation into biological models and that both extensions, the smoother and state augmentation, are required for robust results and improved fit to the observations.
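The generic SIR cycle (forecast, weight, resample) can be sketched on a toy scalar model; the study's application replaces the toy dynamics with the three-dimensional biological ocean model and the Gaussian likelihood with the chlorophyll observation operator. The deliberately small ensemble size mirrors the study; all other numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate(x):
    # stand-in for the model forecast step, with additive process noise
    return 0.9 * x + rng.normal(0.0, 0.3, size=x.shape)

def likelihood(x, y, sigma=0.5):
    # stand-in for the observation operator + Gaussian observation error
    return np.exp(-0.5 * ((x - y) / sigma) ** 2)

n_part = 20                                  # small ensemble, as in the study
particles = rng.normal(0.0, 1.0, n_part)
truth = 2.0
estimates = []
for _ in range(30):
    truth = 0.9 * truth                      # synthetic truth trajectory
    obs = truth + rng.normal(0.0, 0.5)       # noisy observation
    particles = propagate(particles)         # 1) forecast each particle
    w = likelihood(particles, obs)           # 2) weight by the observation
    w /= w.sum()
    estimates.append(np.sum(w * particles))  # weighted posterior mean
    # 3) multinomial resampling: duplicate high-weight particles
    particles = particles[rng.choice(n_part, size=n_part, p=w)]
print(estimates[-1])
```

The resampling step is what keeps the small ensemble from degenerating onto a single heavily weighted particle, which is the main practical concern in high-dimensional applications.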
Optimization of Al Matrix Reinforced with B4C Particles
NASA Astrophysics Data System (ADS)
Shabani, Mohsen Ostad; Mazahery, Ali
2013-02-01
In the current study, the abrasive wear resistance and mechanical properties of an A356 composite reinforced with B4C particulates were investigated. A center particle swarm optimization algorithm (CenterPSO) is proposed to predict the optimal process conditions in the fabrication of aluminum matrix composites. Unlike the ordinary particles, the center particle has no explicit velocity and is set to the center of the swarm at every iteration. Other aspects of the center particle, such as fitness evaluation and competition for the best particle of the swarm, are the same as for an ordinary particle. Because the center of the swarm is a promising position, the center particle generally attains a good fitness value. More importantly, due to its frequent appearance as the best particle of the swarm, it often attracts other particles and guides the search direction of the whole swarm.
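Our reading of the center-particle idea can be sketched as follows: a standard PSO swarm plus one velocity-free particle pinned to the swarm centroid, which competes with the ordinary particles for the global best. The sphere objective and all settings are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 15, 4, 100
w, c1, c2 = 0.6, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), sphere(pos)

def global_best():
    # the center particle has no velocity update: it is simply the swarm
    # centroid, but it competes with every ordinary particle for global best
    center = pos.mean(axis=0)
    i = int(pbest_f.argmin())
    if sphere(center) < pbest_f[i]:
        return center.copy()
    return pbest[i].copy()

gbest = global_best()
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = sphere(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = global_best()

print(sphere(gbest))
```

Because the centroid of a converging swarm tends to sit near the basin of attraction, the center particle frequently wins the global-best competition and steers the remaining particles, which is the behavior the abstract describes.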
Optimized digital filtering techniques for radiation detection with HPGe detectors
NASA Astrophysics Data System (ADS)
Salathe, Marco; Kihm, Thomas
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
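The trapezoidal shaping idea (though not GEANA's modified filter) can be sketched as the cascade of two moving-average filters of different lengths, whose flat-topped response makes the pulse-height estimate tolerant of small ballistic deficits. The shaping parameters and pulse below are invented.

```python
import numpy as np

# Trapezoidal shaping sketch: convolving two box (moving-average) filters of
# different lengths yields a trapezoidal weighting function; its flat top is
# what makes the amplitude estimate insensitive to small ballistic deficits.
rise, top = 8, 4                       # hypothetical shaping parameters (samples)
box1 = np.ones(rise) / rise
box2 = np.ones(rise + top) / (rise + top)
trap = np.convolve(box1, box2)         # trapezoidal kernel with unit DC gain

signal = np.zeros(100)
signal[30:] = 5.0                      # idealized step pulse of height 5
shaped = np.convolve(signal, trap)[:len(signal)]
print(shaped.max())                    # plateau height recovers the amplitude
```

Once the kernel fully overlaps the step, the output sits on a plateau equal to the step height, so the pulse height can be read anywhere on the flat top rather than at a single sharp peak.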
Mukhopadhyay, Somparna; Hazra, Lakshminarayan
2015-11-01
Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique. PMID:26560575
Research on a Lamb Wave and Particle Filter-Based On-Line Crack Propagation Prognosis Method.
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Cai, Jian; Yang, Weibo
2016-01-01
Prognostics and health management techniques have drawn widespread attention due to their ability to facilitate maintenance activities based on need. On-line prognosis of fatigue crack propagation can offer information for optimizing operation and maintenance strategies in real-time. This paper proposes a Lamb wave-particle filter (LW-PF)-based method for on-line prognosis of fatigue crack propagation which takes advantage of the possibility of on-line monitoring to evaluate the actual crack length and uses a particle filter to deal with the crack evolution and monitoring uncertainties. The piezoelectric transducers (PZTs)-based active Lamb wave method is adopted for on-line crack monitoring. The state space model relating to crack propagation is established by the data-driven and finite element methods. Fatigue experiments performed on hole-edge crack specimens have validated the advantages of the proposed method. PMID:26950130
NASA Astrophysics Data System (ADS)
Shmaliy, Yuriy S.; Ibarra-Manzano, Oscar
2012-12-01
We address p-shift finite impulse response optimal (OFIR) and unbiased (UFIR) algorithms for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a discrete point n over N neighboring points. The algorithms were designed for linear time-invariant state-space signal models with white Gaussian noise. The OFIR filter self-determines the initial mean square state function by solving the discrete algebraic Riccati equation. The UFIR filter, represented in both batch and iterative Kalman-like forms, requires neither the noise covariances nor the initial errors. An example of application is given for smoothing and predictive filtering of a two-state polynomial model. Based upon this example, we show that exact optimality is redundant when N ≫ 1 and that a good suboptimal estimate can still be provided by a UFIR filter at a much lower cost.
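For a polynomial state model, the batch UFIR estimate reduces to ordinary least squares on the model basis over the latest N samples, with no noise statistics or initial conditions required. A sketch for a two-state (offset + slope) model, with invented data:

```python
import numpy as np

rng = np.random.default_rng(4)

def ufir_ramp(y):
    # Batch unbiased FIR estimate for a two-state polynomial (ramp) model:
    # plain least squares over the window -- no covariances, no priors.
    N = len(y)
    H = np.column_stack([np.ones(N), np.arange(N)])   # [offset, slope] basis
    return np.linalg.lstsq(H, y, rcond=None)[0]

t = np.arange(50)                       # horizon of N = 50 samples
truth = 1.0 + 0.2 * t                   # true ramp: offset 1.0, slope 0.2
y = truth + rng.normal(0.0, 0.5, size=t.shape)
offset, slope = ufir_ramp(y)
print(offset, slope)
```

This is the sense in which exact optimality becomes redundant for N ≫ 1: the least-squares window estimate converges on the OFIR solution without ever needing the noise covariances a Kalman-type filter would require.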
INS/GPS Tightly-coupled Integration using Adaptive Unscented Particle Filter
NASA Astrophysics Data System (ADS)
Zhou, Junchuan; Knedlik, Stefan; Loffeld, Otmar
With the rapid developments in computer technology, the particle filter (PF) is becoming more attractive in navigation applications. However, its large computational burden still limits its widespread use. One approach for reducing the computational burden without degrading the system estimation accuracy is to combine the PF with other filters, e.g., the extended Kalman filter (EKF) or the unscented Kalman filter (UKF). In this paper, the a posteriori estimates from an adaptive unscented Kalman filter (AUKF) are used to specify the PF importance density function for generating particles. Unlike the sequential importance sampling re-sampling (SISR) PF, the re-sampling step is not required in the algorithm, because the filter does not reuse the particles. Hence, the filter computational complexity can be reduced. In addition, the latest measurements are used to improve the proposal distribution so that particles are generated more intelligently. Simulations are conducted on the basis of a field-collected 3D UAV trajectory. GPS and IMU data are simulated under the assumption that a NovAtel DL-4plus GPS receiver and a Landmark™ 20 MEMS-based IMU are used. Navigation under benign and highly reflective signal environments is considered. Monte Carlo experiments are performed. Numerical results show that the AUPF with 100 particles can provide improved system estimation accuracy with an affordable computational burden when compared with the AEKF and AUKF algorithms.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particles' states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (the system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements by the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
Optimal Filter Estimation for Lucas-Kanade Optical Flow
Sharmin, Nusrat; Brad, Remus
2012-01-01
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied first to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different filtering methods applied to the Iterative Refined Lucas-Kanade, we draw conclusions on the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
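A minimal sketch of the pipeline discussed above: Gaussian pre-smoothing of both frames followed by a single-window Lucas-Kanade step (no pyramid, no iterative refinement). The test images and the smoothing sigma are invented for illustration.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth(img, sigma):
    # separable Gaussian pre-filter: rows, then columns
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def lk_translation(i1, i2):
    # single-window Lucas-Kanade: least-squares solve of Ix*u + Iy*v = -It
    Iy, Ix = np.gradient(i1)
    It = i2 - i1
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # returns (u, v)

yy, xx = np.mgrid[0:64, 0:64]
i1 = np.sin(xx / 5.0) + np.cos(yy / 7.0)   # smooth synthetic frame
i2 = np.roll(i1, 1, axis=1)                # pure 1-pixel shift in x
i1s, i2s = smooth(i1, 1.5), smooth(i2, 1.5)
v = lk_translation(i1s[8:-8, 8:-8], i2s[8:-8, 8:-8])  # crop wrap seam
print(v)
```

The estimated flow should be close to (1, 0); the pre-smoothing keeps the spatial gradients well behaved, which is exactly the role the abstract assigns to the Gaussian pre-filter.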
Distributed Adaptive Particle Swarm Optimizer in Dynamic Environment
Cui, Xiaohui; Potok, Thomas E
2007-01-01
In the real world, we frequently have to search for and track an optimal solution in a dynamic and noisy environment. This demands that the algorithm not only find the optimal solution but also track the trajectory of the changing solution. Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique, which can find an optimal, or near-optimal, solution to a numerical or qualitative problem. In the PSO algorithm, the problem solution emerges from the interactions between many simple individual agents called particles, which makes PSO an inherently distributed algorithm. However, the traditional PSO algorithm lacks the ability to track the optimal solution in a dynamic and noisy environment. In this paper, we present a distributed adaptive PSO (DAPSO) algorithm that can be used for tracking a non-stationary optimal solution in a dynamically changing and noisy environment.
Assessing consumption of bioactive micro-particles by filter-feeding Asian carp
Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.
2012-01-01
Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood if SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate if SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption of filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 μm) and large (150-200 μm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.
Particle emission characteristics of filter-equipped vacuum cleaners.
Trakumas, S; Willeke, K; Grinshpun, S A; Reponen, T; Mainelis, G; Friedman, W
2001-01-01
Industrial vacuum cleaners with final high-efficiency particulate air (HEPA) filters traditionally have been used for cleanup operations in which all of the nozzle-entrained dust must be collected with high efficiency, for example, after lead-based paint abatement in homes. In this study household vacuum cleaners ranging from $70 to $650 and an industrial vacuum cleaner costing more than $1400 were evaluated relative to their collection efficiency immediately after installing new primary dust collectors in them. Using newly developed testing technology, some of the low-cost household vacuum cleaners equipped with a final HEPA filter were found to have initial overall filtration efficiencies comparable to those of industrial vacuum cleaners equipped with a final HEPA filter. The household vacuum cleaners equipped with a final HEPA filter efficiently collect about 100% of the dry dust entrained by the nozzle. For extensive cleaning efforts and for vacuum cleaning of wet surfaces, however, industrial vacuum cleaners may have an advantage, including ruggedness and greater loading capacity. The methods and findings of this study are applicable to field evaluations of vacuum cleaners. PMID:11549143
NASA Astrophysics Data System (ADS)
Chen, Sheng-Chieh; Wang, Jing; Fissan, Heinz; Pui, David Y. H.
2013-10-01
Nuclepore filter collection with subsequent electron microscopy analysis for nanoparticles was carried out to examine the feasibility of the method to assess nanoparticle exposure. The number distribution of nanoparticles collected on the filter surface was counted visually and converted to the distribution in the air using existing filtration models for Nuclepore filters. To search for a proper model, this paper studied the overall penetrations of three different nanoparticles (PSL, Ag and NaCl), covering a wide range of particle sizes (20-800 nm) and densities (1.05-10.5 g cm-3), through Nuclepore filters with two different pore diameters (1 and 3 μm) and different face velocities (2-15 cm s-1). The data were compared with existing particle deposition models and with modified models proposed by this study, which delivered different results because different deposition processes were considered. It was found that a parameter associated with the flow condition and filter geometry (density of the fluid medium, particle density, filtration face velocity, filter porosity and pore diameter) should be taken into account to verify the applicability of the models. The data of the overall penetration were in very good agreement with the properly applied models. A good agreement of filter surface collection between the validated model and the SEM analysis was obtained, indicating that a correct nanoparticle number distribution in the air can be converted from the Nuclepore filter surface collection and that this method can be applied for nanoparticle exposure assessment.
Backus, Sterling J.; Kapteyn, Henry C.
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
Optimized Loading for Particle-in-cell Gyrokinetic Simulations
J.L.V. Lewandowski
2004-05-13
The problem of particle loading in particle-in-cell gyrokinetic simulations is addressed using a quadratic optimization algorithm. Optimized loading in configuration space dramatically reduces the short wavelength modes in the electrostatic potential that are partly responsible for the non-conservation of total energy; further, the long wavelength modes are resolved with good accuracy. As a result, the conservation of energy for the optimized loading is much better than the conservation of energy for the random loading. The method is valid for any geometry and can be coupled to optimization algorithms in velocity space.
Chaotic particle swarm optimization with mutation for classification.
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of the proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm-classifier, genetic algorithm, and imperialist competitive algorithm-classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937
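The two ingredients named above are easy to sketch in isolation: a logistic-map chaotic sequence that can replace uniform random draws in the PSO velocity update, and a mutation operator that perturbs a candidate solution to escape local minima. All parameters here are illustrative, not the paper's.

```python
import random

random.seed(6)

def logistic_sequence(x0, n):
    # logistic map with r = 4: deterministic but chaotic values in [0, 1],
    # often used in place of uniform random numbers in chaotic PSO variants
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def mutate(solution, rate=0.2, scale=0.5):
    # Gaussian mutation: each coordinate is perturbed with probability `rate`
    return [v + random.gauss(0.0, scale) if random.random() < rate else v
            for v in solution]

seq = logistic_sequence(0.7, 1000)
print(min(seq), max(seq))
```

In a chaotic PSO, successive values of `seq` would stand in for the r1/r2 draws of the velocity update, while `mutate` would occasionally be applied to the global best to re-diversify the swarm.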
Effect of open channel filter on particle emissions of modern diesel engine.
Heikkilä, Juha; Rönkkö, Topi; Lähde, Tero; Lemmetty, Mikko; Arffman, Anssi; Virtanen, Annele; Keskinen, Jorma; Pirjola, Liisa; Rothe, Dieter
2009-10-01
Particle emissions of modern diesel engines are of particular interest because of their negative health effects. Of special interest are nanosized solid particles. The effect of an open channel filter on particle emissions of a modern heavy-duty diesel engine (MAN D2066 LF31, model year 2006) was studied. Here, the authors show that the open channel filter, made from metal screen, efficiently reduced the number of the smallest particles and, notably, the number and mass concentration of soot particles. The filter used in this study reached 78% particle mass reduction over the European Steady Cycle. Considering the size-segregated number concentration reduction, the collection efficiency was over 95% for particles smaller than 10 nm. Diffusion is the dominant collection mechanism at small particle sizes; thus the collection efficiency decreased as particle size increased, reaching 50% at 100 nm. The overall particle number reduction was 66-99%, and for accumulation-mode particles the number concentration reduction was 62-69%, both depending on the engine load. PMID:19842323
NASAL FILTERING OF FINE PARTICLES IN CHILDREN VS. ADULTS
Nasal efficiency for removing fine particles may be affected by developmental changes in nasal structure associated with age. In healthy Caucasian children (age 6-13, n=17) and adults (age 18-28, n=11) we measured the fractional deposition (DF) of fine particles (1 and 2 μm MMAD)...
Optimization of atomic Faraday filters in the presence of homogeneous line broadening
NASA Astrophysics Data System (ADS)
Zentile, Mark A.; Keaveney, James; Mathew, Renju S.; Whiting, Daniel J.; Adams, Charles S.; Hughes, Ifan G.
2015-09-01
We show that homogeneous line broadening drastically affects the performance of atomic Faraday filters. We study the effects of cell length and find that the behaviour of ‘line-centre’ filters is quite different from that of ‘wing-type’ filters, where the effect of self-broadening is found to be particularly important. We use a computer optimization algorithm to find the best magnetic field and temperature for Faraday filters with a range of cell lengths, and experimentally realize one particular example using a micro-fabricated 87Rb vapour cell. We find excellent agreement between our theoretical model and experimental data.
Leach, R.R.; Schultz, C.; Dowla, F.
1997-07-15
Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas which have not been well studied or may have from available recordings. Development and testing of detection and discrimination algorithms requires a robust representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task as filtering demands are highly varied as a function of distance, event magnitude, bearing, depth etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content is different from the frequency content of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals or scales. Each scale represents a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR are used to indicate the band pass limits for the optimized filter.The results offer a significant improvement in SNR particularly for low SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions as knowledge of the geophysical characteristics is not required. 
The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking, bearing estimation, regional characterization, and event detection. Results are shown for a set of low-SNR events as well as 92 regional and teleseismic events in the Middle East.
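The scale-selection step described above lends itself to a compact sketch. The code below only illustrates the idea, assuming the seismogram has already been decomposed into constant-Q scales; the window indices, the SNR threshold, and all names are assumptions, not taken from the paper:

```python
import math

def rms(x):
    """Root-mean-square amplitude of a window."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def select_passband(scales, noise_win, signal_win, snr_thresh=2.0):
    """Given a decomposition of a seismogram into constant-Q scales, keep the
    scales whose arrival-window RMS exceeds the pre-event noise RMS by at
    least snr_thresh, and return the (low, high) scale indices defining the
    optimized band-pass limits. Returns None if no scale passes."""
    kept = [i for i, band in enumerate(scales)
            if rms(band[signal_win[0]:signal_win[1]])
            / max(rms(band[noise_win[0]:noise_win[1]]), 1e-12) >= snr_thresh]
    return (min(kept), max(kept)) if kept else None
```

A scale containing only pre-event-level noise is rejected, while a scale whose arrival window carries excess power defines a band-pass limit.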
Capellari, Giovanni; Eftekhar Azam, Saeed; Mariani, Stefano
2015-01-01
Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As the main drawback of standard monitoring procedures is their computational cost, two remedies are jointly considered: first, an order reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees of freedom of the structural model to a few only (depending on the excitation), whereas the latter allows the evolution of damage to be tracked and located thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated. PMID:26703615
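The procedure in the paper couples a POD-reduced structural model with a particle filter whose particles receive an extended Kalman update before resampling. As a much simpler illustration of the recursive Bayesian tracking idea, here is a plain bootstrap (SIR) particle filter for a scalar, slowly drifting parameter; the random-walk dynamics, noise levels, and names are illustrative assumptions, not the authors' formulation:

```python
import math
import random

def particle_filter(observations, n_particles=500, meas_std=0.2,
                    proc_std=0.05, seed=1):
    """Bootstrap (SIR) particle filter for a scalar state with random-walk
    dynamics and Gaussian measurement noise. Returns the posterior-mean
    estimate after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # propagate each particle through the (here: random-walk) dynamics
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2)
                   for p in particles]
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        estimates.append(sum(p * w for p, w in zip(particles, weights)))
        # systematic resampling to counter weight degeneracy
        cumulative, acc = [], 0.0
        for w in weights:
            acc += w
            cumulative.append(acc)
        u0 = rng.random() / n_particles
        resampled, j = [], 0
        for i in range(n_particles):
            u = u0 + i / n_particles
            while j < n_particles - 1 and cumulative[j] < u:
                j += 1
            resampled.append(particles[j])
        particles = resampled
    return estimates
```

Fed a constant noisy observation, the posterior mean drifts from the diffuse prior toward the true value within a few updates.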
AIR FILTER PARTICLE-SIZE EFFICIENCY TESTING FOR DIAMETERS GREATER THAN 1 UM
The paper discusses tests of air filter particle-size efficiency for diameters greater than 1 micrometer. Evaluation of air cleaner efficiencies in this size range can be quite demanding, depending on the required accuracy. Such particles have sufficient mass to require considerati...
NASA Astrophysics Data System (ADS)
Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.
2013-07-01
A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.
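A minimal sketch of such a filter, assuming a one-dimensional map, a known point-source profile, a noise power spectrum, and a per-pixel coverage weight (all hypothetical inputs; the paper works with 2-D maps and fast FFTs, whereas this self-contained version uses a slow O(N²) DFT):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * m * k / n)
               for k in range(n)) for m in range(n)]
    return [v / n for v in out] if inverse else out

def optimal_point_source_filter(map_vals, psf, noise_psd, coverage):
    """Inverse-noise-weighted matched filter in the Fourier domain,
    followed by a per-pixel coverage correction in map space:
    F^-1[ conj(F[psf]) * F[map] / P_noise ] / coverage."""
    m_hat = dft(map_vals)
    p_hat = dft(psf)
    filtered = [p.conjugate() * m / max(n, 1e-12)
                for m, p, n in zip(m_hat, p_hat, noise_psd)]
    out = dft(filtered, inverse=True)
    return [v.real / max(c, 1e-12) for v, c in zip(out, coverage)]
```

With a delta-function PSF, white noise, and uniform coverage, the filter reduces to the identity, which makes for a quick sanity check.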
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1993-01-01
Minimizing a Euclidean distance in the complex plane optimizes a wide class of correlation metrics for filters implemented on realistic devices. The algorithm searches over no more than two real scalars (gain and phase). It unifies a variety of previous solutions for special cases (e.g., a maximum signal-to-noise ratio with colored noise and a real filter and a maximum correlation intensity with no noise and a coupled filter). It extends optimal partial information filter theory to arbitrary spatial light modulators (fully complex, coupled, discrete, finite contrast ratio, and so forth), additive input noise (white or colored), spatially nonuniform filter modulators, and additive correlation detection noise (including signal dependent noise).
Mode Converter Synthesis by the Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bogdashov, Alexandr A.; Rodin, Yury V.
2007-08-01
Particle Swarm Optimization (PSO) is an effective, simple and promising method intended for fast search in multi-dimensional space [Kennedy and Eberhart, "Particle Swarm Optimization", Proc. of the 1995 IEEE International Conference on Neural Networks, 1995]. Besides special test problems, a number of engineering electrodynamics tasks have been solved successfully by PSO [Robinson and Rahmat-Samii, "Particle Swarm Optimization in Electromagnetics", IEEE Trans. Antennas Propag., 2004; Jin and Rahmat-Samii, "Parallel Particle Swarm Optimization and Finite-Difference Time-Domain (PSO/FDTD) Algorithm for Multband and Wide-Band Patch Antenna Designs", IEEE Trans. Antennas Propag., 2005]. On the other hand, the scattering matrix technique is a fast and accurate method of mode converter analysis. We illustrate PSO with a number of converter designs developed for high-power microwave control: a matching horn for an output maser section, a corrugated converter of linearly polarized hybrid modes, and a TE01 mitre bend.
NASA Astrophysics Data System (ADS)
Serrano Trujillo, Alejandra; Díaz Ramírez, Víctor H.; Trujillo, Leonardo
2013-09-01
Correlation filters for object recognition represent an attractive alternative to feature based methods. These filters are usually synthesized as a combination of several training templates. These templates are commonly chosen in an ad hoc manner by the designer; therefore, there is no guarantee that the best set of templates is chosen. In this work, we propose a new approach for the design of composite correlation filters using a multi-objective evolutionary algorithm in conjunction with a variable length coding technique. Given a vast search space of feasible templates, the algorithm finds a subset that allows the construction of a filter with an optimized performance in terms of several performance metrics. The resultant filter is capable of recognizing geometrically distorted versions of a target in high cluttering and noisy conditions. Computer simulation results obtained with the proposed approach are presented and discussed in terms of several performance metrics. These results are also compared to those obtained with existing correlation filters.
Filella, Montserrat; Rellstab, Christian; Chanudet, Vincent; Spaak, Piet
2008-04-01
To quantify the effect of the filter feeder Daphnia on the aggregation of mineral particles, temporal changes in the particle size distribution of inorganic colloids were experimentally determined both in the presence and in the absence of Daphnia in water samples of Lake Brienz, Switzerland, an oligotrophic lake rich in suspended inorganic colloids. The results obtained show that daphnids favour the aggregation of mineral colloids, but only for particle sizes above the Daphnia filter mesh size. However, the number concentration of particles smaller than the Daphnia filter mesh size simultaneously increases in the presence of the filter feeder, suggesting either the break-down of existing aggregates or the aggregation of particles with initial sizes below the measured size range. The density of daphnids in this lake is currently too low to have any significant effect on the fate of inorganic colloidal particles as compared with aggregation due to physical processes of particle collision. However, in more productive water bodies where Daphnia is more abundant, they may play a significant role. PMID:18155744
Sun, W Y
1993-04-01
This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.
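In the simplest stationary setting, the optimal linear noise-reduction filter minimizing mean-square error reduces to per-frequency Wiener weights S/(S+N). The generic form is shown below for illustration only; the thesis derives data-dependent filters specific to each reconstruction method:

```python
def wiener_weights(signal_psd, noise_psd):
    """Per-frequency Wiener weights S/(S+N): the linear noise-reduction
    filter that minimizes the mean-square reconstruction error for a
    stationary signal in additive noise. Frequencies with no power at
    all are assigned zero weight."""
    return [s / (s + n) if (s + n) > 0 else 0.0
            for s, n in zip(signal_psd, noise_psd)]
```

Frequencies dominated by signal pass nearly unattenuated, while noise-dominated frequencies are suppressed.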
Particle separation and collection using an optical chromatographic filter
NASA Astrophysics Data System (ADS)
Hart, Sean J.; Terray, Alex V.; Arnold, Jonathan
2007-10-01
An optofluidic design has been used to completely separate and collect fractions of an injected mixture of colloidal particles. A three-dimensional glass microfluidic device was constructed such that the fluid was directed through a 50-µm-diameter channel. A laser was introduced opposite the flow and its spot size adjusted to completely fill the channel. Thus, for a given laser power and flow rate, certain particles are completely retained while others pass through unhindered. Separation efficiencies in excess of 99% have been attained for a mixture of polymer and silica beads.
Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.
2011-01-01
An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (~32 cm⁻¹) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm⁻¹. Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filter-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445
Optimization of magnetic switches for single particle and cell transport
NASA Astrophysics Data System (ADS)
Abedini-Nassab, Roozbeh; Murdoch, David M.; Kim, CheolGi; Yellen, Benjamin B.
2014-06-01
The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.
Parameter Selection and Performance Study in Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bhattacharya, Indrajit; Samanta, Shukla
2010-10-01
The present paper describes the Particle Swarm Optimization (PSO) technique and how different parameters in the algorithm may be selected in order to achieve faster convergence to the solution for a given optimization problem. PSO has become a common heuristic technique in the optimization community, with many researchers exploring the concepts, issues and applications of the algorithm. PSO has undergone many changes since its introduction in 1995. As researchers have learnt about the technique, they have derived new versions, new applications and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper provides a snapshot of particle swarm research, including variations in the algorithm, current and ongoing research, and applications. In this paper we first analyze the impact that the inertia weight and maximum velocity have on the performance of the particle swarm optimizer, and then provide guidelines for selecting these two parameters.
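The two parameters analyzed in the paper, the inertia weight and the maximum velocity, appear explicitly in the canonical PSO velocity update. The sketch below is a generic textbook implementation, not the authors' code; the test function, bounds, and coefficient values are assumptions:

```python
import random

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        vmax=0.5, bounds=(-5.0, 5.0), seed=0):
    """Canonical PSO minimizing f: inertia weight w damps the previous
    velocity, c1/c2 weight the pulls toward personal and global bests,
    and vmax clamps each velocity component."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))  # velocity clamping
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
                if fx < gval:
                    gval, gbest = fx, x[i][:]
    return gbest, gval
```

On a 2-D sphere function the swarm typically converges to near zero within a couple of hundred iterations; a large w or vmax slows convergence, while too small values cause premature stagnation, which is exactly the trade-off the paper studies.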
Filter performance of N99 and N95 facepiece respirators against viruses and ultrafine particles.
Eninger, Robert M; Honda, Takeshi; Adhikari, Atin; Heinonen-Tanski, Helvi; Reponen, Tiina; Grinshpun, Sergey A
2008-07-01
The performance of three filtering facepiece respirators (two models of N99 and one N95) challenged with an inert aerosol (NaCl) and three virus aerosols (enterobacteriophages MS2 and T4 and Bacillus subtilis phage), all with significant ultrafine components, was examined using a manikin-based protocol with respirators sealed on manikins. Three inhalation flow rates, 30, 85, and 150 L min⁻¹, were tested. The filter penetration and the quality factor were determined. Between-respirator and within-respirator comparisons of penetration values were performed. At the most penetrating particle size (MPPS), >3% of MS2 virions penetrated through filters of both N99 models at an inhalation flow rate of 85 L min⁻¹. Inhalation airflow had a significant effect upon particle penetration through the tested respirator filters. The filter quality factor was found suitable for making relative performance comparisons. The MPPS for challenge aerosols was <0.1 µm in electrical mobility diameter for all tested respirators. Mean particle penetration (by count) was significantly increased when the size fraction of <0.1 µm was included as compared to particles >0.1 µm. The filtration performance of the N95 respirator approached that of the two models of N99 over the range of particle sizes tested (approximately 0.02 to 0.5 µm). Filter penetration of the tested biological aerosols did not exceed that of inert NaCl aerosol. The results suggest that inert NaCl aerosols may generally be appropriate for modeling filter penetration of similarly sized virions. PMID:18477653
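The filter quality factor used for the relative comparisons above is conventionally defined as q = -ln(P)/Δp, where P is the fractional penetration and Δp the filter pressure drop. A one-line helper (the function name is ours):

```python
import math

def filter_quality_factor(penetration, pressure_drop_pa):
    """Quality factor q = -ln(P) / dp: higher means better filtration
    per unit breathing resistance, so it lets filters with different
    penetrations and pressure drops be compared on one scale."""
    return -math.log(penetration) / pressure_drop_pa
```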
Absorption Edge X-Ray Tomography for the Analysis of Particle Deposition in Packed Bed Filters
NASA Astrophysics Data System (ADS)
Waske, A.; Heiland, M.; Beckmann, F.; Odenbach, S.
2010-05-01
A filtration experiment was conducted to explore the interplay between local porosity of the filter bed and particle deposition. Silver-coated polystyrene colloidal particles of 10 micrometer diameter were filtered in a model porous medium consisting of glass spheres and imaged using synchrotron-based X-ray computed tomography. To enhance the image contrast of the colloid depositions, differential tomography was applied using monochromatic X-rays with energies just below and above the Kα absorption edge of silver. We will present the experimental setup and the image processing steps taken. As a result we show particle deposition and local filter bed porosity as a function of radius. Our results show strong correlation of radial porosity distribution and radial colloid deposition in porous media.
Khan, T.; Ramuhalli, Pradeep; Dass, Sarat
2011-06-30
Flaw profile characterization from NDE measurements is a typical inverse problem. A novel transformation of this inverse problem into a tracking problem, and subsequent application of a sequential Monte Carlo method called particle filtering, has been proposed by the authors in an earlier publication [1]. In this study, the problem of flaw characterization from multi-sensor data is considered. The NDE inverse problem is posed as a statistical inverse problem and particle filtering is modified to handle data from multiple measurement modes. The measurement modes are assumed to be independent of each other with principal component analysis (PCA) used to legitimize the assumption of independence. The proposed particle filter based data fusion algorithm is applied to experimental NDE data to investigate its feasibility.
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2009-04-01
Sequential Monte Carlo (SMC) approaches are increasingly being used in watershed hydrology to approximate the evolving posterior distribution of model parameters and states when new streamflow or other data are becoming available. The typical implementation of SMC requires the use of a set of particles to represent the posterior probability density function (pdf) of model parameters and states. These particles are propagated forward in time and/or space using the (nonlinear) model operator and updated when new observational data become available. The main difficulty in applying particle filters in practice is ensemble degeneracy, in which an increasing number of particles explores unproductive parts of the posterior pdf and is assigned a negligible weight. To ensure sufficient particle diversity at every stage during the simulation, I will present an efficient SMC scheme that combines particle filtering with importance resampling and DiffeRential Evolution Adaptive Metropolis (DREAM) sampling. Our method is based on the DREAM adaptive MCMC scheme presented in Vrugt et al. (2009), but implemented sequentially to facilitate posterior tracking of model parameters and states. Initial results using the Sacramento Soil Moisture Accounting (SAC-SMA) model have shown that our DREAM particle filter has the advantage of requiring far fewer particles than conventional SMC approaches. This significantly speeds up convergence to the evolving limiting distribution, and allows parameter and state inference in spatially distributed hydrologic models.
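The degeneracy problem described above is usually diagnosed with the effective sample size, ESS = 1/Σw_i², computed on normalized importance weights; resampling is triggered when ESS falls below some fraction of the particle count. A minimal helper (a standard diagnostic, not part of the DREAM scheme itself):

```python
def effective_sample_size(weights):
    """Effective sample size ESS = 1 / sum(w_i^2) of normalized importance
    weights: equals N for uniform weights and 1 for a fully degenerate
    ensemble where one particle carries all the weight."""
    total = sum(weights)
    normalized = [w / total for w in weights]
    return 1.0 / sum(w * w for w in normalized)
```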
NASA Astrophysics Data System (ADS)
Raitoharju, Matti; Nurminen, Henri; Piché, Robert
2015-12-01
Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
An improved particle swarm optimization algorithm for reliability problems.
Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven
2011-01-01
An improved particle swarm optimization (IPSO) algorithm is proposed to solve reliability problems in this paper. The IPSO designs two position updating strategies: In the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from being trapped in local optima, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than the other four particle swarm optimization algorithms on solving reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature. PMID:20850737
NASA Astrophysics Data System (ADS)
Shishkovsky, I.; Sherbakov, V.; Pitrov, A.
2007-06-01
The main goal of this work was to optimize the phase and porous fine structure of filter elements fabricated by layer-by-layer Selective Laser Sintering (SLS), and to explore the properties and synthesis requirements of the resulting functional devices. Common methodological approaches were developed by searching for optimal layer-by-layer synthesis conditions applicable to different powder compositions, together with concrete guidelines (sintering conditions, powder composition, etc.) for SLS of filter elements (including anisotropic ones) from a metal-polymer powder mixture of brass + polycarbonate (PC) in a 6:1 ratio. Based on numerical simulations, an original graphical-numerical procedure was designed and a computer program implemented for determining the flow performance of cylindrical filter elements, both homogeneous (isotropic) and heterogeneous (anisotropic). Calculation of the flow behavior of anisotropic filter elements allows their future applications to be predicted and managed.
On the application of optimal wavelet filter banks for ECG signal classification
NASA Astrophysics Data System (ADS)
Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.
2014-03-01
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated to the filter cut-off sharpness. The goal consists of achieving a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can be subsequently used as inputs for training a neural network classifier.
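For reference, the simplest perfect-reconstruction quadrature mirror pair is the Haar filter bank; the paper optimizes far more selective filters, but the two-channel analysis/synthesis structure is the same. A minimal sketch (even-length input assumed):

```python
import math

def haar_analysis(x):
    """One level of a two-channel orthogonal filter bank (Haar): lowpass
    approximation and highpass detail coefficients, each downsampled by 2."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse transform: perfect reconstruction of the original signal."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [s * (a + d), s * (a - d)]
    return x
```

Cascading the analysis step on the approximation branch yields the multi-level wavelet decomposition used to parametrize the ECG waveforms.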
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for assigned current position and speed. Since assessed methodologies can cause some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in case of nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
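For straight-line relative motion, the Distance at Closest Point of Approach mentioned above follows from elementary geometry: minimize |p + v t| over t ≥ 0 for relative position p and relative velocity v. A sketch (the function name is ours; in the paper these quantities are estimated from the particle filter state):

```python
import math

def distance_at_cpa(rel_pos, rel_vel):
    """Distance at Closest Point of Approach and time of CPA for constant
    relative velocity. If the intruder is already receding (t* < 0) or the
    relative velocity is zero, the current separation is the minimum."""
    pv = sum(p * v for p, v in zip(rel_pos, rel_vel))
    vv = sum(v * v for v in rel_vel)
    t = max(0.0, -pv / vv) if vv > 0 else 0.0
    d = math.sqrt(sum((p + v * t) ** 2 for p, v in zip(rel_pos, rel_vel)))
    return d, t
```

For an intruder at relative position (3, 4) moving at (-1, 0), the closest approach is 4 at t = 3, so a lateral miss distance survives even though the along-track separation closes to zero.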
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-01-01
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more than if only the particle filter were used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows time synchronization to be achieved. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291
Optimal implementation approach for discrete wavelet transform using FIR filter banks on FPGAs
NASA Astrophysics Data System (ADS)
Sargunaraj, Joe J.; Rao, Sathyanarayana S.
1998-10-01
We present a wavelet transform implementation approach using a FIR filter bank that uses a Wallace Tree structure for fast multiplication. VHDL models targeted specifically for synthesis have been written for clocked data registers, adders and the multiplier. Symmetric wavelets, like biorthogonal wavelets, can be implemented using this design. By changing the input filter coefficients, different wavelet decompositions may be implemented. The design is mapped onto the ORCA series FPGA after synthesis and optimization for timing and area.
Optimizing the Choice of Filter Sets for Space Based Imaging Instruments
NASA Astrophysics Data System (ADS)
Elliott, Rachel E.; Farrah, Duncan; Petty, Sara M.; Harris, Kathryn Amy
2015-01-01
We investigate the challenge of selecting a limited number of filters for space based imaging instruments such that they are able to address multiple heterogeneous science goals. The number of available filter slots for a mission is bounded by factors such as instrument size and cost. We explore methods used to extract the optimal group of filters such that they complement each other most effectively. We focus on three approaches: maximizing the separation of objects in two-dimensional color planes, SED fitting to select those filter sets that give the finest resolution in fitted physical parameters, and maximizing the orthogonality of physical parameter vectors in N-dimensional color-color space. These techniques are applied to a test-case, a UV/optical imager with space for five filters, with the goal of measuring the properties of local stars through to distant galaxies.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the currently found model by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible.
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
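The maximin leader-selection rule described in this abstract can be sketched as follows. This is a minimal illustration of the standard maximin fitness function for multi-objective populations, not the authors' code; the toy bi-objective population and the function name `maximin_fitness` are our own.

```python
import numpy as np

def maximin_fitness(objectives):
    """Maximin fitness for a population of multi-objective solutions.
    objectives: (n_particles, n_objectives) array of values to MINIMIZE.
    A value < 0 marks a non-dominated (Pareto-optimal) particle; the
    smallest value identifies the current swarm leader."""
    n = objectives.shape[0]
    fit = np.empty(n)
    for i in range(n):
        # per-objective difference of particle i to every rival particle
        diff = objectives[i] - np.delete(objectives, i, axis=0)
        # min over objectives, then max over rivals
        fit[i] = diff.min(axis=1).max()
    return fit

# toy bi-objective population: two Pareto-optimal points and one dominated
pop = np.array([[1.0, 3.0],   # non-dominated
                [3.0, 1.0],   # non-dominated
                [4.0, 4.0]])  # dominated by both others
f = maximin_fitness(pop)
leader = int(np.argmin(f))    # cheapest particle to follow this iteration
```

Because the rule needs only pairwise objective differences, it avoids the ranking-and-niching machinery mentioned above.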
Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah
2015-01-01
The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-views video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and it is trapped easily into local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using Brown and HumanEva-II dataset demonstrated that H-MCPSO performance is better than two leading alternative approaches, Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims. PMID:25978493
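The multi-swarm cooperation idea can be sketched in a few lines. This is a toy two-swarm PSO on a sphere function, not the paper's H-MCPSO (which partitions the 34-dimensional pose hierarchically); the cooperation here is simply that all sub-swarms share one global best each iteration, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_multi_swarm(f, dim=4, n_swarms=2, n_particles=10, iters=200,
                    w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Toy multi-swarm PSO: each sub-swarm runs the standard velocity and
    position update, and every iteration all swarms share a single global
    best, giving a crude form of cooperation."""
    x = rng.uniform(-bound, bound, (n_swarms, n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 2, x)
    gbest = pbest.reshape(-1, dim)[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -bound, bound)
        val = np.apply_along_axis(f, 2, x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest.reshape(-1, dim)[pbest_val.argmin()]
    return gbest, pbest_val.min()

best, best_val = pso_multi_swarm(lambda z: float(np.sum(z * z)))
```

In a tracking setting, `f` would be the silhouette/edge fitness of a pose hypothesis rather than a sphere function.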
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
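The core AAPSO idea, replacing random acceleration coefficients with fitness-driven ones, can be sketched as below. This is our reading of the scheme, not the paper's exact formula; the coefficient bounds `c_min`/`c_max` and the linear mapping are illustrative assumptions.

```python
import numpy as np

def adaptive_coefficients(fitness, c_min=0.5, c_max=2.5):
    """Fitness-driven acceleration coefficients (illustrative sketch):
    particles with poor fitness get a larger cognitive pull c1 (explore),
    particles with good fitness get a larger social pull c2 (exploit).
    fitness is to be minimized."""
    f = np.asarray(fitness, dtype=float)
    norm = (f - f.min()) / (np.ptp(f) + 1e-12)   # 0 = best, 1 = worst
    c1 = c_min + (c_max - c_min) * norm          # worse -> explore more
    c2 = c_min + (c_max - c_min) * (1.0 - norm)  # better -> exploit more
    return c1, c2

c1, c2 = adaptive_coefficients([3.0, 1.0, 2.0])
```

The returned per-particle `c1`, `c2` would replace the fixed (randomly weighted) coefficients in the PSO velocity update that tunes the SVM parameters.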
Design, optimization and fabrication of an optical mode filter for integrated optics.
Magnin, Vincent; Zegaoui, Malek; Harari, Joseph; François, Marc; Decoster, Didier
2009-04-27
We present the design, optimization, fabrication and characterization of an optical mode filter, which attenuates the snaking behavior of light caused by a lateral misalignment of the input optical fiber relative to an optical circuit. The mode filter is realized as a bottleneck section inserted in an optical waveguide in front of a branching element. It is designed with Bézier curves. Its effect, which depends on the optical state of polarization, is experimentally demonstrated by investigating the equilibrium of an optical splitter, which is greatly improved, though only in TM mode. The measured optical losses induced by the filter are 0.28 dB. PMID:19399117
Design and optimization of high reflectance graded index optical filter with quintic apodization
NASA Astrophysics Data System (ADS)
Praveen Kumar, Vemuri S. R. S.; Sunita, Parinam; Kumar, Mukesh; Rao, Parinam Krishna; Kumari, Neelam; Karar, Vinod; Sharma, Amit L.
2015-06-01
Rugate filters are a special kind of graded-index films that may provide advantages in both optical performance and mechanical properties of optical coatings. In this work, the design and optimization of a high-reflection rugate filter with a reflection peak at 540 nm is presented, further optimized for side-lobe suppression. A suitable number of apodization and matching layers, generated through a quintic function, were added to the basic sinusoidal refractive index profile to achieve a high reflectance of around 80% in the rejection window for normal incidence. The smaller index contrast between successive layers in the present design leads to less residual stress in the thin-film stack, which enhances the adhesion and mechanical strength of the filter. The optimized results show excellent side-lobe suppression around the stopband.
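A quintic-apodized sinusoidal index profile of the kind described above can be generated as follows. Only the 540 nm peak comes from the abstract; the average index, index contrast, stack thickness and apodization fraction are made-up illustrative values.

```python
import numpy as np

def quintic(u):
    """Quintic apodization function: smooth ramp from 0 at u=0 to 1 at u=1
    with zero first and second derivatives at both ends."""
    return 10 * u**3 - 15 * u**4 + 6 * u**5

def rugate_profile(x, total, n_avg=1.75, dn=0.3, wavelength=0.540, frac=0.2):
    """Sinusoidal rugate index profile with quintic apodization over the
    first and last `frac` of the stack. x, total, wavelength in micrometres.
    The spatial period wavelength / (2 * n_avg) places the reflection peak
    at `wavelength` (Bragg condition)."""
    env = np.ones_like(x)
    ramp = frac * total
    left, right = x < ramp, x > total - ramp
    env[left] = quintic(x[left] / ramp)
    env[right] = quintic((total - x[right]) / ramp)
    return n_avg + 0.5 * dn * env * np.sin(4 * np.pi * n_avg * x / wavelength)

x = np.linspace(0.0, 10.0, 2001)   # 10 um thick stack
n = rugate_profile(x, 10.0)
```

The apodization envelope is what suppresses the side lobes around the stopband; the matching layers of the paper would additionally taper the profile toward the ambient and substrate indices.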
Ares-I Bending Filter Design using a Constrained Optimization Approach
NASA Technical Reports Server (NTRS)
Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth
2008-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.
Removal of Particles and Acid Gases (SO2 or HCl) with a Ceramic Filter by Addition of Dry Sorbents
Hemmer, G.; Kasper, G.; Wang, J.; Schaub, G.
2002-09-20
The present investigation intends to add to the fundamental process design know-how for dry flue gas cleaning, especially with respect to process flexibility, in cases where variations in the type of fuel and thus in the concentration of contaminants in the flue gas require optimization of operating conditions. In particular, temperature effects of the physical and chemical processes occurring simultaneously in the gas-particle dispersion and in the filter cake/filter medium are investigated in order to improve the predictive capabilities for identifying optimum operating conditions. Sodium bicarbonate (NaHCO3) and calcium hydroxide (Ca(OH)2) are known as efficient sorbents for neutralizing acid flue gas components such as HCl, HF, and SO2. According to their physical properties (e.g. porosity, pore size) and chemical behavior (e.g. thermal decomposition, reactivity for gas-solid reactions), optimum conditions for their application vary widely. The results presented concentrate on the development of quantitative data for filtration stability and overall removal efficiency as affected by operating temperature. Experiments were performed in a small pilot unit with a ceramic filter disk of the type Dia-Schumalith 10-20 (Fig. 1, described in more detail in Hemmer 2002 and Hemmer et al. 1999), using model flue gases containing SO2 and HCl, flyash from wood bark combustion, and NaHCO3 as well as Ca(OH)2 as sorbent material (particle size d50/d84: 35/192 µm and 3.5/16 µm, respectively). The pilot unit consists of an entrained flow reactor (gas duct) representing the raw gas volume of a filter house and the filter disk with a filter cake, operating continuously, simulating filter cake build-up and cleaning of the filter medium by jet pulse. 
Temperatures varied from 200 to 600 °C, sorbent stoichiometric ratios from zero to 2, inlet concentrations were on the order of 500 to 700 mg/m³, and water vapor contents ranged from zero to 20 vol%. The experimental program with NaHCO3 is listed in Table 1. In addition, model calculations were carried out, based on our own and published experimental results, to estimate residence time and temperature effects on removal efficiencies.
Removal of virus to protozoan sized particles in point-of-use ceramic water filters.
Bielefeldt, Angela R; Kowalski, Kate; Schilling, Cherylynn; Schreier, Simon; Kohler, Amanda; Scott Summers, R
2010-03-01
The particle removal performance of point-of-use ceramic water filters (CWFs) was characterized in the size range of 0.02-100 µm using carboxylate-coated polystyrene fluorescent microspheres, natural particles and clay. Particles were spiked into dechlorinated tap water, and three successive water batches were treated in each of six different CWFs. Particle removal generally increased with increasing size. The removal of virus-sized 0.02 and 0.1 µm spheres was highly variable between the six filters, ranging from 63 to 99.6%. For the 0.5 µm spheres removal was less variable, in the range of 95.1-99.6%, while for the 1, 2, 4.5, and 10 µm spheres removal was >99.6%. Recoating four of the CWFs with colloidal silver solution improved removal of the 0.02 µm spheres, but had no significant effect on the other particle sizes. Log removals of 1.8-3.2 were found for natural turbidity and spiked kaolin clay particles; however, particles as large as 95 µm were detected in filtered water. PMID:19926110
Particle Clogging in Filter Media of Embankment Dams: A Numerical and Experimental Study
NASA Astrophysics Data System (ADS)
Antoun, T.; Kanarska, Y.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.; Smith, J.; Hall, R. L.; Woodson, S. C.
2013-12-01
The safety of dam structures requires the characterization of the granular filter's ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique which enforces the correct in-domain computational boundary conditions inside and on the boundary of the particles. The numerical code is validated against experiments conducted at the US Army Engineer Research and Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of the hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and increasing concentration of suspended particles. This work was performed under the auspices of the U.S. 
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security (DHS), Science and Technology Directorate, Homeland Security Advanced Research Projects Agency (HSARPA).
Epileptic Seizure Prediction by a System of Particle Filter Associated with a Neural Network
NASA Astrophysics Data System (ADS)
Liu, Derong; Pang, Zhongyu; Wang, Zhuo
2009-12-01
None of the current epileptic seizure prediction methods can be widely accepted, due to their poor consistency in performance. In this work, we have developed a novel approach to analyze intracranial EEG data. The energy of the 4-12 Hz frequency band is obtained by wavelet transform. A dynamic model is introduced to describe the process, and a hidden variable is included. The hidden variable can be considered an indicator of seizure activities. A particle filter associated with a neural network is used to calculate the hidden variable. Six patients' intracranial EEG data are used to test our algorithm, including 39 hours of ictal EEG with 22 seizures and 70 hours of normal EEG recordings. The minimum least square error algorithm is applied to determine optimal parameters in the model adaptively. The results show that our algorithm can successfully predict 15 out of 16 seizures, and the average prediction time is 38.5 minutes before seizure onset. The sensitivity is about 93.75% and the specificity (false prediction rate) is approximately 0.09 FP/h. A random predictor is used to calculate the sensitivity under a significance level of 5%. Compared to the random predictor, our method achieved much better performance.
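Estimating a hidden variable with a particle filter, as done above, can be sketched with a bootstrap (sequential importance resampling) filter. The scalar linear-Gaussian model below is a stand-in for the paper's EEG energy dynamics; all coefficients and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy state-space model: x_t = 0.9 x_{t-1} + process noise, y_t = x_t + obs noise
T, N = 100, 500          # time steps, number of particles
q, r = 0.5, 1.0          # process / observation noise std
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, q)
    y[t] = x_true[t] + rng.normal(0, r)

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal(0, 1, N)
est = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, q, N)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)  # Gaussian likelihood
    w /= w.sum()
    est[t] = np.sum(w * particles)                    # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=w)    # multinomial resampling

rmse_pf = np.sqrt(np.mean((est - x_true) ** 2))
rmse_obs = np.sqrt(np.mean((y - x_true) ** 2))
```

In the paper, the state transition would come from the learned dynamic model (with the neural network correcting it) and `y` would be the wavelet-band energy of the EEG.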
Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems
NASA Astrophysics Data System (ADS)
Qi, Di; Majda, Andrew J.
2015-04-01
It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high-dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low-dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space as well as effective resampling strategies is discussed to handle realizability and stability issues. To test the performance of the blended algorithms, the forty-dimensional Lorenz 96 system is utilized with a five-dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics and with changing observation time frequency and both dense and sparse spatial observations. In real applications, perfect dynamical models are always inaccessible, given the complexities in both modeling and computation of high-dimensional turbulent systems. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill in both capturing non-Gaussian statistics and achieving accurate filtering results in various dynamical regimes with and without model errors.
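The forty-dimensional Lorenz 96 test bed mentioned above is simple to reproduce. The sketch below integrates it with a fourth-order Runge-Kutta step; the step size and spin-up length are our illustrative choices, not the paper's settings.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Right-hand side of the Lorenz 96 system:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05):
    """One fourth-order Runge-Kutta step of the Lorenz 96 dynamics."""
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2)
    k4 = lorenz96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# spin up from a slightly perturbed equilibrium into the chaotic regime
x = np.full(40, 8.0)
x[0] += 0.01
for _ in range(500):
    x = rk4_step(x)
```

A blended filter would then run particles in a five-dimensional adaptive subspace of this state while treating the orthogonal 35 dimensions with conditional Gaussian statistics.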
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems, and can be a useful tool in earthquake engineering.
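The ARMAX family used above reduces, in its simplest all-pole case, to an AR model that can be fit by least squares. The sketch below simulates and recovers an AR(2) process; it illustrates the model class only, not the paper's estimation procedure, and the coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model, the all-pole special case of
    ARMAX:  y_t = a_1 y_{t-1} + ... + a_p y_{t-p} + e_t."""
    Y = y[p:]
    X = np.column_stack([y[p - k: len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

# simulate a stable AR(2) process (roots inside the unit circle)
n = 5000
a_true = np.array([1.2, -0.5])
y = np.zeros(n)
for t in range(2, n):
    y[t] = a_true[0] * y[t - 1] + a_true[1] * y[t - 2] + rng.normal()

a_hat = fit_ar(y, 2)   # recovered coefficients, close to a_true
```

For spectral modelling of ground accelerations, the fitted coefficients define a rational transfer function whose magnitude response approximates the record's power spectrum.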
Optimized design of N optical filters for color and polarization imaging.
Tu, Xingzhou; Pau, Stanley
2016-02-01
Designs of N optical filters for color and polarization imaging are found by minimizing detector noise, photon shot noise, and interpolation error for the image acquisition in a division of focal plane configuration. To minimize interpolation error, a general tiling procedure and an optimized tiling pattern for N filters are presented. For multispectral imaging, a general technique to find the transmission band is presented. For full Stokes polarization imaging, the general design with optimized retardances and fast angles of the polarizers is compared with the solution of the Thomson problem. These results are applied to the design of a three-color full Stokes imaging camera. PMID:26906867
Cardiac fiber tracking using adaptive particle filtering based on tensor rotation invariant in MRI
NASA Astrophysics Data System (ADS)
Kong, Fanhui; Liu, Wanyu; Magnin, Isabelle E.; Zhu, Yuemin
2016-03-01
Diffusion magnetic resonance imaging (dMRI) is a non-invasive method currently available for cardiac fiber tracking. However, accurate and efficient cardiac fiber tracking is still a challenge. This paper presents a probabilistic cardiac fiber tracking method based on particle filtering. In this framework, an adaptive sampling technique is presented to describe the posterior distribution of fiber orientations by adjusting the number and status of particles according to the fractional anisotropy of diffusion. An observation model is then proposed to update the weight of particles by rotating the diffusion tensor from the primary eigenvector to a given fiber orientation while keeping the shape of the tensor invariant. The results on human cardiac dMRI show that the proposed method is robust to noise and outperforms conventional streamline and particle filtering techniques.
Cardiac fiber tracking using adaptive particle filtering based on tensor rotation invariant in MRI.
Kong, Fanhui; Liu, Wanyu; Magnin, Isabelle E; Zhu, Yuemin
2016-03-01
Diffusion magnetic resonance imaging (dMRI) is a non-invasive method currently available for cardiac fiber tracking. However, accurate and efficient cardiac fiber tracking is still a challenge. This paper presents a probabilistic cardiac fiber tracking method based on particle filtering. In this framework, an adaptive sampling technique is presented to describe the posterior distribution of fiber orientations by adjusting the number and status of particles according to the fractional anisotropy of diffusion. An observation model is then proposed to update the weight of particles by rotating the diffusion tensor from the primary eigenvector to a given fiber orientation while keeping the shape of the tensor invariant. The results on human cardiac dMRI show that the proposed method is robust to noise and outperforms conventional streamline and particle filtering techniques. PMID:26864039
Boundary filters for vector particles passing parity breaking domains
Kolevatov, S. S.; Andrianov, A. A.
2014-07-23
The electrodynamics supplemented with a Lorentz- and CPT-invariance-violating Chern-Simons (CS) action (Carroll-Field-Jackiw electrodynamics) is studied when the parity-odd medium is bounded by a hyperplane separating it from the vacuum. The solutions in both half-spaces are carefully discussed and, for a space-like boundary, stitched together on the boundary with the help of Bogoliubov transformations. The presence of two different Fock vacua is shown. The passage of photons and massive vector mesons through a boundary between the CS medium and the vacuum of conventional Maxwell electrodynamics is investigated. Effects of reflection from the boundary (up to total reflection) are revealed when vector particles escape to the vacuum or enter from the vacuum through the boundary.
Accelerating Particle Filter Using Randomized Multiscale and Fast Multipole Type Methods.
Shabat, Gil; Shmueli, Yaniv; Bermanis, Amit; Averbuch, Amir
2015-07-01
The particle filter is a powerful tool for state tracking using non-linear observations. We present a multiscale-based method that accelerates the tracking computation by particle filters. Unlike the conventional way, which calculates weights over all particles in each cycle of the algorithm, we sample a small subset from the source particles using matrix decomposition methods. Then, we apply a function extension algorithm that uses the particle subset to recover the density function for all the remaining particles not included in the chosen subset. The computational effort is substantial, especially when multiple objects are tracked concurrently; the proposed algorithm significantly reduces this computational load. By using the Fast Gaussian Transform, the complexity of the particle selection step is reduced to linear time in n and k, where n is the number of particles and k is the number of particles in the selected subset. We demonstrate our method on both simulated and real data, such as object tracking in video sequences. PMID:26352448
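The "evaluate weights on a subset, extend to the rest" step can be sketched with Gaussian-kernel interpolation. This is an illustrative Nadaraya-Watson extension with a naive O(nk) kernel sum, a random subset instead of the paper's matrix-decomposition selection, and a made-up weight function; a Fast Gaussian Transform would make the same sums linear-time.

```python
import numpy as np

rng = np.random.default_rng(7)

def extend_weights(particles, subset_idx, subset_w, bandwidth=1.0):
    """Extend weights known on a particle subset to all particles by
    Gaussian-kernel (Nadaraya-Watson) interpolation, then renormalize."""
    d2 = (particles[:, None] - particles[subset_idx][None, :]) ** 2
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w = K @ subset_w / K.sum(axis=1)
    return w / w.sum()

# "expensive" true weight function: Gaussian likelihood centered at y = 1.0
particles = rng.normal(0.0, 2.0, 1000)
true_w = np.exp(-0.5 * (particles - 1.0) ** 2)
true_w /= true_w.sum()

idx = rng.choice(1000, size=100, replace=False)   # exact weights on 10% only
approx_w = extend_weights(particles, idx, true_w[idx], bandwidth=0.5)

err = np.abs(approx_w - true_w).sum()             # total-variation-style error
```

The saving comes from evaluating the expensive likelihood only at the `k` subset particles while the kernel extension fills in the remaining `n - k`.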
NASA Astrophysics Data System (ADS)
Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I. N.; Antoun, T.; Smith, J.; Hall, R.; Woodson, S.
2014-12-01
Understanding the flow of fines and particulate sorting in porous and fractured media during sediment transport is significant for industrial, environmental, geotechnical and petroleum technologies, to name a few. For example, the safety of dam structures requires the characterization of the granular filter's ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique. The numerical code is validated against experiments conducted at the USACE Engineer Research and Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of the hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and increasing concentration of suspended particles. 
We have extended these results to more realistic heterogeneous particulate populations for sediment transport. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security, Science and Technology Directorate, Homeland Security Advanced Research Projects Agency.
Optimization of high-channel-count fiber Bragg grating filters design with low dispersion
NASA Astrophysics Data System (ADS)
Jiang, Hao; Chen, Jing; Liu, Tundong
2015-02-01
An optimization-based technique for the synthesis of high-channel-count fiber Bragg grating (FBG) filters is proposed. The approach utilizes a tailored group delay to construct a mathematical optimization model. In the objective function, both the maximum index modulation and the dispersion of the FBG are optimized simultaneously. An effective evolutionary algorithm, the differential evolution (DE) algorithm, is applied to find the optimal group delay parameter. Design examples demonstrate that the proposed approach yields a remarkable reduction in maximum index modulation with low dispersion in each channel.
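The DE optimizer used above follows a standard mutate-crossover-select loop. The sketch below is a minimal DE/rand/1/bin on a toy quadratic objective, not an FBG group-delay model; population size, F, CR and iteration count are conventional illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy selection."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    vals = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # ensure at least one gene
            trial = np.where(cross, mutant, pop[i])
            tv = f(trial)
            if tv <= vals[i]:                    # greedy selection
                pop[i], vals[i] = trial, tv
    return pop[vals.argmin()], vals.min()

# toy objective standing in for the filter merit function; minimum at (1, 2)
best, best_val = differential_evolution(
    lambda z: float((z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2),
    bounds=[(-5, 5), (-5, 5)])
```

In the paper's setting, the decision vector would parameterize the tailored group delay and the objective would combine maximum index modulation and per-channel dispersion.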
Genetic algorithm and particle swarm optimization combined with Powell method
NASA Astrophysics Data System (ADS)
Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui
2013-10-01
In recent years, population algorithms have become increasingly robust and easy to use. Based on Darwin's theory of evolution, they search for the best solution within a population that progresses over several generations. This paper presents hybrid variants of the Genetic Algorithm and of the bio-inspired Particle Swarm Optimization algorithm, both combined with the local Powell method. The developed methods were tested on twelve test functions from the unconstrained optimization context.
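The global-then-local hybridization described above can be sketched as follows. The global stage here is a plain random population (a stand-in for the GA/PSO stage), and the local stage is a simplified direction-set method in the spirit of Powell's algorithm with a crude derivative-free line search; none of this is the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def line_search(f, x, d, step=1.0, shrink=0.5, iters=40):
    """Crude derivative-free line search: pick the best point x + t*d over a
    geometrically shrinking set of candidate steps (t = 0 included)."""
    best_t, best_v = 0.0, f(x)
    t = step
    for _ in range(iters):
        for s in (t, -t):
            v = f(x + s * d)
            if v < best_v:
                best_t, best_v = s, v
        t *= shrink
    return x + best_t * d, best_v

def powell_refine(f, x0, cycles=60):
    """Simplified Powell direction-set method: line-search along each
    direction in turn, then also along the cycle's net displacement, which
    replaces the oldest direction."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    dirs = list(np.eye(len(x)))
    for _ in range(cycles):
        x_old = x.copy()
        for d in dirs:
            x, fx = line_search(f, x, d)
        disp = x - x_old
        norm = np.linalg.norm(disp)
        if norm > 1e-12:
            dirs = dirs[1:] + [disp / norm]
            x, fx = line_search(f, x, dirs[-1])
    return x, fx

# global stage: best member of a random population (GA/PSO stand-in)
f = lambda z: (z[0] - 1.0) ** 2 + (z[0] + z[1] - 3.0) ** 2  # minimum at (1, 2)
pop = rng.uniform(-5, 5, (50, 2))
seed = pop[np.argmin([f(p) for p in pop])]

# local stage: Powell-style refinement of the best population member
best, best_val = powell_refine(f, seed)
```

The division of labor is the point of the hybrid: the population stage locates a promising basin, and the derivative-free local method polishes the solution inside it.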
Analytical model for particle migration within base soil-filter system
Indraratna, B.; Vafai, F.
1997-02-01
Cracking of impervious dam cores can occur due to differential settlement, construction deficiencies, or hydraulic fracturing. When leakage occurs through a cracked core, leakage channels may erode. The studies referred to in this paper have mostly found that for gradients typical of dams, erosion in cracks or other leakage channels usually occurs quickly and clogs the filter in the area of the crack, which is beneficial in practice. This study presents a mathematical (analytical) model simulating the filtration phenomenon in a base soil-filter system, incorporating the hydraulic conditions and the relevant material properties such as porosity, density, friction angle, and the shape and distribution of particles. The model is founded on the concept of a critical hydraulic gradient derived from limit equilibrium considerations, where the migration of particles is assumed to occur under applied hydraulic gradients exceeding this critical value. The rate of particle erosion, and hence the filter effectiveness, is quantified on the basis of mass and momentum conservation theories. By dividing the base soil and filter domains into discrete elements, the model is capable of predicting the time-dependent particle gradation and permeability of each element, and thereby the amount of material eroded from or retained within the system. Laboratory tests conducted on a fine base material verify the validity of the model. The model predictions are also compared with the available empirical recommendations, including the conventional grading ratios.
The effects of particle charge on the performance of a filtering facepiece.
Chen, C C; Huang, S H
1998-04-01
This study quantitatively determined the effect of electrostatic charge on the performance of an electret filtering facepiece. Monodisperse challenge corn oil aerosols with uniform charges were generated using a modified vibrating orifice monodisperse aerosol generator. The aerosol size distributions and concentrations upstream and downstream of an electret filter were measured using an aerodynamic particle sizer, an Aerosizer, and a scanning mobility particle sizer. The aerosol charge was measured using an aerosol electrometer. The tested electret filter had a packing density of about 0.08, a fiber size of 3 microns, and a thickness of 0.75 mm. As expected, the primary filtration mechanisms for the micrometer-sized particles are interception and impaction, especially at high face velocities, while electrostatic attraction and diffusion are the filtration mechanisms for submicrometer-sized aerosol particles. The fiber charge density was estimated to be 1.35 x 10(-5) coulomb per square meter. After treatment with isopropanol, most of the fiber charges were removed, causing the 0.3-micron aerosol penetration to increase from 36 to 68%. The air resistance of the filter increased slightly after immersion in the isopropanol, probably due to a coating of impurities from the isopropanol. The aerosol penetration decreased with increasing aerosol charge. The most penetrating aerosol size became larger as the aerosol charge increased, e.g., from 0.32 to 1.3 microns when the aerosol charge increased from 0 to 500 elementary charges. PMID:9586197
An assessment of particle filtering methods and nudging for climate state reconstructions
NASA Astrophysics Data System (ADS)
Dubinkina, S.; Goosse, H.
2013-05-01
Using the climate model of intermediate complexity LOVECLIM in an idealised framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are a nudging, a particle filter with sequential importance resampling, and a nudging proposal particle filter; the test case corresponds to the climate of the high latitudes of the Southern Hemisphere during the past 150 yr. The data-assimilation methods constrain the model by pseudo-observations of surface air temperature anomalies obtained from the same model, but with different initial conditions. All three data-assimilation methods provide good estimates of surface air temperature and of sea ice concentration, with the nudging proposal particle filter obtaining the highest correlations with the pseudo-observations. When reconstructing variables that are not directly linked to the pseudo-observations, such as atmospheric circulation and sea surface salinity, the particle filters have equivalent performance and their correlations are smaller than for surface air temperature reconstructions but still satisfactory for many applications. The nudging, on the contrary, obtains sea surface salinity patterns that are opposite to the pseudo-observations, which is due to a spurious impact of the nudging on vertical exchanges in the ocean.
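The nudging baseline and the twin-experiment setup above can be illustrated on a toy chaotic system. The logistic map below stands in for LOVECLIM, and the nudging coefficient and observation noise are invented values; the point is only that relaxing the forecast toward pseudo-observations keeps a run from a wrong initial condition close to the truth, while a free run decorrelates.

```python
import numpy as np

rng = np.random.default_rng(11)

def step(x):
    """Chaotic logistic map, a toy stand-in for the climate model."""
    return 3.9 * x * (1.0 - x)

# twin experiment: the 'truth' run generates the pseudo-observations
T = 500
truth = np.empty(T)
truth[0] = 0.4
for t in range(1, T):
    truth[t] = step(truth[t - 1])
obs = truth + rng.normal(0.0, 0.05, T)

# free run vs nudged run, both started from wrong initial conditions
free = np.empty(T)
nudged = np.empty(T)
free[0] = nudged[0] = 0.7
g = 0.8                                   # nudging (relaxation) strength
for t in range(1, T):
    free[t] = step(free[t - 1])
    forecast = step(nudged[t - 1])
    # relax the forecast toward the observation, keep state in [0, 1]
    nudged[t] = np.clip(forecast + g * (obs[t] - forecast), 0.0, 1.0)

rmse_free = np.sqrt(np.mean((free - truth) ** 2))
rmse_nudged = np.sqrt(np.mean((nudged - truth) ** 2))
```

A particle filter would replace the single nudged trajectory with an ensemble weighted and resampled by the observation likelihood, which is what allows it to recover variables not directly observed.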
An assessment of climate state reconstructions obtained using particle filtering methods
NASA Astrophysics Data System (ADS)
Dubinkina, Svetlana; Goosse, Hugues
2013-04-01
In an idealized framework, we assess reconstructions of the climate state of the Southern Hemisphere during the past 150 years using the climate model of intermediate complexity LOVECLIM and three data-assimilation methods: a nudging, a particle filter with sequential importance resampling, and an extremely efficient particle filter. The methods constrain the model by pseudo-observations of surface air temperature anomalies obtained from a twin experiment using the same model but different initial conditions. The network of pseudo-observations is chosen to be either dense (pseudo-observations given at every grid cell of the model) or sparse (pseudo-observations given at the same locations as the dataset of instrumental surface temperature records HADCRUT3). All three data-assimilation methods provide good estimates of surface air temperature and of sea ice concentration, with the extremely efficient particle filter having the best performance. When reconstructing variables that are not directly linked to the pseudo-observations of surface air temperature, such as atmospheric circulation and sea surface salinity, the performance of the particle filters is weaker but still satisfactory for many applications. Sea surface salinity reconstructed by the nudging, however, exhibits patterns opposite to the pseudo-observations, which is due to a spurious impact of the nudging on the ocean mixing.
A baker's dozen of new particle flows for nonlinear filters, Bayesian decisions and transport
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2015-05-01
We describe a baker's dozen of new particle flows to compute Bayes' rule for nonlinear filters, Bayesian decisions and learning as well as transport. Several of these new flows were inspired by transport theory, but others were inspired by physics or statistics or Markov chain Monte Carlo methods.
X-RAY FLUORESCENCE ANALYSIS OF FILTER-COLLECTED AEROSOL PARTICLES
X-ray fluorescence (XRF) has become an effective technique for determining the elemental content of aerosol samples. For quantitative analysis, the aerosol particles must be collected as uniform deposits on the surface of Teflon membrane filters. An energy dispersive XRF spectrom...
Arunkumar, R; Hogancamp, Kristina U; Parsons, Michael S; Rogers, Donna M; Norton, Olin P; Nagel, Brian A; Alderman, Steven L; Waggoner, Charles A
2007-08-01
This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 × 30 × 29 cm nuclear-grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m³/min. Relative humidity levels are controllable from 5%-90%, and the temperature of the aerosol stream is variable from ambient to 150 °C. Test aerosols are produced by spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m³/min, high mass concentrations (approximately 25 mg/m³) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water-insoluble aerosol stream in which the solid component is predominantly iron(III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter-performance-related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of downstream particle size distributions.
PMID:17764353
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, such as the log contrast models. We also compare the modified PSO's performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
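For readers unfamiliar with the base algorithm, a minimal global-best PSO can be sketched as follows. This is the generic algorithm, not ProjPSO's projection step; the inertia and acceleration coefficients are common textbook choices, not values from the paper:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO with inertia weight (generic sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.array([f(p) for p in x])            # personal best values
    gbest = pbest[pbest_f.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Sphere benchmark: the swarm should converge close to the origin.
best, fbest = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```

ProjPSO adds a projection of each candidate onto the mixture simplex after the position update; the sketch above shows only the unconstrained core.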
NASA Astrophysics Data System (ADS)
Lim, Wei Jer; Neoh, Siew Chin; Norizan, Mohd Natashah; Mohamad, Ili Salwani
2015-05-01
Optimization of complex circuit designs often requires a large amount of manpower and computational resources. In order to optimize circuit performance, it is critical for circuit designers not only to adjust component values but also to fulfill objectives such as gain, cut-off frequency, and ripple. This paper applies the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to optimize a ninth-order multiple-feedback Chebyshev low-pass filter. Multi-objective Pareto-based optimization is involved, whereby the research aims to obtain the best trade-off among minimizing the pass-band ripple, maximizing the output gain, and achieving the targeted cut-off frequency. The developed NSGA-II algorithm is executed with the NGSPICE circuit simulator to assess the filter performance. Overall, the results show satisfactory achievement of the required design specifications.
Integration of GPS Precise Point Positioning and MEMS-Based INS Using Unscented Particle Filter
Abd Rabbou, Mahmoud; El-Rabbany, Ahmed
2015-01-01
An integrated Global Positioning System (GPS) and Inertial Navigation System (INS) involves nonlinear motion state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages, when low-cost micro-electro-mechanical systems (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique to accommodate severe MEMS inertial sensor biases and noise behavior. However, the computational burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), is utilized, which combines the unscented Kalman filter (UKF) and PF for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely EKF, UKF and PF. Tightly coupled mechanization is adopted, which is developed in the raw GPS and INS measurement domain. Un-differenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with EKF. However, all filters produce comparable results when the GPS measurement updates are available. PMID:25815446
Vasudevan, V.; Kang, B.S-J.; Johnson, E.K.
2002-09-19
Ceramic barrier filtration is a leading technology employed in hot gas filtration. Hot gases loaded with ash particles flow through the ceramic candle filters and deposit ash on their outer surface. The deposited ash is periodically removed using a back-pulse cleaning jet, a process known as surface regeneration. The cleaning achieved by this technique still leaves some residual ash on the filter surface, which over a period of time sinters, forms a solid cake, and leads to mechanical failure of the candle filter. A room temperature testing facility (RTTF) was built to gain more insight into the surface regeneration process before testing commenced at high temperature. The RTTF was instrumented to obtain pressure histories during the surface regeneration process, and a high-resolution, high-speed imaging system was integrated in order to obtain pictures of the surface regeneration process. The objective of this research has been to utilize the RTTF to study the surface regeneration process at the convenience of room temperature conditions. The face velocity of the fluidized gas, the regeneration pressure of the back pulse, and the time to build up ash on the surface of the candle filter were identified as the important parameters to be studied. Two types of ceramic candle filters were used in the study. Each candle filter was subjected to several cycles of ash build-up followed by a thorough study of the surface regeneration process at different parametric conditions. The pressure histories in the chamber and filter system during build-up and regeneration were then analyzed. The size distribution and movement of the ash particles during the surface regeneration process were studied. The effect of each parameter on the performance of the regeneration process is presented. A comparative study between the two candle filters with different characteristics is presented.
NASA Astrophysics Data System (ADS)
Shank, B.; Yen, J. J.; Cabrera, B.; Kreikebaum, J. M.; Moffatt, R.; Redl, P.; Young, B. A.; Brink, P. L.; Cherry, M.; Tomada, A.
2014-11-01
We present a detailed thermal and electrical model of superconducting transition edge sensors (TESs) connected to quasiparticle (qp) traps, such as the W TESs connected to Al qp traps used for CDMS (Cryogenic Dark Matter Search) Ge and Si detectors. We show that this improved model, together with a straightforward time-domain optimal filter, can be used to analyze pulses well into the nonlinear saturation region and reconstruct absorbed energies with optimal energy resolution.
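The time-domain optimal filter mentioned above can be illustrated for the simplest case of white noise, where it reduces to a least-squares fit of a known pulse template. The double-exponential template and noise level below are illustrative assumptions, not CDMS detector values:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(512, dtype=float)
template = np.exp(-t / 50.0) - np.exp(-t / 10.0)     # assumed known pulse shape
true_amp = 3.0
trace = true_amp * template + rng.normal(0.0, 0.1, t.size)  # simulated noisy pulse

# For white noise, the time-domain optimal filter reduces to a least-squares
# fit of the template: amp_hat = <s, d> / <s, s>
amp_hat = float(template @ trace / (template @ template))
```

For correlated noise (the realistic TES case), the inner products above are replaced by noise-covariance-weighted ones, but the structure of the estimator is the same.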
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the nonlinear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the nonlinear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the nonlinear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
Sun, Jun; Fang, Wei; Wu, Xiaojun; Palade, Vasile; Xu, Wenbo
2012-01-01
Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods. PMID:21905841
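The core QPSO position update described above, in which each particle is drawn around a local attractor with a spread controlled by the CE coefficient, can be sketched as follows. This is a minimal illustration; the sphere test function and the linear schedule for the CE coefficient are assumptions for the example, not settings from the paper:

```python
import numpy as np

def qpso_step(x, pbest, gbest, alpha, rng):
    """One QPSO update: sample around local attractors; alpha is the CE coefficient."""
    n, d = x.shape
    phi = rng.random((n, d))
    p = phi * pbest + (1.0 - phi) * gbest        # per-particle local attractor
    mbest = pbest.mean(axis=0)                   # mean of personal best positions
    u = rng.random((n, d))
    sign = np.where(rng.random((n, d)) < 0.5, -1.0, 1.0)
    return p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)

f = lambda z: np.sum(z ** 2, axis=1)             # sphere benchmark
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, (40, 2))
pbest, pf = x.copy(), f(x)
for it in range(300):
    alpha = 1.0 - 0.5 * it / 300                 # CE coefficient decreased 1.0 -> 0.5
    x = qpso_step(x, pbest, pbest[pf.argmin()], alpha, rng)
    fx = f(x)
    better = fx < pf
    pbest[better], pf[better] = x[better], fx[better]
```

The paper's theoretical contribution concerns exactly the admissible range of alpha: too large a value and the log-sampled jumps no longer contract, so the particle positions diverge.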
Ultrafine particle emission from incinerators: the role of the fabric filter.
Buonanno, G; Scungio, M; Stabile, L; Tirler, W
2012-01-01
Incinerators are claimed to be responsible for particle and gaseous emissions; for this reason, Best Available Techniques (BAT) are used in the flue-gas treatment sections, leading to pollutant emissions below established threshold limit values. As regards particle emissions, only a mass-based threshold limit is required by the regulatory authorities. However, in recent years the attention of medical experts has moved from coarse and fine particles towards ultrafine particles (UFPs; diameter less than 0.1 μm), mainly emitted by combustion processes. According to toxicological and epidemiological studies, ultrafine particles could represent a risk for health and the environment. Therefore, it is necessary to quantify particle emissions from incinerators, also to perform an exposure assessment for the human populations living in the surrounding areas. A further topic to be stressed regarding UFP emission from incinerators is the particle filtration efficiency as a function of the different flue-gas treatment sections. In fact, it is important to know which particle filtration method is able to assure high abatement efficiency also in terms of UFPs. To this purpose, in the present work experimental results in terms of ultrafine particle emissions from several incineration plants are reported. Experimental campaigns were carried out in the period 2007-2010 by measuring UFP number distributions and total concentrations at the stack of five plants through condensation particle counters and mobility particle sizer spectrometers. Average total particle number concentrations ranging from 0.4 × 10³ to 6.0 × 10³ particles cm⁻³ were measured at the stack of the analyzed plants.
Further experimental campaigns were performed to characterize particle levels upstream of the fabric filters in two of the analyzed plants in order to assess their particle reduction effect; particle concentrations higher than 1 × 10⁷ particles cm⁻³ were measured, corresponding to a filtration efficiency greater than 99.99%. PMID:22393815
NASA Astrophysics Data System (ADS)
Erdogan, Eren; Onur Karslioglu, Mahmut; Durmaz, Murat; Aghakarimi, Armin
2014-05-01
In this study, a particle filter (PF), based mainly on the Monte Carlo simulation technique, has been applied to polynomial modeling of the local ionospheric conditions above selected ground-based stations. Lower sensitivity to errors caused by linearization of models and by unknown or unmodeled components in the system model is one of the advantages of the particle filter compared with the Kalman filter, which is commonly used as a recursive filtering method in VTEC modeling. Moreover, the probability distribution of the system models is not required to be Gaussian. In this work, a third-order polynomial function has been incorporated into the particle filter implementation to represent the local VTEC distribution. The coefficients of the polynomial model representing the ionospheric parameters and the receiver inter-frequency biases are the unknowns forming the state vector, which has been estimated epoch-wise for each ground station. To take into account the time-varying characteristics of the regional VTEC distribution, the dynamics of the continuously changing state vector parameters have been modeled using a first-order Gauss-Markov process. In the particle filtering, the multivariate probability distribution of the state vector through time has been approximated by means of randomly selected samples and their associated weights. A known drawback of particle filtering is that an increasing number of state vector parameters results in inefficient filter performance and requires more samples to represent the probability distribution of the state vector. Considering the total number of unknown parameters for all ground stations, estimating these parameters within a single state vector caused the particle filter to produce inefficient results. To solve this problem, the PF implementation has been carried out separately for each ground station at each time epoch.
After estimation of the unknown parameters, an ionospheric VTEC map covering the predefined region has been produced by interpolation. VTEC values at each grid node of the map have been computed from the four closest ground stations by means of an inverse distance squared weighted average. The GPS data acquired from ground-based stations were made available by the International GNSS Service (IGS) and the Reference Frame Sub-commission for Europe (EUREF). Raw GPS observations have been preprocessed to detect cycle slips and to form geometry-free linear combinations of observables for each continuous arc. The obtained pseudoranges have then been smoothed with the carrier-to-code leveling method. Finally, the performance of the particle filter in investigating the local characteristics of the ionospheric Vertical Total Electron Content (VTEC) has been evaluated, and the result has been compared with that of a standard Kalman filter. Keywords: ionosphere, GPS, particle filter, VTEC modeling
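The grid interpolation step, a VTEC value at each node computed from the four closest stations by inverse distance squared weighting, can be sketched as follows. The station coordinates and VTEC values are made-up illustrative numbers, not data from the study:

```python
import numpy as np

def idw_vtec(grid_point, station_xy, station_vtec, k=4):
    """Inverse distance squared weighted average over the k closest stations."""
    d2 = np.sum((station_xy - grid_point) ** 2, axis=1)   # squared distances
    nearest = np.argsort(d2)[:k]
    w = 1.0 / np.maximum(d2[nearest], 1e-12)              # guard against zero distance
    return float(np.sum(w * station_vtec[nearest]) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [5.0, 5.0]])
vtec = np.array([10.0, 12.0, 14.0, 16.0, 40.0])      # illustrative TECU values
node_value = idw_vtec(np.array([0.5, 0.5]), stations, vtec)
```

Because the node here is equidistant from its four nearest stations, the weights are equal and the result is their plain mean; the distant fifth station is excluded by the k-nearest selection.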
NASA Astrophysics Data System (ADS)
Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang
2015-11-01
In this paper, a model of topology optimization with linear buckling constraints is established based on an independent and continuous mapping method to minimize the weight of plate/shell structures. A composite exponential function (CEF) is selected as the filtering function for the element weight, the element stiffness matrix, and the element geometric stiffness matrix; it identifies the design variables and implements the change of the design variables from "discrete" to "continuous" and back to "discrete". The buckling constraints are approximated as explicit formulations based on the Taylor expansion and the filtering function. The optimization model is transformed into a dual program and solved by the dual sequence quadratic programming algorithm. Finally, three numerical examples with a power function and the CEF as filter functions are analyzed and discussed to demonstrate the feasibility and efficiency of the proposed method.
Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P
2013-01-01
In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear-phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, their opposite solutions are also considered, and the fitter of each pair is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which yields the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters into the basic HS algorithm balances exploration and exploitation of the search space. Low-pass, high-pass, band-pass, and band-stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
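The "error fitness" minimized by such algorithms is typically the deviation of a candidate tap vector's frequency response from an ideal brick-wall response. A minimal version of such a fitness function follows; the grid size, cutoff, and the windowed-sinc comparison design are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def lowpass_error(h, cutoff=0.3, n_grid=128):
    """Sum-squared error between |H(e^jw)| and an ideal low-pass response.

    cutoff is the passband edge as a fraction of pi (normalized frequency).
    """
    w = np.linspace(0.0, np.pi, n_grid)
    # frequency response of the candidate FIR filter h on the grid
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)
    desired = (w <= cutoff * np.pi).astype(float)
    return float(np.sum((H - desired) ** 2))

# A 21-tap windowed-sinc design should score far better than random taps.
n = 21
k = np.arange(n) - n // 2
good = 0.3 * np.sinc(0.3 * k) * np.hamming(n)    # windowed-sinc low-pass taps
rng = np.random.default_rng(0)
bad = rng.normal(size=n)
```

An optimizer like OHS treats the tap vector h as the search-space position and this error as the fitness to minimize.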
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Atkins, E.; Chorin, A. J.
2011-12-01
The task in data assimilation is to identify the state of a system from an uncertain model supplemented by a stream of incomplete and noisy data. The model is typically given in the form of a discretization of an Ito stochastic differential equation (SDE), x(n+1) = R(x(n)) + G W(n), where x is an m-dimensional vector and n = 0, 1, 2, .... The m-dimensional vector function R and the m x m matrix G depend on the SDE as well as on the discretization scheme, and W is an m-dimensional vector whose elements are independent standard normal variates. The data are y(n) = h(x(n)) + Q V(n), where h is a k-dimensional vector function, Q is a k x k matrix, and V is a vector whose components are independent standard normal variates. One can use statistics of the conditional probability density (pdf) of the state given the observations, p(n+1) = p(x(n+1)|y(1), ..., y(n+1)), to identify the state x(n+1). Particle filters approximate p(n+1) by sequential Monte Carlo and rely on the recursive formulation of the target pdf, p(n+1) ∝ p(x(n+1)|x(n)) p(y(n+1)|x(n+1)). The pdf p(x(n+1)|x(n)) can be read off the model equations: it is a Gaussian with mean R(x(n)) and covariance matrix Σ = GG^T, where T denotes the transpose; the pdf p(y(n+1)|x(n+1)) is a Gaussian with mean h(x(n+1)) and covariance QQ^T. In a sampling-importance-resampling (SIR) filter, one samples new values for the particles from a prior pdf and then weighs these samples with weights determined by the observations, to yield an approximation to p(n+1). Such weighting schemes often yield small weights for many of the particles. Implicit particle filtering overcomes this problem by using the observations to generate the particles, thus focusing attention on regions of large probability. A suitable algebraic equation that depends on the model and the observations is constructed for each particle, and its solution yields high-probability samples of p(n+1).
In the current formulation of the implicit particle filter, the state covariance matrix Σ is assumed to be non-singular. In the present work we consider the case where the covariance Σ is singular. This happens in particular when the noise is spatially smooth and can be represented by a small number of Fourier coefficients, as is often the case in geophysical applications. We derive an implicit filter for this problem and show that it is very efficient, because the filter operates in a space whose dimension is the rank of Σ, rather than the full model dimension. We compare the implicit filter to SIR, to the ensemble Kalman filter, and to variational methods, and also study how information from data is propagated from observed to unobserved variables. We illustrate the theory on two coupled nonlinear PDEs in one space dimension that have been used as a test bed for geomagnetic data assimilation. We observe that the implicit filter gives good results with few (2-10) particles, while SIR requires thousands of particles for similar accuracy. We also find lower limits to the accuracy of the filter's reconstruction as a function of data availability.
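The small-weights problem that motivates implicit particle filtering is easy to reproduce: with a SIR-style prior proposal in even modestly high dimension, one particle takes nearly all the weight. A small demonstration using the effective sample size diagnostic (the dimensions and observation noise level are illustrative choices, not from the paper):

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS = 1 / sum(w_i^2) for normalized weights; values near 1 signal degeneracy."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return float(1.0 / np.sum(w ** 2))

rng = np.random.default_rng(0)
n = 1000

def ess_for_dim(dim, sigma=0.5):
    particles = rng.normal(size=(n, dim))        # samples from the prior proposal
    # Gaussian log-likelihood of an observation y = 0 with noise std sigma
    log_w = -0.5 * np.sum((particles / sigma) ** 2, axis=1)
    return effective_sample_size(log_w)

ess_low, ess_high = ess_for_dim(2), ess_for_dim(50)
```

In 2 dimensions a healthy fraction of the 1000 particles carries weight, while in 50 dimensions the ESS collapses toward 1, which is exactly the degeneracy the implicit filter's observation-informed proposal is designed to avoid.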
Image quality and dose optimization using novel x-ray source filters tailored to patient size
NASA Astrophysics Data System (ADS)
Toth, Thomas L.; Cesmeli, Erdogan; Ikhlef, Aziz; Horiuchi, Tetsuya
2005-04-01
The expanding set of CT clinical applications demands increased attention to obtaining the maximum image quality at the lowest possible dose. Pre-patient beam shaping filters provide an effective means to improve dose utilization. In this paper we develop and apply characterization methods that lead to a set of filters appropriately matched to the patient. We developed computer models to estimate image noise and a patient-size-adjusted CTDI dose. The noise model is based on polychromatic X-ray calculations. The dose model is empirically derived by fitting CTDI-style dose measurements for a demographically representative set of phantom sizes and shapes with various beam shaping filters. The models were validated and used to determine the optimum image quality vs. dose for a range of patient sizes. The models clearly show that an optimum beam shaping filter exists as a function of object diameter. Based on noise and dose alone, overall dose efficiency advantages of 50% were obtained by matching the filter shape to the size of the object. A set of patient-matching filters is used in the GE LightSpeed VCT and Pro32 to provide a practical solution for optimum image quality at the lowest possible dose over the range of patient sizes and clinical applications. Moreover, these filters mark the beginning of personalized medicine, where CT scanner image quality and radiation dose utilization are truly individualized and optimized for the patient being scanned.
NASA Astrophysics Data System (ADS)
Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui
2016-04-01
A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas the coefficient of performance (COP) is lower, than that of an ESE heat pump with a single-resonance filter. The performance characteristics of the ESE heat pumps under the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of the ESE heat pumps with a conventional filtering mechanism are vastly different from those of a device with total momentum filtering, which is induced by extra electron momentum in addition to the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.
Gaussian mixture sigma-point particle filter for optical indoor navigation system
NASA Astrophysics Data System (ADS)
Zhang, Weizhi; Gu, Wenjun; Chen, Chunyi; Chowdhury, M. I. S.; Kavehrad, Mohsen
2013-12-01
With the fast growth and popularization of smart computing devices, there is a rising demand for accurate and reliable indoor positioning. Recently, systems using visible light communications (VLC) technology have been considered as candidates for indoor positioning applications. A number of researchers have reported that VLC-based positioning systems can achieve position estimation accuracy on the order of centimeters. This paper proposes an indoor navigation environment based on VLC technology. Light-emitting diodes (LEDs), which are essentially semiconductor devices, can be easily modulated and used as transmitters within the proposed system. Positioning is realized by collecting received-signal-strength (RSS) information on the receiver side, after which least-squares estimation is performed to obtain the receiver position. To enable tracking of the user's trajectory and to reduce the effect of outliers in raw measurements, different filters are employed. In this paper, we show by computer simulations that the Gaussian mixture sigma-point particle filter (GM-SPPF) outperforms other filters, such as the basic Kalman filter and the sequential importance-resampling particle filter (SIR-PF), at a reasonable computational cost.
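The least-squares position step can be illustrated once RSS readings have been converted to ranges via a propagation model: linearizing against the first anchor gives a closed-form fix. The LED coordinates and the noise-free ranges below are made-up illustration values, not the paper's simulation setup:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares position fix from anchor coordinates and ranges."""
    a0, r0 = anchors[0], ranges[0]
    # Subtracting the first range equation from the others cancels ||x||^2,
    # leaving the linear system 2 (a_i - a_0) . x = r_0^2 - r_i^2 + ||a_i||^2 - ||a_0||^2
    A = 2.0 * (anchors[1:] - a0)
    b = r0 ** 2 - ranges[1:] ** 2 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

leds = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])  # LED anchors (m)
true_pos = np.array([1.0, 2.0])
ranges = np.linalg.norm(leds - true_pos, axis=1)     # noise-free ranges for the demo
est = trilaterate(leds, ranges)
```

With noisy RSS-derived ranges this raw fix jitters, which is why the paper feeds it to a tracking filter such as the GM-SPPF.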
Alderman, Steven L; Parsons, Michael S; Hogancamp, Kristina U; Waggoner, Charles A
2008-11-01
High-efficiency particulate air (HEPA) filters are widely used to control particulate matter emissions from processes that involve management or treatment of radioactive materials. Section FC of the American Society of Mechanical Engineers AG-1 Code on Nuclear Air and Gas Treatment currently restricts media velocity to a maximum of 2.5 cm/sec in any application where this standard is invoked. There is some desire to eliminate or increase this media velocity limit. A concern is that increasing media velocity will result in higher emissions of ultrafine particles; thus, it is unlikely that higher media velocities will be allowed without data to demonstrate the effect of media velocity on removal of ultrafine particles. In this study, the performance of nuclear grade HEPA filters, with respect to filter efficiency and most penetrating particle size, was evaluated as a function of media velocity. Deep-pleat nuclear grade HEPA filters (31 cm × 31 cm × 29 cm) were evaluated at media velocities ranging from 2.0 to 4.5 cm/sec using a potassium chloride aerosol challenge having a particle size distribution centered near the HEPA filter most penetrating particle size. Filters were challenged under two distinct mass loading rate regimes through the use or exclusion of a 3 μm aerodynamic diameter cut point cyclone. Filter efficiency and most penetrating particle size measurements were made throughout the duration of filter testing. Filter efficiency measured at the onset of aerosol challenge was noted to decrease with increasing media velocity, with values ranging from 99.999 to 99.977%. The filter most penetrating particle size recorded at the onset of testing was noted to decrease slightly as media velocity was increased and was typically in the range of 110-130 nm. Although additional testing is needed, these findings indicate that filters operating at media velocities up to 4.5 cm/sec will meet or exceed current filter efficiency requirements. 
Additionally, increased emission of ultrafine particles is seemingly negligible. PMID:18726819
Optimal Control for a Parallel Hybrid Hydraulic Excavator Using Particle Swarm Optimization
Wang, Dong-yun; Guan, Chen
2013-01-01
Optimal control using particle swarm optimization (PSO) is put forward for a parallel hybrid hydraulic excavator (PHHE). A power-train mathematical model of the PHHE is presented along with an analysis of the components' parameters. The optimal control problem is then formulated, and the PSO algorithm is introduced to handle this nonlinear optimization problem, which contains many inequality/equality constraints. Comparisons between the optimal control and a rule-based strategy show that hybrids under optimal control achieve better fuel economy. Although the PSO algorithm runs off-line, it still provides a performance benchmark for the PHHE and offers deeper insight into hybrid excavators. PMID:23818832
Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter
Zhou, Ning; Meng, Da; Lu, Shuai
2013-11-11
In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF's performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
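The abstract does not specify the extended PF in detail; as a point of reference, the basic bootstrap PF it is compared against can be sketched as follows. The scalar state and measurement models, noise levels, and particle count are all illustrative assumptions, not the paper's synchronous-machine model:

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, q=0.1, r=0.5, seed=0):
    """Basic bootstrap particle filter on an illustrative scalar model.

    State:       x_k = 0.9*x_{k-1} + 0.5*sin(x_{k-1}) + w,  w ~ N(0, q^2)
    Measurement: y_k = x_k^2 / 5 + v,                       v ~ N(0, r^2)
    Returns the posterior-mean state estimate at each step.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
    estimates = []
    for yk in y:
        # Propagate every particle through the nonlinear process model
        x = 0.9 * x + 0.5 * np.sin(x) + rng.normal(0.0, q, n_particles)
        # Importance weights: Gaussian measurement likelihood, normalized
        # in the log domain to avoid underflow
        logw = -0.5 * ((yk - x**2 / 5) / r) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.dot(w, x))               # posterior-mean estimate
        # Multinomial resampling to counter weight degeneracy
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

Because propagation and weighting only evaluate the models pointwise, no linearization or Gaussian assumption on the state is needed, which is the PF advantage the abstract highlights over the EKF.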
Applying a fully nonlinear particle filter on a coupled ocean-atmosphere climate model
NASA Astrophysics Data System (ADS)
Browne, Philip; van Leeuwen, Peter Jan; Wilson, Simon
2014-05-01
It is a widely held assumption that particle filters are not applicable in high-dimensional systems due to filter degeneracy, commonly called the curse of dimensionality. This is only true of naive particle filters; indeed, it has been shown that much more advanced methods perform particularly well on systems of dimension up to 2^16 ≈ 6.5 × 10^4. In this talk we will present results from using the equivalent-weights particle filter in twin experiments with the global climate model HadCM3. These experiments have a number of notable features. Firstly, the model in use is substantially larger than has previously been attempted: its state dimension is approximately 4 × 10^6, with approximately 4 × 10^4 observations per analysis step, two orders of magnitude more than has been achieved with a particle filter in the geosciences. Secondly, the use of a fully nonlinear data assimilation technique to initialise a climate model gives us the possibility of finding non-Gaussian estimates for the current state of the climate. In doing so we may find that the same model demonstrates multiple likely scenarios for forecasts on a multi-annual/decadal timescale. The experiments assimilate artificial sea surface temperatures daily over several years. We will discuss how an ensemble-based method for assimilation in a coupled system avoids issues faced by variational methods. Practical details of how the experiments were carried out, specifically the use of the EMPIRE data assimilation framework, will be discussed. The results from applying the nonlinear data assimilation method can always be improved through a better representation of the model error covariance matrix. We will discuss the representation we have used for this matrix and, in particular, how it was generated from the coupled system.
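The filter degeneracy mentioned above is easy to reproduce numerically. The sketch below is a hypothetical Gaussian toy problem (not HadCM3 or the equivalent-weights filter): prior particles are weighted against an observation of every state component, and the largest normalized importance weight approaches 1 as the dimension grows:

```python
import numpy as np

def max_weight(dim, n_particles=1000, seed=0):
    """Largest normalized importance weight for a Gaussian toy problem.

    Prior particles ~ N(0, I_dim); every component is observed with unit
    observation-error variance. As dim grows, the largest weight tends
    toward 1 and the ensemble collapses onto a single particle.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_particles, dim))      # prior particle draws
    y = rng.normal(size=dim)                     # synthetic observations
    logw = -0.5 * np.sum((y - x) ** 2, axis=1)   # Gaussian log-likelihoods
    w = np.exp(logw - logw.max())                # stabilized in log domain
    return (w / w.sum()).max()
```

With this setup, max_weight stays small in low dimension but is close to 1 for dimensions in the hundreds, mirroring the degeneracy argument for naive filters.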
Continuous collection of soluble atmospheric particles with a wetted hydrophilic filter.
Takeuchi, Masaki; Ullah, S M Rahmat; Dasgupta, Purnendu K; Collins, Donald R; Williams, Allen
2005-12-15
Approximately one-third of the area (14-mm diameter of a 25-mm diameter) of a 5-microm uniform pore size polycarbonate filter is continuously wetted by a 0.25 mL/min water mist. The water forms a continuous thin film on the filter and percolates through it. The flowing water substantially reduces the effective pore size of the filter. At the operational air sampling flow rate of 1.5 standard liters per minute, such a particle collector (PC) efficiently captures particles down to very small size. As determined by fluorescein-tagged NaCl aerosol generated by a vibrating orifice aerosol generator, the capture efficiency was 97.7+% for particle aerodynamic diameters ranging from 0.28 to 3.88 microm. Further, 55.3 and 80.3% of 25- and 100-nm (NH4)2SO4 particles generated by size classification with a differential mobility analyzer were respectively collected by the device. The PC is integrally coupled with a liquid collection reservoir. The liquid effluent from the wetted filter collector, bearing the soluble components of the aerosol, can be continuously collected or periodically withdrawn. The latter strategy permits the use of a robust syringe pump for the purpose. Coupled with a PM2.5 cyclone inlet and a membrane-based parallel plate denuder at the front end and an ion chromatograph at the back end, the PC readily operated for at least 4-week periods without filter replacement or any other maintenance. PMID:16351153
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
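The forward/backward averaging described above is commonly realized as a Rauch-Tung-Striebel (RTS) smoother. The following scalar sketch (illustrative model parameters, not the EUVE attitude configuration) shows the smoothed error variance falling below the filtered one, consistent with the roughly halved covariance noted in the abstract:

```python
import numpy as np

def kalman_rts(y, a=1.0, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter plus Rauch-Tung-Striebel smoother.

    Model: x_k = a*x_{k-1} + w (var q),  y_k = x_k + v (var r).
    Returns filtered and smoothed means and error variances.
    """
    n = len(y)
    xf = np.empty(n); pf = np.empty(n)           # filtered mean/variance
    xp = np.empty(n); pp = np.empty(n)           # one-step predictions
    x, p = x0, p0
    for k in range(n):
        x, p = a * x, a * a * p + q              # predict
        xp[k], pp[k] = x, p
        kgain = p / (p + r)                      # measurement update
        x = x + kgain * (y[k] - x)
        p = (1 - kgain) * p
        xf[k], pf[k] = x, p
    xs = xf.copy(); ps = pf.copy()               # backward RTS pass
    for k in range(n - 2, -1, -1):
        c = pf[k] * a / pp[k + 1]                # smoother gain
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
        ps[k] = pf[k] + c * c * (ps[k + 1] - pp[k + 1])
    return xf, pf, xs, ps
```

Because the smoother gain multiplies a non-positive variance difference, ps never exceeds pf, with the largest reduction in the interior of the data span where future measurements contribute most.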
Design Optimization of Vena Cava Filters: An application to dual filtration devices
Singer, M A; Wang, S L; Diachin, D P
2009-12-03
Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.
Kornelakis, Aris
2010-12-15
Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO, applied to the optimal design of photovoltaic grid-connected systems (PVGCSs), is presented. The proposed methodology suggests the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime are both maximized. The objective function describing the economic benefit is the system's lifetime total net profit, calculated according to the Net Present Value (NPV) method. The second objective function, corresponding to the environmental benefit, equals the pollutant gas emissions avoided through use of the PVGCS. The optimization's decision variables are the number of PV modules, their optimal tilt angle, their optimal placement within the available installation area and their optimal distribution among the DC/AC converters. (author)
NASA Astrophysics Data System (ADS)
Bostater, Charles R., Jr.
2006-09-01
This paper describes a wavelet-based approach to derivative spectroscopy. The approach is used to select, through optimization, optimal channels or bands for derivative-based remote sensing algorithms. It is applied to airborne and modeled (synthetic) reflectance signatures of environmental media and of features or objects within such media, such as benthic submerged vegetation canopies. The technique can also be applied to selected pixels identified within a hyperspectral image cube obtained from an airborne, ground-based, or subsurface mobile imaging system. This wavelet-based image processing technique is an extremely fast numerical method for conducting higher-order derivative spectroscopy, including nonlinear filter windows. Essentially, the wavelet filter scans a measured or synthetic signature in an automated sequential manner in order to develop a library of filtered spectra. The library is used in real time to select the optimal channels for direct algorithm application. The unique wavelet-based derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SP (R)) approach based upon remote sensing science and radiative transfer processes, unlike other signal processing techniques applied to hyperspectral signatures.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-01-01
Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
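The abstract does not give the offline-derived formulas. The sketch below illustrates the underlying idea on a single step: the covariance prediction P' = F P Fᵀ + Q is computed block-by-block when F is block-diagonal, so multiplications against off-diagonal blocks known in advance to be zero are skipped entirely. The block structure here is an assumption for illustration, not the paper's SINS/GPS system matrices:

```python
import numpy as np

def predict_cov_blockdiag(P, blocks, Q):
    """Covariance prediction P' = F P F^T + Q for block-diagonal F.

    `blocks` is a list of (slice, F_i) pairs covering the state vector.
    Working block-by-block skips every multiplication against the
    off-diagonal zero blocks of F, which are known offline.
    """
    out = np.empty_like(P)
    for si, Fi in blocks:
        for sj, Fj in blocks:
            out[si, sj] = Fi @ P[si, sj] @ Fj.T   # only nonzero work
    return out + Q
```

The result is numerically identical to the dense product, so, as the abstract notes for its own method, no accuracy is traded for the saved operations.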
Smith, D.H.; Powell, V.; Ibrahim, E.; Ferer, M.; Ahmadi, G.
1996-12-31
The use of cylindrical candle filters to remove fine (~0.005 mm) particles from hot (~500-900 °C) gas streams currently is being developed for applications in advanced pressurized fluidized bed combustion (PFBC) and integrated gasification combined cycle (IGCC) technologies. Successfully deployed with hot-gas filtration, PFBC and IGCC technologies will allow the conversion of coal to electrical energy by direct passage of the filtered gases into non-ruggedized turbines and thus provide substantially greater conversion efficiencies with reduced environmental impacts. In the usual approach, one or more clusters of candle filters are suspended from a tubesheet in a pressurized (P ≲ 1 MPa) vessel into which hot gases and suspended particles enter; the gases pass through the walls of the cylindrical filters, and the filtered particles form a cake on the outside of each filter. The cake is then removed periodically by a backpulse of compressed air from inside the filter, which passes through the filter wall and filter cake. In various development or demonstration systems the thickness of the filter cake has proved to be an important, but unknown, process parameter. This paper describes a physical model for cake and pressure buildups between cleaning backpulses, and for longer-term buildup of the "baseline" pressure drop, as caused by incomplete filter cleaning and/or re-entrainment. When combined with operating data and laboratory measurements of the cake porosity, the model may be used to calculate the (average) filter permeability, the filter-cake thickness and permeability, and the fraction of filter cake left on the filter by the cleaning backpulse or re-entrained after the backpulse. When used for a variety of operating conditions (e.g., different coals, sorbents, temperatures, etc.), the model eventually may provide useful information on how the filter-cake properties depend on the various operating parameters.
An optimal linear filter for the reduction of noise superimposed to the EEG signal.
Bartoli, F; Cerutti, S
1983-10-01
In the present paper a procedure for the reduction of super-imposed noise on EEG tracings is described, which makes use of linear digital filtering and identification methods. In particular, an optimal filter (a Kalman filter) has been developed which is intended to capture the disturbances of the electromyographic noise on the basis of an a priori modelling which considers a series of impulses with a temporal occurrence according to a Poisson distribution as a noise generating mechanism. The experimental results refer to the EEG tracings recorded from 20 patients in normal resting conditions: the procedure consists of a preprocessing phase (which uses also a low-pass FIR digital filter), followed by the implementation of the identification and the Kalman filter. The performance of the filters is satisfactory also from the clinical standpoint, obtaining a marked reduction of noise without distorting the useful information contained in the signal. Furthermore, when using the introduced method, the EEG signal generating mechanism is accordingly parametrized as AR/ARMA models, thus obtaining an extremely sensitive feature extraction with interesting and not yet completely studied pathophysiological meanings. The above procedure may find a general application in the field of noise reduction and the better enhancement of information contained in the wide set of biological signals. PMID:6632838
NASA Astrophysics Data System (ADS)
Baroncini, F.; Castelli, F.
2009-09-01
Data assimilation techniques based on ensemble filtering are widely regarded as the best approach to forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas needed at each model time step to update the error covariance matrix; a sub-optimal approach is therefore the more suitable choice. Various sub-optimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different sub-optimal techniques. These aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables and high parametric uncertainty. This study focuses on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both water mass balance and surface energy balance, so it is able to assimilate a wide variety of datasets, from traditional hydrometric on-ground measurements to land surface temperature retrievals from satellite. The experiments presented concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked with a low-spatial-resolution version of the hydrologic model (3.2 km). A new Kalman filter based algorithm is presented that tries to address the main challenges of hydrological modeling uncertainty.
In the forecast step, the proposed filter uses a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach, propagating both deterministic and stochastic ensembles to improve robustness and convergence properties. A reduced-order forecast covariance matrix is then computed through a P.O.D. reduction from control theory. In the analysis step the filter uses a Local Ensemble (LE) Kalman Filter approach, with the assimilation scheme modified and adapted to the P.O.D.-reduced subspace propagated in the forecast step. In this way, observations are assimilated only along the maximum-covariance directions of the model error. The efficiency of this technique is then assessed in terms of hydrometric forecast accuracy in a preliminary convergence test of a synthetic rainfall event toward a real rainfall event.
Müller, D; Pagel, R; Burkert, A; Wagner, V; Paa, W
2014-03-20
Filtered Rayleigh scattering (FRS) is applied to determine two-dimensional temperature distributions in a hexamethyldisiloxane loaded propane/air flame intended for combustion chemical vapor deposition processes. An iodine cell as a molecular filter suppresses background scattering, e.g., by particles, while the wings of the spectrally broadened Rayleigh scattering can pass this filter. A frequency-doubled Nd:YAG laser is tuned to a strong absorption line of iodine. The gas temperature is deduced from the transmitted Rayleigh scattering signal. Since FRS also depends on molecule-specific scattering cross sections, the local gas composition of majority species is measured using the Raman scattering technique. Limits and restrictions are discussed. PMID:24663450
Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini
2012-01-01
We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively. PMID:25300451
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
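A minimal global-best PSO of the kind applied here can be sketched as follows. The inertia and acceleration coefficients, swarm size, and test objective are illustrative assumptions, not the thesis's PSO variant or the engine performance model:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization).

    f:      objective, called as f(x) with x of shape (dim,)
    bounds: list of (low, high) pairs, one per dimension
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus cognitive and social attraction terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                  # respect box bounds
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

As in the thesis's validation against brute force, a sketch like this is easy to check on an objective whose minimum is known, e.g. the sphere function, before trusting it on an expensive performance model.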
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
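The thread-decomposition idea, one finite convolution per needed output rather than filtering at the full rate and discarding samples, can be illustrated for a single-stage decimating FIR. This is a software sketch of the concept only, not the paper's multi-stage FPGA implementation:

```python
import numpy as np

def decimating_fir(x, h, m):
    """Decimate-by-m FIR computed one output "thread" at a time.

    Each output y[k] is a single finite convolution anchored at input
    sample m*k, so only the needed outputs are ever computed, in
    contrast to filtering the full signal and keeping one of every m.
    """
    x = np.asarray(x, float)
    n_out = len(x) // m
    y = np.empty(n_out)
    for k in range(n_out):                # one thread per decimated output
        end = m * k
        acc = 0.0
        for j, hj in enumerate(h):        # finite convolution for y[k]
            if end - j >= 0:
                acc += hj * x[end - j]
        y[k] = acc
    return y
```

Each thread is independent of the others, which is what makes the decomposition attractive for concurrent hardware: the threads can be scheduled across multipliers without the bookkeeping of a polyphase restructuring.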
Jaeschke, B C; Lind, O C; Bradshaw, C; Salbu, B
2015-01-01
Radioactive particles are aggregates of radioactive atoms that may contain significant activity concentrations. They have been released into the environment from nuclear weapons tests, and from accidents and effluents associated with the nuclear fuel cycle. Aquatic filter-feeders can capture and potentially retain radioactive particles, which could then provide concentrated doses to nearby tissues. This study experimentally investigated the retention and effects of radioactive particles in the blue mussel, Mytilus edulis. Spent fuel particles originating from the Dounreay nuclear establishment, and collected in the field, comprised a U and Al alloy containing fission products such as (137)Cs and (90)Sr/(90)Y. Particles were introduced into mussels in suspension with plankton-food or through implantation in the extrapallial cavity. Of the particles introduced with food, 37% were retained for 70 h, and were found on the siphon or gills, with the notable exception of one particle that was ingested and found in the stomach. Particles not retained seemed to have been actively rejected and expelled by the mussels. The largest and most radioactive particle (estimated dose rate 3.18 ± 0.06 Gy h(-1)) induced a significant increase in Comet tail-DNA %. In one case this particle caused a large white mark (suggesting necrosis) in the mantle tissue with a simultaneous increase in micronucleus frequency observed in the haemolymph collected from the muscle, implying that non-targeted effects of radiation were induced by radiation from the retained particle. White marks found in the tissue were attributed to ionising radiation and physical irritation. The results indicate that current methods used for risk assessment, based upon the absorbed dose equivalent limit and estimating the "no-effect dose" are inadequate for radioactive particle exposures. 
Knowledge is lacking about the ecological implications of radioactive particles released into the environment, for example potential recycling within a population, or trophic transfer in the food chain. PMID:25240099
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
NASA Astrophysics Data System (ADS)
Aggarwal, Priyanka; Syed, Zainab; El-Sheimy, Naser
2009-05-01
Navigation includes the integration of methodologies and systems for estimating time-varying position, velocity and attitude of moving objects. Navigation incorporating the integrated inertial navigation system (INS) and global positioning system (GPS) generally requires extensive evaluations of nonlinear equations involving double integration. Currently, integrated navigation systems are commonly implemented using the extended Kalman filter (EKF). The EKF assumes a linearized process, measurement models and Gaussian noise distributions. These assumptions are unrealistic for highly nonlinear systems like land vehicle navigation and may cause filter divergence. A particle filter (PF) is developed to enhance integrated INS/GPS system performance as it can easily deal with nonlinearity and non-Gaussian noises. In this paper, a hybrid extended particle filter (HEPF) is developed as an alternative to the well-known EKF to achieve better navigation data accuracy for low-cost microelectromechanical system sensors. The results show that the HEPF performs better than the EKF during GPS outages, especially when simulated outages are located in periods with high vehicle dynamics.
Wang, Bo; Xiao, Xuan; Xia, Yuanqing; Fu, Mengyin
2013-01-01
A ship's hull is not an absolutely rigid body. Many factors can cause deformations that lead to large errors in mounted devices, especially navigation systems. Such errors should be estimated and compensated effectively, or they will severely reduce the navigation accuracy of the ship. In order to estimate the deformation, an unscented particle filter method for estimating shipboard deformation based on an inertial measurement unit is presented. In this method, a nonlinear shipboard deformation model is built. Simulations demonstrate the accuracy reduction due to deformation. An attitude-plus-angular-rate matching mode is then proposed as a framework for estimating the shipboard deformation using inertial measurement units. Within this framework, given the nonlinearity of the system model, an unscented particle filter method is proposed to estimate and compensate the deformation angles. Simulations show that the proposed method gives accurate and rapid deformation estimates, which can increase navigation accuracy after compensation of the deformation. PMID:24248280
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and its models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach, the RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.
Highest probability data association and particle filtering for target tracking in clutter
NASA Astrophysics Data System (ADS)
Song, Taek Lyul; Kim, Da Sol
2005-12-01
A new data association method called highest probability data association (HPDA) is proposed, combined with particle filtering, and applied to passive sonar tracking in clutter. The HPDA method evaluates the probabilities of one-to-one measurement-to-track assignments. All bearing measurements at the current sampling instant are ranked in order of signal strength, and the measurement with the highest probability is selected as target-originated and used for the probabilistic weight update of the particle filter. The proposed HPDA algorithm can be easily extended to multi-target tracking problems, and it can be used to avoid the track coalescence phenomenon that prevails when several tracks move very close together.
Particle Filters for Real-Time Fault Detection in Planetary Rovers
NASA Technical Reports Server (NTRS)
Dearden, Richard; Clancy, Dan; Koga, Dennis (Technical Monitor)
2001-01-01
Planetary rovers provide a considerable challenge for robotic systems in that they must operate for long periods autonomously, or with relatively little intervention. To achieve this, they need on-board fault detection and diagnosis capabilities in order to determine the actual state of the vehicle and decide what actions are safe to perform. Traditional model-based diagnosis techniques are not suitable for rovers due to the tight coupling between the vehicle's performance and its environment. Hybrid diagnosis using particle filters is presented as an alternative, and its strengths and weaknesses are examined. We also present some extensions to particle filters that are designed to make them more suitable for use in diagnosis problems.
NASA Astrophysics Data System (ADS)
Huang, Haibin; Zhuang, Yufei
2015-08-01
This paper proposes a method that plans energy-optimal trajectories for multi-satellite formation reconfiguration in a deep space environment. A novel co-evolutionary particle swarm optimization algorithm is proposed to solve the nonlinear programming problem, so that the computational complexity of calculating gradient information can be avoided. One swarm represents one satellite, and through communication with other swarms during the evolution, collisions between satellites can be avoided. In addition, a dynamic depth first search algorithm is proposed to solve the redundant search problem of the co-evolutionary particle swarm optimization method, with which the computation time can be shortened considerably. In order to make the actual trajectories optimal and collision-free under disturbance, a re-planning strategy is derived for the formation reconfiguration maneuver.
Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique
NASA Astrophysics Data System (ADS)
Oonsivilai, Anant; Marungsri, Boonruang
2008-10-01
An application of an intelligent search technique to find the optimal parameters of a power system stabilizer (PSS) considering a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is engaged to demonstrate the usefulness of intelligent search techniques in tuning the PID-PSS parameters. The damping of system oscillations is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to the conventional Ziegler-Nichols-tuned PID controller, and the results reveal the superior effectiveness of the proposed APSO-based PID controller.
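The APSO variant is not detailed in the abstract. As a hedged illustration, the minimal global-best PSO that such tuning studies build on can be sketched as follows; all parameter values here are generic defaults, not the paper's, and the objective would in practice be a damping-based cost rather than the toy function used below.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (a generic sketch, not APSO).

    bounds: (d, 2) array of [low, high] limits per design variable.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive pull (own best) + social pull (swarm best).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()
```

Adaptive variants such as APSO and PSO-TVAC (in the congestion-management abstract below) vary the inertia weight w or the acceleration coefficients c1, c2 over the iterations rather than holding them fixed.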
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, a novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm toward a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected as the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of global search capability and convergence rate. PMID:26064085
Advanced particle filter. Technical progress report No. 19, January 1995--March 1995
1995-08-01
Tidd advanced particle filtration (APF) test runs 25 through 34 were completed during the first quarter of 1995. All Tidd testing concluded with APF test run 34 on 3/30/95. Westinghouse activities supporting the APF operation during this quarter included processing of test data and participation in one APF borescope inspection. Data on the filter operation are included.
Segmentation of nerve bundles and ganglia in spine MRI using particle filters.
Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina
2011-01-01
Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741
On optimal filtering of GPS dual frequency observations without using orbit information
NASA Technical Reports Server (NTRS)
Eueler, Hans-Juergen; Goad, Clyde C.
1991-01-01
The concept of optimal filtering of observations collected with a dual frequency GPS P-code receiver is investigated in comparison to an approach for C/A-code units. The filter presented here uses only data gathered between one receiver and one satellite. The estimated state vector consists of a one-way pseudorange, ionospheric influence, and ambiguity biases. Neither orbit information nor station information is required. The independently estimated biases are used to form double differences where, in case of a P-code receiver, the wide lane integer ambiguities are usually recovered successfully except when elevation angles are very small. An elevation dependent uncertainty for pseudorange measurements was discovered for different receiver types. An exponential model for the pseudorange uncertainty was used with success in the filter gain computations.
Design of FIR Filters with Discrete Coefficients using Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Tsutsumi, Shuntaro; Suyama, Kenji
In this paper, we propose a new design method for linear phase FIR (Finite Impulse Response) filters with discrete coefficients. In a hardware implementation, filter coefficients must be represented as discrete values. The design of digital filters with discrete coefficients is formulated as an integer programming problem, and an enormous amount of computational time is required to solve the problem with an exact solver. Recently, ACO (Ant Colony Optimization), a heuristic approach, has been widely used for solving combinatorial problems such as the traveling salesman problem. In our method, we formulate the design problem as a 0-1 integer programming problem and solve it using ACO. Several design examples are shown to demonstrate the effectiveness of the proposed method.
Optimal Design of CSD Coefficient FIR Filters Subject to Number of Nonzero Digits
NASA Astrophysics Data System (ADS)
Ozaki, Yuichi; Suyama, Kenji
In a hardware implementation of FIR (Finite Impulse Response) digital filters, it is desirable to reduce the total number of nonzero digits used to represent the filter coefficients. The design of FIR filters with CSD (Canonic Signed Digit) representation, which is efficient for reducing the number of multiplier units, is often formulated as a 0-1 combinatorial problem. In such problems, difficult constraints prevent a straightforward linearization. Although many kinds of heuristic approaches have been applied, the solutions they obtain cannot be guaranteed optimal. In this paper, we formulate the design problem as a 0-1 mixed integer linear programming problem and solve it using the branch and bound technique, a powerful method for solving integer programming problems. Several design examples demonstrate the efficient performance of the proposed method.
NASA Astrophysics Data System (ADS)
Xu, Chuanlong; Tang, Guanghua; Zhou, Bin; Yang, Daoye; Zhang, Jianyong; Wang, Shimin
2007-06-01
The electrostatic induction based spatial filtering method for particle velocity measurement has the advantages of a simple measurement system and convenient data processing. In this paper, the relationship between the solid particle velocity and the power spectrum of the output signal of the electrostatic sensor was derived theoretically, and the effects of the electrode length, the dielectric pipe thickness and the pipe length on the spatial filtering characteristics of the electrostatic sensor were investigated numerically using the finite element method. Additionally, because the power spectrum curve of the output signal is rough, making its peak frequency fmax difficult to determine, a wavelet analysis based filtering method was adopted to smooth the curve so that fmax can be determined accurately. Finally, the velocity measurement method was applied in a dense phase pneumatic conveying system under high pressure, and the experimental results show that the system repeatability is within ±4% over the gas superficial velocity range of 8.63-18.62 m/s for the particle concentration range 0.067-0.130 m3/m3.
NASA Astrophysics Data System (ADS)
Xu, Chuanlong; Tang, Guanghua; Zhou, Bin; Wang, Shimin
2009-04-01
The spatial filtering method for particle velocity measurement has the advantages of simplicity of the measurement system and convenience of data processing. In this paper, the relationship between the solid particles' mean velocity in a pneumatic pipeline and the power spectrum of the output signal of an electrostatic sensor was mathematically modeled. The effects of the length of the sensor, the thickness of the dielectric pipe and its length on the spatial filtering characteristics of the sensor were also investigated using the finite element method. Because the power spectrum of the sensor output is rough, making its peak frequency fmax difficult to determine, a wavelet analysis based filtering method was applied to smooth the curve, which can accurately determine the peak frequency fmax. Finally, experiments were performed on a pilot dense phase pneumatic conveying rig at high pressure to test the performance of the velocity measurement system. The experimental results show that the system repeatability is within ±4% over a gas superficial velocity range of 8.63-18.62 m s-1 for a particle concentration range of 0.067-0.130 m3 m-3.
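Both electrostatic-sensor abstracts rest on the idealized spatial-filtering relation that the particle velocity is the power-spectrum peak frequency times the sensor's effective spatial period. A minimal sketch of that final conversion step follows; the spatial period is an assumed input here (the papers derive the sensor's filtering characteristics from the electrode geometry via finite element analysis), and a plain FFT periodogram stands in for their wavelet-smoothed spectrum.

```python
import numpy as np

def velocity_from_spectrum(signal, fs, spatial_period):
    """Estimate particle velocity from the power-spectrum peak of a sensor signal.

    Assumes the idealized spatial-filtering relation v = f_max * spatial_period,
    where spatial_period (metres) is the sensor's effective pitch -- an assumed
    parameter, not something this sketch derives.
    """
    n = len(signal)
    window = np.hanning(n)  # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft((signal - signal.mean()) * window)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f_max = freqs[np.argmax(spectrum)]
    return f_max * spatial_period
```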
Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan
Duan, Zhuohua; Cai, Zixing; Min, Huaqing
2014-01-01
Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) subject to faults such as the sticking of sensors or the slippage of wheels, because the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to hybrid system estimation problems, and they have been widely used in many WMR applications, such as pose tracking, SLAM, video tracking and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to achieve accurate dead reckoning. The main contribution is a systematic method for performing fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318
Parallel global optimization with the particle swarm algorithm.
Schutte, J F; Reinbolt, J A; Fregly, B J; Haftka, R T; George, A D
2004-12-01
Present day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima: large-scale analytical test problems with computationally cheap function evaluations, and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. PMID:17891226
Optimal filtering of gear signals for early damage detection based on the spectral kurtosis
NASA Astrophysics Data System (ADS)
Combet, F.; Gelman, L.
2009-04-01
In this paper, we propose a methodology for the enhancement of small transients in gear vibration signals in order to detect local tooth faults, such as pitting, at an early stage of damage. We propose to apply the optimal denoising (Wiener) filter based on the spectral kurtosis (SK). The originality is to estimate and apply this filter to the gear residual signal, as classically obtained after removing the mesh harmonics from the time synchronous average (TSA). This presents several advantages over the direct estimation from the raw vibration signal: improved signal/noise ratio, reduced interferences from other stages of the gearbox and easier detection of excited structural resonance(s) within the range of the mesh harmonic components. From the SK-based filtered residual signal, called SK-residual, we define the local power as the smoothed squared envelope, which reflects both the energy and the degree of non-stationarity of the fault-induced transients. The methodology is then applied to an industrial case and shows the possibility of detection of relatively small tooth surface pitting (less than 10%) in a two-stage helical reduction gearbox. The adjustment of the resolution for the SK estimation appears to be optimal when the length of the analysis window is approximately matched with the mesh period of the gear. The proposed approach is also compared to an inverse filtering (blind deconvolution) approach. However, the latter turns out to be more unstable and sensitive to noise and shows a lower degree of separation, quantified by the Fisher criterion, between the estimated diagnostic features in the pitted and unpitted cases. Thus, the proposed optimal filtering methodology based on the SK appears to be well adapted for the early detection of local tooth damage in gears.
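The SK statistic underlying this methodology can be sketched from a short-time Fourier transform, using the common normalized fourth-order moment definition. This is a sketch of the statistic only (frame length and overlap are generic choices), not of the authors' full Wiener-filter denoising scheme applied to the gear residual signal.

```python
import numpy as np

def spectral_kurtosis(x, nperseg=256):
    """Spectral kurtosis per frequency bin from a simple half-overlap STFT.

    Uses the common normalized definition SK(f) = E|X|^4 / (E|X|^2)^2 - 2,
    which is near 0 for stationary Gaussian noise and large at frequencies
    that carry transients (e.g. fault-induced impacts).
    """
    step = nperseg // 2
    window = np.hanning(nperseg)
    frames = np.array([x[i:i + nperseg] * window
                       for i in range(0, len(x) - nperseg + 1, step)])
    X = np.fft.rfft(frames, axis=1)
    s2 = np.mean(np.abs(X) ** 2, axis=0)  # second-order spectral moment
    s4 = np.mean(np.abs(X) ** 4, axis=0)  # fourth-order spectral moment
    return s4 / s2 ** 2 - 2.0
```

In an SK-based Wiener filter, bins with high SK (transient-dominated) are kept and bins near zero SK (stationary noise) are attenuated.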
Optimized SU-8 UV-lithographical process for a Ka-band filter fabrication
NASA Astrophysics Data System (ADS)
Jin, Peng; Jiang, Kyle; Tan, Jiubin; Lancaster, M. J.
2005-04-01
The rapid expansion of millimeter-wave communication has brought Ka-band filter fabrication increasing attention from researchers. Described in this paper is a high quality UV-lithographic process for making high aspect ratio parts of a coaxial Ka-band dual mode filter using an ultra-thick SU-8 photoresist layer, which has a potential application in LMDS systems. Due to the strict requirements on the perpendicular geometry of the filter parts, the microfabrication research work has concentrated on modifying the SU-8 UV-lithographical process to improve the vertical angle of the sidewalls and the aspect ratio. Based on a study of the photoactive property of ultra-thick SU-8 layers, an optimized prebake time has been found that minimizes UV absorption by the SU-8. The optimization principle has been tested in a series of UV-lithography experiments with different prebake times, and proved effective. An optimized SU-8 UV-lithographical process has been developed for the fabrication of thick layer filter structures. During test fabrication, microstructures with aspect ratios as high as 40 were produced in 1000 µm ultra-thick SU-8 layers using standard UV-lithography equipment, with sidewall angles controlled between 85 and 90 degrees. The high quality SU-8 structures will then be used as positive moulds for producing copper structures by electroforming. The microfabrication process presented in this paper suits the proposed filter well and shows good potential for volume production of high quality RF devices.
Optimal estimation of diffusion coefficients from single-particle trajectories
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik
2014-02-01
How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycolase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
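The CVE described here has a simple closed form in the basic case: the mean squared single-step displacement plus a lag-one covariance term that cancels the bias from static localization noise. The sketch below assumes pure 1-D diffusion sampled at a fixed interval (no motion blur, no substrate motion); the formula is the standard covariance-based estimator, but treat the exact form as an assumption of this sketch rather than a restatement of the paper.

```python
import numpy as np

def cve_diffusion(x, dt):
    """Covariance-based estimator (CVE) of a 1-D diffusion coefficient.

    D_hat = <dx^2> / (2 dt) + <dx_n dx_{n+1}> / dt
    The lag-one covariance term cancels the positive bias that static
    localization noise adds to the naive MSD-based first term.
    """
    dx = np.diff(np.asarray(x, dtype=float))
    return np.mean(dx ** 2) / (2.0 * dt) + np.mean(dx[1:] * dx[:-1]) / dt
```

For localization noise of variance sigma^2, the first term overestimates D by roughly sigma^2/dt while the lag-one covariance is close to -sigma^2, so the two contributions cancel and no regression or fitting is needed.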
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms
Garro, Beatriz A.; Vázquez, Roberto A.
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
Optimization by marker removal for δf particle simulations
NASA Astrophysics Data System (ADS)
Deng, Wenjun; Fu, Guo-Yong
2014-01-01
A marker removal optimization technique is developed for δf particle simulations. The technique uses the linear eigenmode structure in the equilibrium constant-of-motion space to construct an importance function, then removes some markers based on the importance function and adjusts the weights of the leftover markers to optimize the marker distribution function, so as to save markers and computing time. The technique can be directly applied to single-mode linear simulations. For multi-mode or nonlinear simulations, the technique can still be directly applied if there is one most unstable mode that dominates the simulation and δf does not change too much in the nonlinear stage, otherwise special care is needed, which is discussed in detail in this paper. The technique's effectiveness, e.g., marker saving factor, depends on how localized δf is. The technique can be used for a phase space of arbitrary dimension, as long as the constants of motion in equilibrium can be found. In this paper, the technique is tested in a 2D bump-on-tail simulation and a 5D gyrokinetic toroidal Alfvén eigenmode (TAE) simulation and saves markers by factors of 4 and 19, respectively. The technique is not limited to particle-in-cell (PIC) simulations but could be applied to other approaches of marker particle simulations such as particle-in-wavelet (PIW) and grid-free treecode simulations.
Nanodosimetry-Based Plan Optimization for Particle Therapy
Casiraghi, Margherita; Schulte, Reinhard W.
2015-01-01
Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202
Initial parameters problem of WNN based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Yang, Chi-I.; Wang, Kaicheng; Chang, Kueifang
2014-04-01
The stock price prediction by the wavelet neural network involves minimizing RMSE by adjusting the initial values of the network parameters, the training data percentage, and the threshold value in order to predict the fluctuation of the stock price over two weeks. The objective of this dissertation is to reduce the number of parameters that must be adjusted to minimize RMSE. There are three kinds of initial network parameters: w, t, and d. These three parameters are optimized by the particle swarm optimization method, and the result is compared with the performance of the original program, showing that the RMSE can be made even smaller than before the optimization. It is also shown in this dissertation that there is no need to adjust the training data percentage and the threshold value for 68% of the stocks when the training data percentage is set at 10% and the threshold value is set at 0.01.
NASA Astrophysics Data System (ADS)
Boonyaritdachochai, Panida; Boonchuay, Chanwit; Ongsakul, Weerakorn
2010-06-01
This paper proposes an optimal power redispatching approach for congestion management in deregulated electricity market. Generator sensitivity is considered to indicate the redispatched generators. It can reduce the number of participating generators. The power adjustment cost and total redispatched power are minimized by particle swarm optimization with time varying acceleration coefficients (PSO-TVAC). The IEEE 30-bus and IEEE 118-bus systems are used to illustrate the proposed approach. Test results show that the proposed optimization scheme provides the lowest adjustment cost and redispatched power compared to the other schemes. The proposed approach is useful for the system operator to manage the transmission congestion.
Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.
McMinn, Brian R
2013-11-01
Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both the virus elution solutions and the sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be the most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and AdV41 was 49% and 60%, respectively. By optimizing the secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. PMID:23796954
Optimal design of a bank of spatio-temporal filters for EEG signal classification.
Higashi, Hiroshi; Tanaka, Toshihisa
2011-01-01
The spatial weights for electrodes, called the common spatial pattern (CSP), are known to be effective in EEG signal classification for motor imagery based brain-computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot accommodate multiple brain activities described by different frequency bands and different spatial patterns, such as activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design multiple filters and spatial weights that extract the desired brain activities. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function that is a natural extension of CSP. Moreover, we show through a classification experiment that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery. PMID:22255731
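For context, plain CSP (which the paper's objective function extends) reduces to a generalized eigenvalue problem on the two class covariance matrices. Below is a sketch under standard assumptions (trial-averaged channel covariances, extreme eigenvectors taken as filters); it is not the paper's extended multi-filter method, and the synthetic data are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=1):
    """Common spatial patterns for two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples). Returns
    2*n_pairs spatial filters (rows): the extreme eigenvectors of the
    generalized eigenproblem C1 w = lambda (C1 + C2) w."""
    C1 = np.mean([np.cov(t) for t in X1], axis=0)   # class-1 covariance
    C2 = np.mean([np.cov(t) for t in X2], axis=0)   # class-2 covariance
    vals, vecs = eigh(C1, C1 + C2)                  # ascending eigenvalues
    idx = np.argsort(vals)
    picks = np.concatenate([idx[:n_pairs], idx[-n_pairs:]])
    return vecs[:, picks].T

# Synthetic demo: class 1 strong on channel 0, class 2 strong on channel 1.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 4, 200)) * np.array([3.0, 1.0, 1.0, 1.0])[:, None]
X2 = rng.standard_normal((30, 4, 200)) * np.array([1.0, 3.0, 1.0, 1.0])[:, None]
W = csp_filters(X1, X2)
```

Projecting a trial through a CSP filter yields a signal whose variance is large for one class and small for the other, which is the discriminative feature used by MI-BCI classifiers.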
Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination
NASA Technical Reports Server (NTRS)
Downie, John D.
1992-01-01
Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1999-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1998-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.
Ruiz-Cruz, Riemann; Sanchez, Edgar N; Ornelas-Tellez, Fernando; Loukianov, Alexander G; Harley, Ronald G
2013-12-01
In this paper, the authors propose a particle swarm optimization (PSO) for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG). For the inverse optimal scheme, a control Lyapunov function (CLF) is proposed to obtain an inverse optimal control law in order to achieve trajectory tracking. A posteriori, it is established that this control law minimizes a meaningful cost function. The CLFs depend on matrix selection in order to achieve the control objectives; this matrix is determined by two mechanisms: initially, fixed parameters are proposed for this matrix by a trial-and-error method and then by using the PSO algorithm. The inverse optimal control scheme is illustrated via simulations for the DFIG, including the comparison between both mechanisms. PMID:24273145
NASA Astrophysics Data System (ADS)
Salazar, Juan P. L. C.; Collins, Lance R.
2012-08-01
In this study, we investigate the effect of "biased sampling," i.e., the clustering of inertial particles in regions of the flow with low vorticity, and "filtering," i.e., the tendency of inertial particles to attenuate the fluid velocity fluctuations, on the probability density function of inertial particle accelerations. In particular, we find that the concept of "biased filtering" introduced by Ayyalasomayajula et al. ["Modeling inertial particle acceleration statistics in isotropic turbulence," Phys. Fluids 20, 095104 (2008), 10.1063/1.2976174], in which particles filter stronger acceleration events more than weaker ones, is relevant to the higher order moments of acceleration. Flow topology and its connection to acceleration is explored through invariants of the velocity-gradient, strain-rate, and rotation-rate tensors. A semi-quantitative analysis is performed where we assess the contribution of specific flow topologies to acceleration moments. Our findings show that the contributions of regions of high vorticity and low strain decrease significantly with Stokes number, a non-dimensional measure of particle inertia. The contribution from regions of low vorticity and high strain exhibits a peak at a Stokes number of approximately 0.2. Following the methodology of Ooi et al. ["A study of the evolution and characteristics of the invariants of the velocity-gradient tensor in isotropic turbulence," J. Fluid Mech. 381, 141 (1999), 10.1017/S0022112098003681], we compute mean conditional trajectories in planes formed by pairs of tensor invariants in time. Among the interesting findings is the existence of a stable focus in the plane formed by the second invariants of the strain-rate and rotation-rate tensors. Contradicting the results of Ooi et al., we find a stable focus in the plane formed by the second and third invariants of the strain-rate tensor for fluid tracers.
We confirm, at an even higher Reynolds number, the conjecture of Collins and Keswani ["Reynolds number scaling of particle clustering in turbulent aerosols," New J. Phys. 6, 119 (2004), 10.1088/1367-2630/6/1/119] that inertial particle clustering saturates at large Reynolds numbers. The result is supported by the theory presented in Chun et al. ["Clustering of aerosol particles in isotropic turbulence," J. Fluid Mech. 536, 219 (2005), 10.1017/S0022112005004568].
Heuristic optimization of the scanning path of particle therapy beams
Pardo, J.; Donetti, M.; Bourhaleb, F.; Ansarinejad, A.; Attili, A.; Cirio, R.; Garella, M. A.; Giordanengo, S.; Givehchi, N.; La Rosa, A.; Marchetto, F.; Monaco, V.; Pecka, A.; Peroni, C.; Russo, G.; Sacchi, R.
2009-06-15
Quasidiscrete scanning is a delivery strategy for proton and ion beam therapy in which the beam is turned off when a slice is finished and a new energy must be set but not during the scanning between consecutive spots. Different scanning paths lead to different dose distributions due to the contribution of the unintended transit dose between spots. In this work an algorithm to optimize the scanning path for quasidiscrete scanned beams is presented. The classical simulated annealing algorithm is used. It is a heuristic algorithm frequently used in combinatorial optimization problems, which allows us to obtain nearly optimal solutions in acceptable running times. A study focused on the best choice of operational parameters on which the algorithm performance depends is presented. The convergence properties of the algorithm have been further improved by using the next-neighbor algorithm to generate the starting paths. Scanning paths for two clinical treatments have been optimized. The optimized paths are found to be shorter than the back-and-forth, top-to-bottom (zigzag) paths generally provided by the treatment planning systems. The gamma method has been applied to quantify the improvement achieved on the dose distribution. Results show a reduction of the transit dose when the optimized paths are used. The benefit is clear especially when the fluence per spot is low, as in the case of repainting. The minimization of the transit dose can potentially allow the use of higher beam intensities, thus decreasing the treatment time. The algorithm implemented for this work can optimize efficiently the scanning path of quasidiscrete scanned particle beams. Optimized scanning paths decrease the transit dose and lead to better dose distributions.
Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy
NASA Astrophysics Data System (ADS)
Wang, Ke; Qiu, Ping
2015-05-01
Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposition on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, a criterion for choosing the optimal filtering wavelength has been lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton overlaps substantially with the residual. This optimization is based on the most efficient signal generation and depends entirely on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.
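The proposed criterion is straightforward to apply once the multiphoton signal S and the transmitted pulse energy E have been measured at each candidate cutoff. A sketch with hypothetical numbers (all cutoffs, signals, and energies below are invented for illustration):

```python
import numpy as np

# Hypothetical measurements at five candidate longpass cutoff wavelengths.
cutoffs = np.array([1580.0, 1600.0, 1620.0, 1640.0, 1660.0])  # nm
signal = np.array([9.0, 8.8, 8.1, 6.0, 3.5])   # measured multiphoton signal (a.u.)
energy = np.array([3.0, 2.4, 1.9, 1.5, 1.2])   # pulse energy after the filter (a.u.)
n = 2                                          # n-photon order (2 = two-photon)

# Criterion from the abstract: choose the cutoff maximizing S / E^n,
# i.e., the most efficient signal generation per unit deposited energy.
ratio = signal / energy ** n
best_cutoff = cutoffs[np.argmax(ratio)]
```

In this invented data set the raw signal is largest at the shortest cutoff, but the efficiency ratio peaks at a longer cutoff where more of the residual is blocked, which is exactly the trade-off the criterion is meant to resolve.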
NASA Astrophysics Data System (ADS)
Mao, Jiandong; Li, Jinxuan
2015-10-01
Particle size distribution is essential for describing direct and indirect radiation of aerosols. Because the relationship between the aerosol size distribution and aerosol optical thickness (AOT) is an ill-posed Fredholm integral equation of the first kind, the traditional techniques for determining such size distributions, such as the Phillips-Twomey regularization method, are often ambiguous. Here, we use an approach based on an improved particle swarm optimization algorithm (IPSO) to retrieve aerosol size distribution. Using AOT data measured by a CE318 sun photometer in Yinchuan, we compared the aerosol size distributions retrieved using a simple genetic algorithm, a basic particle swarm optimization algorithm and the IPSO. Aerosol size distributions for different weather conditions were analyzed, including sunny, dusty and hazy conditions. Our results show that the IPSO-based inversion method retrieved aerosol size distributions under all weather conditions, showing great potential for similar size distribution inversions.
The use of an inert, radioactively labeled microsphere as a measure of particle accumulation (filtration activity) by Mulinia lateralis (Say) and Mytilus edulis L. was evaluated. Bottom sediment plus temperature and salinity of the water were varied to induce changes in filtratio...
Particle swarm optimization of ascent trajectories of multistage launch vehicles
NASA Astrophysics Data System (ADS)
Pontani, Mauro
2014-02-01
Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. 
The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
Filter feeders and plankton increase particle encounter rates through flow regime control
Humphries, Stuart
2009-01-01
Collisions between particles or between particles and other objects are fundamental to many processes that we take for granted. They drive the functioning of aquatic ecosystems, the onset of rain and snow precipitation, and the manufacture of pharmaceuticals, powders and crystals. Here, I show that the traditional assumption that viscosity dominates these situations leads to consistent and large-scale underestimation of encounter rates between particles and of deposition rates on surfaces. Numerical simulations reveal that the encounter rate is Reynolds number dependent and that encounter efficiencies are consistent with the sparse experimental data. This extension of aerosol theory has great implications for understanding of selection pressure on the physiology and ecology of organisms, for example filter feeders able to gather food at rates up to 5 times higher than expected. I provide evidence that filter feeders have been strongly selected to take advantage of this flow regime and show that both the predicted peak concentration and the steady-state concentrations of plankton during blooms are ~33% of that predicted by the current models of particle encounter. Many ecological and industrial processes may be operating at substantially greater rates than currently assumed. PMID:19416879
Research of spatial high-pass filtering algorithm in particles real-time measurement system
NASA Astrophysics Data System (ADS)
Jin, Xuanhong; Dai, Shuguang; Mu, Pingan
2010-08-01
With the growing application of CIMS, enterprises increasingly need CAQ systems in the course of flexible, automated production. Automated Visual Inspection (AVI), based on computer vision, is a non-contact measurement approach that combines technologies such as image processing and precision measurement. The particles real-time measurement system analyzes the target image obtained by the computer vision system and extracts the useful measurement information; with this prior knowledge, the user can promptly take measures to reduce floating ash. Based on analysis of the particle images, this paper studies a spatial high-pass filtering method, a gradient operator, matched to the characteristics of the images. In order to remove background interference and enhance the edge lines of particles, kernels in two directions are used to process the images. This spatial high-pass filtering algorithm also supports the subsequent image processing used to obtain useful information about floating-ash particles.
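The abstract does not specify its two-direction kernels; a small sketch of the idea follows, with Sobel-like horizontal and vertical gradient kernels as assumed stand-ins and a synthetic frame in place of real particle imagery.

```python
import numpy as np
from scipy.signal import convolve2d

# Two directional gradient kernels (Sobel-like), standing in for the
# paper's unspecified two-direction operator.
kx = np.array([[-1.0, 0.0, 1.0],
               [-2.0, 0.0, 2.0],
               [-1.0, 0.0, 1.0]])
ky = kx.T

def highpass_edges(img):
    """Gradient-magnitude high-pass: suppresses smooth background,
    enhances particle edge lines."""
    gx = convolve2d(img, kx, mode="same", boundary="symm")
    gy = convolve2d(img, ky, mode="same", boundary="symm")
    return np.hypot(gx, gy)

# Synthetic frame: flat background with one bright square "particle".
img = np.zeros((16, 16))
img[5:10, 5:10] = 1.0
edges = highpass_edges(img)
```

The response is zero over both the flat background and the particle's flat interior, and large only along the particle boundary, which is what makes a later thresholding or measurement step straightforward.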
PCDD/F formation in an iron/potassium-catalyzed diesel particle filter.
Heeb, Norbert V; Zennegg, Markus; Haag, Regula; Wichser, Adrian; Schmid, Peter; Seiler, Cornelia; Ulrich, Andrea; Honegger, Peter; Zeyer, Kerstin; Emmenegger, Lukas; Bonsack, Peter; Zimmerli, Yan; Czerwinski, Jan; Kasper, Markus; Mayer, Andreas
2013-06-18
Catalytic diesel particle filters (DPFs) have evolved to a powerful environmental technology. Several metal-based, fuel soluble catalysts, so-called fuel-borne catalysts (FBCs), were developed to catalyze soot combustion and support filter regeneration. Mainly iron- and cerium-based FBCs have been commercialized for passenger cars and heavy-duty vehicle applications. We investigated a new iron/potassium-based FBC used in combination with an uncoated silicon carbide filter and report effects on emissions of polychlorinated dibenzodioxins/furans (PCDD/Fs). The PCDD/F formation potential was assessed under best and worst case conditions, as required for filter approval under the VERT protocol. TEQ-weighted PCDD/F emissions remained low when using the Fe/K catalyst (37/7.5 μg/g) with the filter and commercial, low-sulfur fuel. The addition of chlorine (10 μg/g) immediately led to an intense PCDD/F formation in the Fe/K-DPF. TEQ-based emissions increased 51-fold from engine-out levels of 95 to 4800 pg I-TEQ/L after the DPF. Emissions of 2,3,7,8-TCDD, the most toxic congener (TEF = 1.0), increased 320-fold, those of 2,3,7,8-TCDF (TEF = 0.1) even 540-fold. Remarkable pattern changes were noticed, indicating a preferential formation of tetrachlorinated dibenzofurans. It has been shown that potassium acts as a structural promoter inducing the formation of magnetite (Fe3O4) rather than hematite (Fe2O3). This may alter the catalytic properties of iron. But the chemical nature of this new catalyst is yet unknown, and we are far from an established mechanism for this new pathway to PCDD/Fs. In conclusion, the iron/potassium-catalyzed DPF has a high PCDD/F formation potential, similar to those of copper-catalyzed filters; the latter are prohibited by Swiss legislation. PMID:23713673
Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.
Elhossini, Ahmed; Areibi, Shawki; Dony, Robert
2010-01-01
This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics. PMID:20064026
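As background to the strength Pareto approach the algorithm builds on, here is a minimal sketch of the underlying bookkeeping: Pareto dominance, SPEA-style strength values, and extraction of the non-dominated front. Minimization of both objectives is assumed, and the point set is arbitrary toy data, not one of the paper's benchmarks.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def strengths(points):
    """SPEA-style strength of each point: how many other points it dominates."""
    return [sum(dominates(p, q) for q in points if q is not p) for p in points]

# Toy bi-objective population.
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0), (5.0, 5.0)]
S = strengths(pts)
front = [p for p in pts if not any(dominates(q, p) for q in pts)]
```

In SPEA2 these strength values feed a fitness assignment (a dominated point's raw fitness sums the strengths of its dominators), and the non-dominated front is what the external archive retains between generations.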
Particle swarm optimization for the clustering of wireless sensors
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.
2003-07-01
Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin
2014-03-01
A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Because the measurements of a MEMS gyroscope are noisy and biased, the gyroscope is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with the drift dynamics of the gyroscope, are employed as the state-space model. Nonlinear attitude kinematics, drift, and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear, high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear, Gaussian subsystem consisting of the angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as input to the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system demonstrate the efficiency of the method. PMID:24342270
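A toy one-axis sketch of the cascade idea: a scalar Kalman filter first smooths the biased, noisy gyro rate, and a bootstrap particle filter over [angle, drift] then fuses that rate with vision-like angle measurements. All noise levels, the random-walk models, and the constant-rate scenario are illustrative assumptions, far simpler than the paper's 3-D quaternion setup.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.02, 500
omega_true, drift_true = 0.5, 0.1                     # rad/s rate and gyro bias
theta_true = omega_true * dt * np.arange(1, T + 1)
gyro = omega_true + drift_true + 0.05 * rng.standard_normal(T)  # biased, noisy
vision = theta_true + 0.02 * rng.standard_normal(T)   # camera-derived angle

# Stage 1: scalar Kalman filter smooths the gyro rate (random-walk rate model).
xr, Pr, q, r = 0.0, 1.0, 1e-4, 0.05 ** 2
rate = np.zeros(T)
for k in range(T):
    Pr += q                       # predict: rate modeled as a random walk
    K = Pr / (Pr + r)             # Kalman gain for the scalar measurement
    xr += K * (gyro[k] - xr)      # update with the raw gyro reading
    Pr *= 1.0 - K
    rate[k] = xr

# Stage 2: bootstrap particle filter on [theta, drift], driven by the KF rate.
N = 500
parts = np.column_stack([np.zeros(N), 0.2 * rng.standard_normal(N)])
for k in range(T):
    parts[:, 1] += 1e-3 * rng.standard_normal(N)      # drift random walk
    parts[:, 0] += (rate[k] - parts[:, 1]) * dt + 0.01 * rng.standard_normal(N)
    w = np.exp(-0.5 * ((vision[k] - parts[:, 0]) / 0.02) ** 2) + 1e-12
    w /= w.sum()                                      # vision likelihood weights
    parts = parts[rng.choice(N, size=N, p=w)]         # multinomial resampling
theta_hat, drift_hat = parts.mean(axis=0)
```

Particles whose drift hypothesis matches the true bias propagate angles that stay consistent with the vision measurements, so resampling concentrates the cloud around both the true attitude and the true drift.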
A particle filter to reconstruct a free-surface flow from a depth camera
NASA Astrophysics Data System (ADS)
Combés, Benoit; Heitz, Dominique; Guibert, Anthony; Mémin, Etienne
2015-10-01
We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The DA scheme is enhanced by using two observations in the correction step instead of the single observation used classically. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example and a flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth-data errors and to initial and inflow conditions is considered. We illustrate the interest of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing temporal sequences of depth observations is investigated. Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat-bottomed tank. It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow based on noisy measurements of the elevation alone.
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
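A toy sketch of the asynchronous scheme using a thread pool: each particle is updated and resubmitted as soon as its own evaluation returns, with no per-iteration barrier, so fast evaluations never wait on slow ones. The PSO constants and the variable-runtime stand-in analysis function are illustrative assumptions, not the paper's wing-optimization problem.

```python
import concurrent.futures as cf
import random
import time

def expensive_eval(x):
    """Stand-in for a costly analysis whose runtime varies by design point."""
    time.sleep(random.uniform(0.0, 0.001))
    return sum(xi * xi for xi in x)

def async_pso(dim=3, n_particles=8, evals=400, seed=0):
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [float("inf")] * n_particles
    gbest, gbest_f = x[0][:], float("inf")
    done = 0
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(expensive_eval, x[i][:]): i for i in range(n_particles)}
        while futures:
            # Block until ANY evaluation finishes; update only that particle
            # (the synchronous variant would wait for the whole iteration).
            fut = next(cf.as_completed(futures))
            i = futures.pop(fut)
            fx = fut.result()
            done += 1
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
            if fx < gbest_f:
                gbest, gbest_f = x[i][:], fx
            if done + len(futures) < evals:
                for d in range(dim):
                    v[i][d] = (0.7 * v[i][d]
                               + 1.5 * rng.random() * (pbest[i][d] - x[i][d])
                               + 1.5 * rng.random() * (gbest[d] - x[i][d]))
                    x[i][d] += v[i][d]
                futures[pool.submit(expensive_eval, x[i][:])] = i
    return gbest_f

best = async_pso()
```

The swarm state each particle sees is whatever is current when its result arrives, which is exactly the slightly stale but barrier-free information flow that gives the asynchronous variant its speedup on heterogeneous clusters.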
NASA Astrophysics Data System (ADS)
Kirchstetter, T.; Preble, C.; Dallmann, T. R.; DeMartini, S. J.; Tang, N. W.; Kreisberg, N. M.; Hering, S. V.; Harley, R. A.
2013-12-01
Diesel particle filters have become widely used in the United States since the introduction in 2007 of a more stringent exhaust particulate matter emission standard for new heavy-duty diesel vehicle engines. California has instituted additional regulations requiring retrofit or replacement of older in-use engines to accelerate emission reductions and air quality improvements. This presentation summarizes pollutant emission changes measured over several field campaigns at the Port of Oakland in the San Francisco Bay Area associated with diesel particulate filter use and accelerated modernization of the heavy-duty truck fleet. Pollutants in the exhaust plumes of hundreds of heavy-duty trucks en route to the Port were measured in 2009, 2010, 2011, and 2013. Ultrafine particle number, black carbon (BC), nitrogen oxides (NOx), and nitrogen dioxide (NO2) concentrations were measured at a frequency ≤ 1 Hz and normalized to measured carbon dioxide concentrations to quantify fuel-based emission factors (grams of pollutant emitted per kilogram of diesel consumed). The size distribution of particles in truck exhaust plumes was also measured at 1 Hz. In the two most recent campaigns, emissions were linked on a truck-by-truck basis to installed emission control equipment via the matching of transcribed license plates to a Port truck database. Accelerated replacement of older engines with newer engines and retrofit of trucks with diesel particle filters reduced fleet-average emissions of BC and NOx. Preliminary results from the two most recent field campaigns indicate that trucks without diesel particle filters emit 4 times more BC than filter-equipped trucks. Diesel particle filters increase emissions of NO2, however, and filter-equipped trucks have NO2/NOx ratios that are 4 to 7 times greater than trucks without filters.
Preliminary findings related to particle size distribution indicate that (a) most trucks emitted particles characterized by a single mode of approximately 100 nm in diameter and (b) new trucks originally equipped with diesel particle filters were 5 to 6 times more likely than filter-retrofitted trucks and trucks without filters to emit particles characterized by a single mode in the range of 10 to 30 nm in diameter.
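A minimal sketch of the carbon-balance calculation behind such fuel-based emission factors (the 0.87 kg carbon per kg diesel mass fraction is a typical literature value, not a figure from this abstract):

```python
def fuel_based_emission_factor(delta_pollutant, delta_co2_carbon, carbon_fraction=0.87):
    """Fuel-based emission factor in grams of pollutant per kilogram of fuel.

    delta_pollutant: background-subtracted pollutant concentration (g/m^3)
    delta_co2_carbon: background-subtracted CO2 expressed as carbon (g C/m^3)
    carbon_fraction: kg of carbon per kg of diesel fuel (assumed value)
    """
    # Carbon balance: nearly all fuel carbon leaves the tailpipe as CO2, so
    # the plume's pollutant-to-carbon ratio scales to grams per kg of fuel.
    return delta_pollutant / delta_co2_carbon * carbon_fraction * 1000.0
```

Dividing by the co-measured CO2 is what makes the factor insensitive to plume dilution between the tailpipe and the sampling inlet.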
Discrete particle swarm optimization with scout particles for library materials acquisition.
Wu, Yi-Ling; Ho, Tsu-Feng; Shyu, Shyong Jian; Lin, Bertrand M T
2013-01-01
Materials acquisition is one of the critical challenges faced by academic libraries. This paper presents an integer programming model of the studied problem by considering how to select materials in order to maximize the average preference and the budget execution rate under some practical restrictions including departmental budget, limitation of the number of materials in each category and each language. To tackle the constrained problem, we propose a discrete particle swarm optimization (DPSO) with scout particles, where each particle, represented as a binary matrix, corresponds to a candidate solution to the problem. An initialization algorithm and a penalty function are designed to cope with the constraints, and the scout particles are employed to enhance the exploration within the solution space. To demonstrate the effectiveness and efficiency of the proposed DPSO, a series of computational experiments are designed and conducted. The results are statistically analyzed, and it is evinced that the proposed DPSO is an effective approach for the studied problem. PMID:24072983
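A compact sketch of a binary (discrete) PSO with a penalty function for a budget-constrained selection problem, in the spirit of the abstract above; the sigmoid bit-sampling rule, penalty weight, and acceleration constants are common textbook choices, not values from the paper, and the scout-particle mechanism is omitted:

```python
import math
import random

def binary_dpso(preferences, costs, budget, n_particles=20, iters=60, seed=1):
    """Binary PSO maximizing total preference subject to cost <= budget."""
    random.seed(seed)
    n = len(preferences)
    penalty = 10.0 * sum(preferences)  # large penalty weight (assumption)

    def fitness(bits):
        cost = sum(c for c, b in zip(costs, bits) if b)
        pref = sum(p for p, b in zip(preferences, bits) if b)
        return pref - penalty * max(0.0, cost - budget)

    # One particle starts at the all-zero (always feasible) solution,
    # so the global best can never end up infeasible.
    swarm = [[0] * n] + [[random.randint(0, 1) for _ in range(n)]
                         for _ in range(n_particles - 1)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [s[:] for s in swarm]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for i, s in enumerate(swarm):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                v = vel[i][d] + 1.5 * r1 * (pbest[i][d] - s[d]) \
                              + 1.5 * r2 * (gbest[d] - s[d])
                vel[i][d] = max(-6.0, min(6.0, v))  # clamp to avoid saturation
                # sigmoid transfer turns velocity into a bit probability
                s[d] = 1 if random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            if fitness(s) > fitness(pbest[i]):
                pbest[i] = s[:]
                if fitness(s) > fitness(gbest):
                    gbest = s[:]
    return gbest, fitness(gbest)
```

Each particle is a 0/1 vector over candidate items, directly analogous to the paper's binary selection matrix flattened into one dimension.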
A Geometry-Based Particle Filtering Approach to White Matter Tractography
Savadjiev, Peter; Rathi, Yogesh; Malcolm, James G.; Shenton, Martha E.; Westin, Carl-Fredrik
2011-01-01
We introduce a fibre tractography framework based on a particle filter which estimates a local geometrical model of the underlying white matter tract, formulated as a 'streamline flow' using generalized helicoids. The method is not dependent on the diffusion model, and is applicable to diffusion tensor (DT) data as well as to high angular resolution reconstructions. The geometrical model allows for a robust inference of local tract geometry, which, in the context of the causal filter estimation, guides tractography through regions with partial volume effects. We validate the method on synthetic data and present results on two types of in vivo data: diffusion tensors and a spherical harmonic reconstruction of the fibre orientation distribution function (fODF). PMID:20879320
Optimal hydrograph separation filter to evaluate transport routines of hydrological models
NASA Astrophysics Data System (ADS)
Rimmer, Alon; Hartmann, Andreas
2014-05-01
Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is an attempt to minimize the subjective influence that a user of the filter technique exerts on the results by the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process-based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances which were selected. The OHS parameters were then used to demonstrate how to develop a benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than only using traditional measures like r² or the Nash-Sutcliffe efficiency.
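A minimal sketch of the kind of recursive digital filter involved, in the classic one-parameter Lyne-Hollick form; the study's OHS scheme additionally optimizes such parameters against measured stream hydrochemistry rather than fixing them by hand:

```python
def baseflow_filter(q, a=0.925):
    """One-parameter recursive digital baseflow filter (Lyne-Hollick form).

    q: streamflow series; a: filter parameter (values near 0.9-0.95 are
    typical literature choices, not values from this study).
    Returns (baseflow, quickflow), i.e. the "slow" and "rapid" fractions.
    """
    quick = 0.0
    baseflow = []
    for k, qk in enumerate(q):
        if k > 0:
            # recursive high-pass step isolates the rapid flow component
            quick = a * quick + (1.0 + a) / 2.0 * (qk - q[k - 1])
        quick = min(max(quick, 0.0), qk)  # constrain to the physical range
        baseflow.append(qk - quick)
    return baseflow, [qk - b for qk, b in zip(q, baseflow)]
```

The constraint step enforces that both fractions stay non-negative and sum to the observed discharge at every time step.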
Constraint Web Service Composition Based on Discrete Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Fang, Xianwen; Fan, Xiaoqin; Yin, Zhixiang
Web service composition provides an open, standards-based approach for connecting web services together to create higher-level business processes. The standards are designed to reduce the complexity required to compose web services, hence reducing time and costs, and increasing overall efficiency in businesses. This paper presents optimization methods for web service composition with independent global constraints, based on Discrete Particle Swarm Optimization (DPSO) and associate Petri nets (APN). Combining with the properties of APN, an efficient DPSO algorithm is presented to search for a legal firing sequence in the APN model. Using legal firing sequences of the Petri net greatly shrinks the DPSO search space for service composition. Finally, to compare our method with approximation methods, simulation experiments are presented. Theoretical analysis and experimental results indicate that this method achieves both lower computational cost and a higher success ratio of service composition.
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela
2015-09-01
Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03119b
Modified particle filtering algorithm for single acoustic vector sensor DOA tracking.
Li, Xinbo; Sun, Haixin; Jiang, Liangxu; Shi, Yaowu; Wu, Yue
2015-01-01
The conventional direction of arrival (DOA) estimation algorithm with a static-sources assumption usually estimates the source angles of two adjacent moments independently, so the correlation between moments is not considered. In this article, we focus on the DOA estimation of moving sources, and a modified particle filtering (MPF) algorithm is proposed with a state space model of a single acoustic vector sensor. Although the particle filtering (PF) algorithm has been introduced for acoustic vector sensor applications, it is not suitable for the case in which one of the source angles is estimated with large deviation, because the two angles (pitch angle and azimuth angle) cannot be simultaneously employed to update the state through the resampling processing of the PF algorithm. To solve the problems mentioned above, the MPF algorithm is proposed, in which the state estimation of the previous moment is introduced into the particle sampling of the present moment to improve the importance function. Moreover, the independence of the pitch angle and azimuth angle is considered, and the two angles are sampled and evaluated, respectively. Then, the MUSIC spectrum function is used as the "likelihood" function of the MPF algorithm, and the modified PF-MUSIC (MPF-MUSIC) algorithm is proposed to improve the root mean square error (RMSE) and the probability of convergence. The theoretical analysis and the simulation results validate the effectiveness and feasibility of the two proposed algorithms. PMID:26501280
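The predict-weight-resample cycle underlying such trackers can be sketched generically as a bootstrap particle filter on a single slowly varying angle; the paper's MPF additionally refines the importance function with the previous state estimate and uses a MUSIC spectrum as the likelihood, whereas this sketch assumes a plain Gaussian observation model:

```python
import math
import random

def bootstrap_pf_track(observations, n_particles=500, proc_std=1.0,
                       obs_std=2.0, seed=0):
    """Bootstrap particle filter for a scalar angle in degrees (a sketch)."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 90.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: random-walk state transition
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # update: Gaussian likelihood of the observation (assumption;
        # the paper uses a MUSIC spectrum here instead)
        w = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        # estimate: posterior (weighted) mean
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # resample: multinomial, for simplicity
        particles = rng.choices(particles, weights=w, k=n_particles)
    return estimates
```

Resampling after every observation is the step the abstract refers to: particles with negligible likelihood are discarded, which is exactly what fails when one of two jointly tracked angles is badly estimated.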
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
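The PCA baseline discussed above amounts to an SVD projection; a minimal sketch is below. Note that it captures only second-order statistics (the covariance of the samples), which is precisely the limitation that motivates the authors' use of ICA; an ICA sketch is not attempted here.

```python
import numpy as np

def pca_reduce(samples, k):
    """Project samples (rows) onto the top-k principal components via SVD."""
    x = samples - samples.mean(axis=0)            # center the data
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    components = vt[:k]                            # top-k directions
    return x @ components.T, components            # scores, basis
```

Applied to a cloud of particle-filter samples, the scores give a compressed representation whose columns are ordered by explained variance.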
Improved near surface heavy impurity detection by a novel charged particle energy filter technique
Ishibashi, K.; Patnaik, B.K.; Parikh, N.R.; Tateno, H.; Hunn, J.D.
1994-12-31
As the typical feature size of silicon integrated circuits, such as in VLSI technology, has become smaller, the surface cleanliness of silicon wafers has become more important. Hence, detection of trace impurities introduced during the processing steps is essential. A novel technique, consisting of a "Charged Particle Energy Filter (CPEF)" used in the path of the scattered helium ions in the conventional Rutherford Backscattering geometry, is proposed and its merits and limitations are discussed. In this technique, an electric field is applied across a pair of plates placed before the detector so that backscattered particles of only a selected energy range go through slits to strike the detector. This can be used to filter out particles from the lighter substrate atoms and thus reduce pulse pileup in the region of the impurity signal. The feasibility of this scheme was studied with silicon wafers implanted with 1×10¹⁴ and 1×10¹³ ⁵⁴Fe/cm² at an energy of 35 keV, and a 0.5 MeV He⁺ analysis beam. It was found that the backscattered ion signals from the Si atoms can be reduced by more than three orders of magnitude. This suggests the detection limit for contaminants can be improved by at least two orders of magnitude compared to the conventional Rutherford Backscattering technique. This technique can be incorporated in 200-300 kV ion implanters for monitoring of surface contaminants in samples prior to implantation.
Particle swarm optimization for radar target recognition and modeling
NASA Astrophysics Data System (ADS)
Jouny, Ismail
2008-04-01
This paper proposes a radar target identification system using down-range profile radar signatures. The recognition is performed using a multi-layer perceptron trained via particle swarm optimization (PSO). The recognition results are compared with those obtained using back-propagation training. The paper also uses PSO for modeling target signatures and extracting target scattering centers, assuming that they can be modeled as an autoregressive moving-average model. Real radar signatures of commercial aircraft are used to assess the performance of the techniques proposed. The results focus on comparing PSO-based techniques with others used for target modeling and recognition.
Optimization of nanoparticle core size for magnetic particle imaging
Ferguson, Matthew R.; Minard, Kevin R.; Krishnan, Kannan M.
2009-05-01
Magnetic Particle Imaging (MPI) is a powerful new diagnostic visualization platform designed for measuring the amount and location of superparamagnetic nanoscale molecular probes (NMPs) in biological tissues. Promising initial results indicate that MPI can be extremely sensitive and fast, with good spatial resolution for imaging human patients or live animals. Here, we present modeling results that show how MPI sensitivity and spatial resolution both depend on NMP-core physical properties, and how MPI performance can be effectively optimized through rational core design. Monodisperse magnetite cores are attractive since they are readily produced with a biocompatible coating and controllable size that facilitates quantitative imaging.
Particle swarm optimization applied to automatic lens design
NASA Astrophysics Data System (ADS)
Qin, Hua
2011-06-01
This paper describes a novel application of the Particle Swarm Optimization (PSO) technique to lens design. A mathematical model is constructed in which the merit function of an optical system is employed as the fitness function, combining the radii of curvature, the thicknesses between lens surfaces, and the refractive indices of the system. Using this function, aberration correction is carried out. A design example using PSO is given. Results show that PSO is a practical and powerful optical design tool; the method no longer depends on the initial lens structure and can arbitrarily create search ranges for the structural parameters of a lens system, which is an important step towards automatic design with artificial intelligence.
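The scheme reduces to canonical global-best PSO driving an arbitrary merit function toward its minimum. A generic sketch follows; the inertia and acceleration constants (0.7, 1.5, 1.5) are common textbook choices rather than values from the paper, and a simple sphere function stands in for an optical merit function:

```python
import random

def pso_minimize(merit, dim, bounds, n_particles=30, iters=200, seed=3):
    """Canonical global-best PSO minimizing a merit function (a sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [merit(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = merit(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the lens-design setting, `pos` would hold curvatures, thicknesses, and indices, and `merit` would evaluate the aberration-based merit function; `bounds` is what lets the designer "arbitrarily create search ranges" for those parameters.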
Particle Swarm Optimization with Watts-Strogatz Model
NASA Astrophysics Data System (ADS)
Zhu, Zhuanghua
Particle swarm optimization (PSO) is a popular swarm intelligence methodology that simulates animal social behaviors. Recent studies show that this type of social behavior forms a complex system; however, in most variants of PSO all individuals lie on a fixed topology, which conflicts with this natural phenomenon. Therefore, in this paper, a new variant of PSO combined with the Watts-Strogatz small-world topology model, called WSPSO, is proposed. In WSPSO, the topology is changed according to Watts-Strogatz rules throughout the whole evolutionary process. Simulation results show the proposed algorithm is effective and efficient.
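The Watts-Strogatz construction underlying such a topology can be sketched as follows: start from a ring lattice and rewire each edge with probability beta. This is only the graph-generation step, not the full WSPSO algorithm, which would additionally route each particle's social influence through its current neighbors:

```python
import random

def watts_strogatz_edges(n, k, beta, seed=7):
    """Build a Watts-Strogatz small-world topology as a set of edges.

    n: number of nodes (particles); k: even number of ring neighbors per
    node; beta: rewiring probability (0 = pure ring, 1 = near-random).
    """
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        # ring lattice: link each node to its k/2 nearest clockwise neighbors
        for j in range(1, k // 2 + 1):
            edges.add((u, (u + j) % n))
    rewired = set()
    for (u, v) in sorted(edges):
        if rng.random() < beta:
            # rewire: keep endpoint u, pick a new partner avoiding
            # self-loops and edges already placed
            choices = [w for w in range(n) if w != u
                       and (u, w) not in rewired and (w, u) not in rewired]
            v = rng.choice(choices)
        rewired.add((u, v))
    return rewired
```

With beta = 0 the swarm communicates on a rigid ring; small nonzero beta introduces the shortcut links that give the "small-world" mixing WSPSO exploits during evolution.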
PMSM Driver Based on Hybrid Particle Swarm Optimization and CMAC
NASA Astrophysics Data System (ADS)
Tu, Ji; Cao, Shaozhong
A novel hybrid particle swarm optimization (PSO) and cerebellar model articulation controller (CMAC) is introduced to the permanent magnet synchronous motor (PMSM) driver. PSO can simulate the random learning among the individuals of population and CMAC can simulate the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments and comparisons have been done in MATLAB/SIMULINK. Analysis among PSO, hybrid PSO-CMAC and CMAC feed-forward control is also given. The results prove that the electric torque ripple and torque disturbance of the PMSM driver can be reduced by using the hybrid PSO-CMAC algorithm.
Generating optimal initial conditions for smooth particle hydrodynamics (SPH) simulations
Diehl, Steven; Rockefeller, Gabriel M; Fryer, Christopher L
2008-01-01
We present a new optimal method to set up initial conditions for smooth particle hydrodynamics (SPH) simulations, which may also be of interest for N-body simulations. This new method is based on weighted Voronoi tessellations (WVTs) and can meet arbitrarily complex spatial resolution requirements. We conduct a comprehensive review of existing SPH setup methods, and outline their advantages, limitations and drawbacks. A serial version of our WVT setup method is publicly available, and we give detailed instructions on how to easily implement the new method on top of an existing parallel SPH code.
Panorama parking assistant system with improved particle swarm optimization method
NASA Astrophysics Data System (ADS)
Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong
2013-10-01
A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
Spatial filter and feature selection optimization based on EA for multi-channel EEG.
Yubo Wang; Mohanarangam, Krithikaa; Mallipeddi, Rammohan; Veluvolu, K C
2015-08-01
The EEG signals employed for BCI systems are generally band-limited. The band-limited multiple Fourier linear combiner (BMFLC) with Kalman filter was developed to obtain amplitude estimates of the EEG signal in a pre-fixed frequency band in real-time. However, the high-dimensionality of the feature vector caused by the application of BMFLC to multi-channel EEG based BCI deteriorates the performance of the classifier. In this work, we apply evolutionary algorithm (EA) to tackle this problem. The real-valued EA encodes both the spatial filter and the feature selection into its solution and optimizes it with respect to the classification error. Three BMFLC based BCI configurations are proposed. Our results show that the BMFLC-KF with covariance matrix adaptation evolution strategy (CMAES) has the best overall performance. PMID:26736755
Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation
NASA Astrophysics Data System (ADS)
Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao
2015-12-01
Spatial join among Web Feature Services (WFS) is time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. Through recursive partitioning, the data skew of sub-areas is exploited to reduce data transmission using spatial semi-joins. Moreover, a filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
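The profitability test can be sketched as a simple cost comparison; all parameter names here are illustrative assumptions, not the paper's actual cost model:

```python
def use_semi_join(filtering_rate, avg_feature_bytes, n_features,
                  semi_join_overhead_bytes):
    """Decide whether a spatial semi-join pays off for one sub-area (sketch).

    Profitable when the GML bytes saved by filtering non-candidate features
    exceed the extra cost of shipping the join keys/bounding boxes first.
    """
    saved = filtering_rate * n_features * avg_feature_bytes
    return saved > semi_join_overhead_bytes
```

A sub-area with a high estimated filtering rate thus gets the semi-join plan, while one where nearly every feature is a join candidate is fetched directly.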
Reducing nonlinear waveform distortion in IM/DD systems by optimized receiver filtering
NASA Astrophysics Data System (ADS)
Zhou, Y. R.; Watkins, L. R.
1994-09-01
Nonlinear waveform distortion caused by the combined effect of fiber chromatic dispersion, self-phase modulation, and amplifier noise limits the attainable performance of high bit-rate, long haul optically repeatered systems. Signal processing in the receiver is investigated and found to be effective in reducing the penalty caused by this distortion. Third order low pass filters, with and without a tapped delay line equalizer are considered. The pole locations or the tap weights are optimized with respect to a minimum bit error rate criterion which accommodates distortion, pattern effects, decision time, threshold setting and noise contributions. The combination of a third order Butterworth filter and a five-tap, fractionally spaced equalizer offers more than 4 dB benefit at 4000 km compared with conventional signal processing designs.
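A tapped-delay-line equalizer of the kind described can be sketched with a standard LMS adaptation loop; the paper optimizes the tap weights against a bit-error-rate criterion, whereas the simpler mean-square-error criterion stands in for it here, and the Butterworth front-end filter is omitted:

```python
def lms_equalize(received, desired, n_taps=5, mu=0.05):
    """Adapt a tapped-delay-line equalizer with LMS (illustrative sketch).

    received: distorted samples; desired: reference symbols; mu: step size.
    Returns the adapted taps and the per-sample squared errors.
    """
    taps = [0.0] * n_taps
    history = [0.0] * n_taps
    errors = []
    for x, d in zip(received, desired):
        history = [x] + history[:-1]               # shift the delay line
        y = sum(t * h for t, h in zip(taps, history))
        e = d - y
        errors.append(e * e)
        # LMS update: move taps along the negative error gradient
        taps = [t + mu * e * h for t, h in zip(taps, history)]
    return taps, errors
```

Driving this with symbols passed through a short dispersive channel shows the squared error shrinking as the five taps learn to undo the intersymbol interference.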
Characterization of exhaled breath particles collected by an electret filter technique.
Tinglev, Åsa Danielsson; Ullah, Shahid; Ljungkvist, Göran; Viklund, Emilia; Olin, Anna-Carin; Beck, Olof
2016-01-01
Aerosol particles that are present in exhaled breath carry nonvolatile components and have gained interest as a specimen for potential biomarkers. Nonvolatile compounds detected in exhaled breath include both endogenous and exogenous compounds. The aim of this study was to study particles collected with a new, simple and convenient filter technique. Samples of breath were collected from healthy volunteers from approximately 30 l of exhaled air. Particles were counted with an optical particle counter and two phosphatidylcholines were measured by liquid chromatography-tandem mass spectrometry. In addition, phosphatidylcholines and methadone were analysed in breath from patients in treatment with methadone, and oral fluid was collected with the Quantisal device. The results demonstrated that the majority of particles are <1 μm in size and that the fraction of larger particles contributes most to the total mass. The phosphatidylcholine PC(16:0/16:0) dominated over PC(16:0/18:1) and represented a major constituent of the particles. The concentration of the PC(16:0/16:0) homolog was significantly correlated (p < 0.001) with total mass. From the low concentration of the two phosphatidylcholines and their relative abundance in oral fluid, a major contribution from the oral cavity could be ruled out. The concentration of PC(16:0/16:0) in breath was positively correlated with age (p < 0.01). An attempt to use PC(16:0/16:0) as a sample size indicator for methadone was not successful, as the large intra-individual variability between samplings even increased after normalization. In conclusion, it was demonstrated that exhaled breath sampled with the filter device represents a specimen corresponding to surfactant. The possible use of PC(16:0/16:0) as a sample size indicator was supported and deserves further investigations.
We propose that the direct and selective collection of the breath aerosol particles is a promising strategy for measurement of nonvolatiles in breath. PMID:26987381
NASA Astrophysics Data System (ADS)
Somasundaram, P.; Muthuselvan, N. B.
This paper presents new, computationally efficient improved particle swarm algorithms for solving the Security Constrained Optimal Power Flow (SCOPF) problem in power systems with the inclusion of FACTS devices. The proposed algorithms are developed based on the combined application of Gaussian and Cauchy probability distribution functions incorporated in Particle Swarm Optimization (PSO). The power flow algorithm in the presence of a Static Var Compensator (SVC), a Thyristor Controlled Series Capacitor (TCSC), and a Unified Power Flow Controller (UPFC) has been formulated and solved. The proposed algorithms are tested on the standard IEEE 30-bus system. The analysis using PSO and modified PSO reveals that the proposed algorithms are relatively simple, efficient, reliable and suitable for real-time applications. These algorithms provide accurate solutions with fast convergence and have the potential to be applied to other power engineering problems.
OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS
Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Stephanus Budilarto
2001-08-20
The proposed research is directed at evaluating the effect of flame aerodynamics on NOx emissions from coal-fired burners in a systematic manner. This fundamental research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low-NOx burners to the kinetic emissions limit (below 0.2 lb./MMBTU). Experimental studies include both cold and hot flow evaluations of the following parameters: flame holder geometry, secondary air swirl, primary and secondary inlet air velocity, coal concentration in the primary air, and coal particle size distribution. Hot flow experiments will also evaluate the effect of wall temperature on burner performance. Cold flow studies will be conducted with surrogate particles as well as pulverized coal. The cold-flow furnace will be similar in size and geometry to the hot-flow furnace but will be designed to use a laser Doppler velocimeter/phase Doppler particle size analyzer. The results of these studies will be used to predict particle trajectories in the hot-flow furnace as well as to estimate the effect of flame holder geometry on the furnace flow field. The hot-flow experiments will be conducted in a novel near-flame down-flow pulverized coal furnace. The furnace will be equipped with externally heated walls. Both reactors will be sized to minimize wall effects on particle flow fields. The cold-flow results will be compared with Fluent computational fluid dynamics model predictions and correlated with the hot-flow results, with the overall goal of providing insight for novel low-NOx burner geometries.
GPU-Based Asynchronous Global Optimization with Particle Swarm
NASA Astrophysics Data System (ADS)
Wachowiak, M. P.; Lambe Foster, A. E.
2012-10-01
The recent upsurge in research into general-purpose applications for graphics processing units (GPUs) has made low cost high-performance computing increasingly more accessible. Many global optimization algorithms that have previously benefited from parallel computation are now poised to take advantage of general-purpose GPU computing as well. In this paper, a global parallel asynchronous particle swarm optimization (PSO) approach is employed to solve three relatively complex, realistic parameter estimation problems in which each processor performs significant computation. Although PSO is readily parallelizable, memory bandwidth limitations with GPUs must be addressed, which is accomplished by minimizing communication among individual population members through asynchronous operations. The effect of asynchronous PSO on robustness and efficiency is assessed as a function of problem and population size. Experiments were performed with different population sizes on NVIDIA GPUs and on single-core CPUs. Results for successful trials exhibit marked speedup increases with the population size, indicating that more particles may be used to improve algorithm robustness while maintaining nearly constant time. This work also suggests that asynchronous operations on the GPU may be viable in stochastic population-based algorithms to increase efficiency without sacrificing the quality of the solutions.
Evaluation of a particle swarm algorithm for biomechanical optimization.
Schutte, Jaco F; Koh, Byung-Il; Reinbolt, Jeffrey A; Haftka, Raphael T; George, Alan D; Fregly, Benjamin J
2005-06-01
Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm's global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms--a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353
Han, Shuxin; Yue, Qinyan; Yue, Min; Gao, Baoyu; Li, Qian; Yu, Hui; Zhao, Yaqin; Qi, Yuanfeng
2009-11-15
Novel filter media-sludge-fly ash ceramic particles (SFCP) were prepared using dewatered sludge, fly ash and clay with a mass ratio of 1:1:1. Compared with commercial ceramic particles (CCP), SFCP had higher total porosity, larger total surface area and lower bulk and apparent density. Tests of heavy metal elements in lixivium proved that SFCP were safe for wastewater treatment. A lab-scale upflow anaerobic bioreactor was employed to ascertain the application of SFCP in denitrification process using acetate as carbon source. The results showed that SFCP reactor brought a relative superiority to CCP reactor in terms of total nitrogen (TN) removal at the optimum C/N ratio of 4.03 when volumetric loading rates (VLR) ranged from 0.33 to 3.69 kg TN (m(3)d)(-1). Therefore, SFCP application, as a novel process of treating wastes with wastes, provided a promising way in sludge and fly ash utilization. PMID:19608336
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is a conceptually simple, easy-to-implement and computationally efficient algorithm with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with a real-coded genetic algorithm (RGA) and the conventional PSO reported in the recent literature. CRPSO-based design results are also compared with PSPICE-based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
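As a rough illustration of the mechanism described above (a direction-reversal factor on the previous velocity, plus a probabilistic "craziness" velocity with its own random sign), the following one-dimensional sketch is an assumption-laden reading of CRPSO, not the authors' exact formulation; all constants are illustrative.

```python
import random

def crpso_velocity(v, x, pbest, gbest, v_crazy=0.1, p_crazy=0.3,
                   c1=2.05, c2=2.05):
    """One-dimensional CRPSO-style velocity update: the previous velocity
    enters with a random direction-reversal sign, and with probability
    p_crazy a small "craziness" velocity with another random sign is
    added to maintain swarm diversity."""
    r1, r2 = random.random(), random.random()
    sign1 = 1.0 if random.random() < 0.5 else -1.0   # direction reversal
    v_new = (r2 * sign1 * v
             + (1.0 - r2) * c1 * r1 * (pbest - x)
             + (1.0 - r2) * c2 * (1.0 - r1) * (gbest - x))
    if random.random() < p_crazy:                    # occasional "craziness"
        sign2 = 1.0 if random.random() < 0.5 else -1.0
        v_new += sign2 * v_crazy
    return v_new
```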
Figueredo-Cardero, Alvio; Chico, Ernesto; Castilho, Leda; de Andrade Medronho, Ricardo
2012-01-01
In the present work, the main fluid flow features inside a rotating cylindrical filtration (RCF) system used as external cell retention device for animal cell perfusion processes were investigated using particle image velocimetry (PIV). The motivation behind this work was to provide experimental fluid dynamic data for such turbulent flow using a high-permeability filter, given the lack of information about this system in the literature. The results shown herein gave evidence that, at the boundary between the filter mesh and the fluid, a slip velocity condition in the tangential direction does exist, which had not been reported in the literature so far. In the RCF system tested, this accounted for a fluid velocity 10% lower than that of the filter tip, which could be important for the cake formation kinetics during filtration. Evidence confirming the existence of Taylor vortices under conditions of turbulent flow and high permeability, typical of animal cell perfusion RCF systems, was obtained. Second-order turbulence statistics were successfully calculated. The radial behavior of the second-order turbulent moments revealed that turbulence in this system is highly anisotropic, which is relevant for performing numerical simulations of this system. PMID:22915477
Heeb, Norbert V; Rey, Maria Dolores; Zennegg, Markus; Haag, Regula; Wichser, Adrian; Schmid, Peter; Seiler, Cornelia; Honegger, Peter; Zeyer, Kerstin; Mohn, Joachim; Bürki, Samuel; Zimmerli, Yan; Czerwinski, Jan; Mayer, Andreas
2015-08-01
Iron-catalyzed diesel particle filters (DPFs) are widely used for particle abatement. Active catalyst particles, so-called fuel-borne catalysts (FBCs), are formed in situ, in the engine, when combusting precursors, which were premixed with the fuel. The obtained iron oxide particles catalyze soot oxidation in filters. Iron-catalyzed DPFs are considered as safe with respect to their potential to form polychlorinated dibenzodioxins/furans (PCDD/Fs). We reported that a bimetallic potassium/iron FBC supported an intense PCDD/F formation in a DPF. Here, we discuss the impact of fatty acid methyl ester (FAME) biofuel on PCDD/F emissions. The iron-catalyzed DPF indeed supported a PCDD/F formation with biofuel but remained inactive with petroleum-derived diesel fuel. PCDD/F emissions (I-TEQ) increased 23-fold when comparing biofuel and diesel data. Emissions of 2,3,7,8-TCDD, the most toxic congener [toxicity equivalence factor (TEF) = 1.0], increased 90-fold, and those of 2,3,7,8-TCDF (TEF = 0.1) increased 170-fold. Congener patterns also changed, indicating a preferential formation of tetra- and penta-chlorodibenzofurans. Thus, an inactive iron-catalyzed DPF becomes active, supporting a PCDD/F formation, when operated with biofuel containing impurities of potassium. Alkali metals are inherent constituents of biofuels. According to the current European Union (EU) legislation, levels of 5 μg/g are accepted. We conclude that risks for a secondary PCDD/F formation in iron-catalyzed DPFs increase when combusting potassium-containing biofuels. PMID:26176879
χ² testing of optimal filters for gravitational wave signals: An experimental implementation
NASA Astrophysics Data System (ADS)
Baggio, L.; Cerdonio, M.; Ortolan, A.; Vedovato, G.; Taffarello, L.; Zendri, J.-P.; Bonaldi, M.; Falferi, P.; Martinucci, V.; Mezzena, R.; Prodi, G. A.; Vitale, S.
2000-05-01
We have implemented likelihood testing of the performance of an optimal filter within the online analysis of AURIGA, a sub-Kelvin resonant-bar gravitational wave detector. We demonstrate the effectiveness of this technique in discriminating between impulsive mechanical excitations of the resonant-bar and other spurious excitations. This technique also ensures the accuracy of the estimated parameters such as the signal-to-noise ratio. The efficiency of the technique to deal with nonstationary noise and its application to data from a network of detectors are also discussed.
Application of Particle Swarm Optimization in Computer Aided Setup Planning
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid
2011-01-01
Recent research aims to integrate computer aided design (CAD) and computer aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning has a basic role in computer aided process planning (CAPP) and significantly affects the overall cost and quality of a machined part. This research focuses on the development of automatic generation of setups and finding the best setup plan under feasible conditions. In order to computerize the setup planning process, three major steps are performed in the proposed system: (a) extraction of the machining data of the part; (b) analysis and generation of all possible setups; (c) optimization to reach the best setup plan based on cost functions. Considering workshop resources such as machine tool, cutter and fixture, all feasible setups can be generated. Then the problem is adapted to technological constraints such as TAD (tool approach direction), tolerance relationships and feature precedence relationships to give a completely real and practical approach. The optimal setup plan is the result of applying the PSO (particle swarm optimization) algorithm to the system using cost functions. A real sample part is illustrated to demonstrate the performance and productivity of the system.
Particle Swarm Optimization with Scale-Free Interactions
Liu, Chen; Du, Wen-Bo; Wang, Wen-Xu
2014-01-01
The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors like flocking birds to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ a scale-free network to represent the inter-individual interactions in the population, named SF-PSO. In contrast to the traditional PSO with fully-connected or regular topology, the scale-free topology used in SF-PSO incorporates the diversity of individuals in searching and information dissemination ability, leading to a quite different optimization process. Systematic results with respect to several standard test functions demonstrate that SF-PSO gives rise to a better balance between convergence speed and optimum quality, accounting for its much better performance than that of the traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications for computational intelligence and complex networks. PMID:24859007
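A scale-free interaction topology of the kind used above is commonly generated by preferential attachment. The minimal sketch below (function name and defaults are illustrative; the paper's exact construction may differ) returns an adjacency list that a PSO could consult for each particle's neighborhood best.

```python
import random

def barabasi_albert(n, m=2, seed=None):
    """Build a scale-free interaction graph by preferential attachment:
    each new node links to up to m existing nodes chosen with probability
    roughly proportional to their degree. Returns an adjacency list
    mapping each particle index to its interaction neighbors."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))      # start from m seed nodes
    repeated = []                 # node list weighted by degree
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # next targets: degree-biased draws (duplicates collapse via set)
        targets = list({rng.choice(repeated) for _ in range(m)})
    return adj
```

High-degree hub particles then disseminate good solutions quickly, while the many low-degree particles preserve search diversity.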
Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me...
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in an anisotropic earth formation. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the values of the model parameters. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation of each iteration and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
Improving Hydrologic Data Assimilation by a Multivariate Particle Filter-Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Yan, H.; DeChant, C. M.; Moradkhani, H.
2014-12-01
Data assimilation (DA) is a popular method for merging information from multiple sources (i.e., models and remote sensing), leading to improved hydrologic prediction. With the increasing availability of satellite observations (such as soil moisture) in recent years, DA is emerging in operational forecast systems. Although these techniques have seen widespread application, developmental research has continued to further refine their effectiveness. This presentation will examine potential improvements to the Particle Filter (PF) through the inclusion of multivariate correlation structures. Applications of the PF typically rely on univariate DA schemes (such as assimilating the observed outlet discharge), and multivariate schemes generally ignore the spatial correlation of the observations. In this study, a multivariate DA scheme is proposed by introducing geostatistics into the newly developed particle filter with Markov chain Monte Carlo (PF-MCMC) method. This new method is assessed by a case study over one of the basins with natural hydrologic processes in the Model Parameter Estimation Experiment (MOPEX), located in Arizona. The multivariate PF-MCMC method is used to assimilate the Advanced Scatterometer (ASCAT) grid (12.5 km) soil moisture retrievals and the observed streamflow at five gages (four inlet and one outlet) into the Sacramento Soil Moisture Accounting (SAC-SMA) model at the same scale (12.5 km), leading to greater skill in hydrologic predictions.
Streamflow data assimilation for the mesoscale hydrologic model (mHM) using particle filtering
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis; Choi, Shin-woo
2015-04-01
Data assimilation has become popular as a way to increase the certainty of hydrologic prediction by accounting for various sources of uncertainty throughout the hydrologic modeling chain. In this study, we develop a data assimilation framework for the mesoscale hydrologic model (mHM 5.2, http://www.ufz.de/mhm) using particle filtering, which is a sequential DA method for non-linear and non-Gaussian models. The mHM is a grid-based distributed model that is based on numerical approximations of dominant hydrologic processes and has similarities with the HBV and VIC models. The developed DA framework for the mHM represents simulation uncertainty by model ensembles and updates spatial distributions of model state variables when new observations become available in each updating time interval. The evaluation of the proposed method is carried out within several large European basins via assimilating multiple streamflow measurements at a daily interval. Dimensional limitations of particle filtering are resolved by effective noise specification methods, which use spatial and temporal correlation of weather forcing data to represent model structural uncertainty. The presentation will focus on gains and limitations of streamflow data assimilation in several hindcasting experiments. In addition, impacts of non-Gaussian distributions of state variables on model performance will be discussed.
Assimilation of microwave brightness temperatures for soil moisture estimation using particle filter
NASA Astrophysics Data System (ADS)
Bi, H. Y.; Ma, J. W.; Qin, S. X.; Zeng, J. Y.
2014-03-01
Soil moisture plays a significant role in global water cycles. Both model simulations and remote sensing observations have their limitations when estimating soil moisture on a large spatial scale. Data assimilation (DA) is a promising tool which can combine model dynamics and remote sensing observations to obtain a more precise ground soil moisture distribution. Among various DA methods, the particle filter (PF) can be applied to non-linear and non-Gaussian systems, thus holding great potential for DA. In this study, a data assimilation scheme based on the residual resampling particle filter (RR-PF) was developed to assimilate microwave brightness temperatures into the macro-scale semi-distributed Variable Infiltration Capacity (VIC) model to estimate surface soil moisture. A radiative transfer model (RTM) was used to link brightness temperatures with surface soil moisture. Finally, the data assimilation scheme was validated with experimental data obtained in Arizona during the Soil Moisture Experiment 2004 (SMEX04). The results show that the estimation accuracy of soil moisture can be improved significantly by RR-PF through assimilating microwave brightness temperatures into the VIC model. Both the overall trends and specific values of the assimilation results are more consistent with ground observations compared with model simulation results.
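Residual resampling, the "RR" step in RR-PF, is a standard particle-filter building block: each particle is first copied a deterministic number of times proportional to its weight, and only the leftover slots are filled stochastically, which reduces resampling variance. A minimal sketch:

```python
import random

def residual_resample(weights, rng=random):
    """Residual resampling for a particle filter: particle i is copied
    floor(N * w_i) times deterministically; the remaining N - k slots are
    filled by multinomial draws on the residual weights."""
    n = len(weights)
    counts = [int(n * w) for w in weights]          # deterministic copies
    k = sum(counts)
    residuals = [n * w - c for w, c in zip(weights, counts)]
    total = sum(residuals)
    indices = [i for i, c in enumerate(counts) for _ in range(c)]
    for _ in range(n - k):                          # multinomial on residuals
        u = rng.random() * total
        acc = 0.0
        for i, r in enumerate(residuals):
            if r <= 0.0:
                continue
            acc += r
            if u <= acc:
                indices.append(i)
                break
        else:
            indices.append(n - 1)                   # guard against rounding
    return indices
```

For weights [0.5, 0.25, 0.125, 0.125] and N = 4, particles 0 and 1 are copied deterministically (twice and once), and only the last slot is drawn at random.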
Ma, Huan; Shen, Henggen; Shui, Tiantian; Li, Qing; Zhou, Liuke
2016-01-01
Size- and time-dependent aerodynamic behaviors of indoor particles, including PM1.0, were evaluated in a school office in order to test the performance of air-cleaning devices using different filters. In-situ real-time measurements were taken using an optical particle counter. The filtration characteristics of filter media, including single-pass efficiency, volume and effectiveness, were evaluated and analyzed. The electret filter (EE) medium shows better initial removal efficiency than the high efficiency (HE) medium in the 0.3–3.5 μm particle size range, while under the same face velocity, the filtration resistance of the HE medium is several times higher than that of the EE medium. During service life testing, the efficiency of the EE medium decreased to 60% with a total purifying air flow of 25 × 10⁴ m³/m². The resistance curve rose slightly before the efficiency reached the bottom, and then increased almost exponentially. The single-pass efficiency of portable air cleaner (PAC) with the pre-filter (PR) or the active carbon granule filter (CF) was relatively poor. While PAC with the pre-filter and the high efficiency filter (PR&HE) showed maximum single-pass efficiency for PM1.0 (88.6%), PAC with the HE was the most effective at removing PM1.0. The enhancement of PR with HE and electret filters augmented the single-pass efficiency, but lessened the airflow rate and effectiveness. Combined with PR, the decay constant of large-sized particles could be greater than for PACs without PR. Without regard to the lifetime, the electret filters performed better with respect to resource saving and purification improvement. A most penetrating particle size range (MPPS: 0.4–0.65 μm) exists in both HE and electret filters; the MPPS tends to become larger after HE and electret filters are combined with PR.
These results serve to provide a better understanding of the indoor particle removal performance of PACs when combined with different kinds of filters in school office buildings. PMID:26742055
McWhinney, Robert D; Badali, Kaitlin; Liggio, John; Li, Shao-Meng; Abbatt, Jonathan P D
2013-04-01
The redox activity of diesel exhaust particles (DEP) collected from a light-duty diesel passenger car engine was examined using the dithiothreitol (DTT) assay. DEP was highly redox-active, causing DTT to decay at a rate of 23-61 pmol min(-1) μg(-1) of particle used in the assay, which was an order of magnitude higher than ambient coarse and fine particulate matter (PM) collected from downtown Toronto. Only 2-11% of the redox activity was in the water-soluble portion, while the remainder occurred at the black carbon surface. This is in contrast to redox-active secondary organic aerosol constituents, in which upward of 90% of the activity occurs in the water-soluble fraction. The redox activity of DEP is not extractable by moderately polar (methanol) and nonpolar (dichloromethane) organic solvents, and is hypothesized to arise from redox-active moieties contiguous with the black carbon portion of the particles. These measurements illustrate that "Filterable Redox Cycling Activity" may therefore be useful to distinguish black carbon-based oxidative capacity from water-soluble organic-based activity. The difference in chemical environment leading to redox activity highlights the need to further examine the relationship between activity in the DTT assay and toxicology measurements across particles of different origins and composition. PMID:23470039
NASA Astrophysics Data System (ADS)
Colecchia, Federico
2014-03-01
Low-energy strong interactions are a major source of background at hadron colliders, and methods of subtracting the associated energy flow are well established in the field. Traditional approaches treat the contamination as diffuse, and estimate background energy levels either by averaging over large data sets or by restricting to given kinematic regions inside individual collision events. On the other hand, more recent techniques take into account the discrete nature of background, most notably by exploiting the presence of substructure inside hard jets, i.e. inside collections of particles originating from scattered hard quarks and gluons. However, none of the existing methods subtract background at the level of individual particles inside events. We illustrate the use of an algorithm that will allow particle-by-particle background discrimination at the Large Hadron Collider, and we envisage this as the basis for a novel event filtering procedure upstream of the official reconstruction chains. Our hope is that this new technique will improve physics analysis when used in combination with state-of-the-art algorithms in high-luminosity hadron collider environments.
Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering
NASA Astrophysics Data System (ADS)
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2016-04-01
This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise assumption. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed based on spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, using the coherence magnitudes as weights, the mean of each cluster is estimated with a weighted mean to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method is on par with or better than the non-local interferometric SAR (NL-InSAR) method.
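In the scalar case, the LMMSE core of a PLOW-type estimator reduces to Wiener shrinkage of each pixel toward the patch mean by the gain var_x / (var_x + noise_var). The sketch below shows only that core; it deliberately omits the clustering, non-local moment estimation and coherence weighting that the paper adds, and all names are illustrative.

```python
def lmmse_patch(noisy, noise_var):
    """Scalar LMMSE (Wiener) shrinkage of one patch: estimate the patch
    mean and signal variance empirically, then shrink each pixel toward
    the mean by the Wiener gain var_x / (var_x + noise_var)."""
    n = len(noisy)
    mean = sum(noisy) / n
    var_y = sum((y - mean) ** 2 for y in noisy) / n
    var_x = max(var_y - noise_var, 0.0)        # signal-variance estimate
    gain = var_x / (var_x + noise_var) if var_x + noise_var > 0 else 0.0
    return [mean + gain * (y - mean) for y in noisy]
```

When the patch variance is dominated by noise the gain approaches 0 (heavy smoothing); when the noise variance is negligible the gain approaches 1 (the patch passes through unchanged), which is the detail-preserving behavior the method relies on.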
Rudell, B.; Blomberg, A.; Helleday, R.; Ledin, M. C.; Lundback, B.; Stjernberg, N.; Horstedt, P.; Sandstrom, T.
1999-01-01
OBJECTIVES: Air pollution particulates have been identified as having adverse effects on respiratory health. The present study was undertaken to further clarify the effects of diesel exhaust on bronchoalveolar cells and soluble components in normal healthy subjects. The study was also designed to evaluate whether a ceramic particle trap at the end of the tail pipe, from an idling engine, would reduce indices of airway inflammation. METHODS: The study comprised three exposures in all 10 healthy never-smoking subjects: air, diluted diesel exhaust, and diluted diesel exhaust filtered with a ceramic particle trap. The exposures were given for 1 hour in randomised order about 3 weeks apart. The diesel exhaust exposure apparatus has previously been carefully developed and evaluated. Bronchoalveolar lavage was performed 24 hours after exposures and the lavage fluids from the bronchial and bronchoalveolar region were analysed for cells and soluble components. RESULTS: The particle trap reduced the mean steady state number of particles by 50%, but the concentrations of the other measured compounds were almost unchanged. It was found that diesel exhaust caused an increase in neutrophils in airway lavage, together with an adverse influence on phagocytosis by alveolar macrophages in vitro. Furthermore, the diesel exhaust was found to be able to induce a migration of alveolar macrophages into the airspaces, together with a reduction in CD3+CD25+ cells (CD = cluster of differentiation). The use of the specific ceramic particle trap at the end of the tail pipe was not sufficient to completely abolish these effects when interacting with the exhaust from an idling vehicle. CONCLUSIONS: The current study showed that exposure to diesel exhaust may induce neutrophil and alveolar macrophage recruitment into the airways and suppress alveolar macrophage function.
The particle trap did not cause significant reduction of effects induced by diesel exhaust compared with unfiltered diesel exhaust. Further studies are warranted to evaluate more efficient treatment devices to reduce adverse reactions to diesel exhaust in the airways. PMID:10492649
NASA Astrophysics Data System (ADS)
Yavari, S.; Zoej, M. J. V.; Mokhtarzade, M.; Mohammadzadeh, A.
2012-07-01
Rational Function Models (RFM) are one of the most notable approaches for spatial information extraction from satellite images, especially where there is no access to the sensor parameters. As there is no physical meaning for the terms of an RFM, in the conventional solution all the terms are involved in the computational process, which causes over-parameterization errors. Thus, in this paper, advanced optimization algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are investigated to determine the optimal terms of the RFM. As the optimization reduces the number of required RFM terms, the possibility of using fewer Ground Control Points (GCPs) in the solution compared to the conventional method is inspected. The results proved that both GA and PSO are able to determine the optimal terms of the RFM and achieve roughly the same accuracy. However, PSO shows to be more effective in terms of computational time. The other important achievement is that the algorithms are able to solve the RFM using fewer GCPs with higher accuracy in comparison to the conventional RFM.
An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel
NASA Astrophysics Data System (ADS)
Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.
2015-01-01
This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width values for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. Four machining parameters are optimized using the APSO algorithm: pulse on-time, pulse off-time, gap voltage and wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationships between control factors and output responses are established using linear regression models in Minitab software. Finally, the APSO algorithm, a nature-inspired metaheuristic technique, is used to optimize the WEDM machining parameters for a higher material removal rate and lower kerf width, with surface roughness as a constraint. The confirmation experiments carried out under the optimum conditions show that the proposed algorithm is effective in finding numerous optimal input machining parameters which can fulfill the wide requirements of a process engineer working in the WEDM industry.
Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing
NASA Astrophysics Data System (ADS)
Cox, Mitchell A.
2015-10-01
The Large Hadron Collider at CERN generates enormous amounts of raw data which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by using several ARM Systems-on-Chip in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high-throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
Optimal hydrograph separation filter to evaluate transport routines of hydrological models
NASA Astrophysics Data System (ADS)
Rimmer, Alon; Hartmann, Andreas
2014-06-01
Hydrograph separation (HS) using recursive digital filter approaches focuses on distinguishing between rapidly occurring discharge components, like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures which perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for estimating the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to benchmark the performance of process-based hydro-geochemical (HG) models. The new HG routine can be used to quantify the degree of knowledge that the stream flow time series itself contributes to the HG analysis, using the newly developed benchmark geochemistry efficiency (BGE). Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the HG substances selected. The BFImax parameter (long-term ratio of baseflow to total streamflow) ranged from 0.26 to 0.94 for SO4-2 and total suspended solids (TSS), respectively. Predictions of SO4-2 transport from a process-based hydrological model were then benchmarked with the proposed HG routine, in order to evaluate the significance of the HG routines in the process-based model. This comparison provides a valuable quality test that would not be obvious when using traditional measures like r2 or the NSE (Nash-Sutcliffe efficiency). The process-based model resulted in r2 = 0.65 and NSE = 0.65, while the benchmark routine results were slightly lower with r2 = 0.61 and NSE = 0.58. However, the comparison between the two models showed a clear advantage for the process-based model, with BGE = 0.15.
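The "rapid"/"slow" split described above can be illustrated with the Eckhardt recursive digital filter, a common HS filter parameterized directly by BFImax. The recession constant and BFImax values below are illustrative defaults, not the study's chemistry-optimized parameters:

```python
def eckhardt_baseflow(q, a=0.98, bfi_max=0.80):
    """Eckhardt recursive digital filter: split total streamflow q into a
    slow (baseflow) and a rapid component. `a` is the recession constant and
    `bfi_max` the long-term baseflow index (the BFImax parameter the study
    estimates against stream hydrochemistry)."""
    b = [q[0] * bfi_max]  # initialize baseflow from the first sample
    for k in range(1, len(q)):
        bk = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q[k]) / (1 - a * bfi_max)
        b.append(min(bk, q[k]))  # baseflow cannot exceed total flow
    rapid = [qk - bk for qk, bk in zip(q, b)]
    return b, rapid
```

Optimizing `a` and `bfi_max` so the slow fraction best matches a conservative tracer is the essence of the OHS idea, removing the user's subjective parameter choice.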
Optimization of nanoparticle core size for magnetic particle imaging
NASA Astrophysics Data System (ADS)
Ferguson, R. Matthew; Minard, Kevin R.; Krishnan, Kannan M.
2009-05-01
Magnetic particle imaging (MPI) is a powerful new research and diagnostic imaging platform that is designed to image the amount and location of superparamagnetic nanoparticles in biological tissue. Here, we present mathematical modeling results that show how MPI sensitivity and spatial resolution both depend on the size of the nanoparticle core and its other physical properties, and how imaging performance can be effectively optimized through rational core design. Modeling is performed using the properties of magnetite cores, since these are readily produced with a controllable size that facilitates quantitative imaging. Results show that very low detection thresholds (of a few nanograms Fe 3O 4) and sub-millimeter spatial resolution are possible with MPI.
Particle Swarm Optimization Approach in a Consignment Inventory System
NASA Astrophysics Data System (ADS)
Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman
2009-09-01
Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. Here a Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally a sensitivity analysis is performed to examine the effects of each parameter on decision variables. Also PSO performance is compared with genetic algorithm.
Kuehn, T.H.; Yang, C.H.; Kulp, R.N.
1998-08-01
The purpose of the present study is to investigate the effect of fan cycling on two types of bag filters. Total particle concentrations and viable bioaerosol concentrations were measured upstream and downstream of the filters.
Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System
NASA Astrophysics Data System (ADS)
Hwang, Soon Sik
This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for particle filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements, applied to the nonlinear integer-resolution problem in GPS CP navigation with particle filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is nonlinear and non-Gaussian. In real-time applications, their estimation accuracy and efficiency are significantly affected by the number of particles and by the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay within bounds given by the confidence interval of a normal probability distribution for a multivariate state. Two required sample sizes, one keeping the mean error and one keeping the variance error within these bounds, are derived. Resampling is triggered when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, must be resolved. The PF is applied to this integer ambiguity resolution problem, in which the relative navigation state estimation involves nonlinear observations and nonlinear dynamics. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space, and the integer ambiguities are resolved without the usual hypothesis tests to search for the integer ambiguity.
The ART manages the number of position samples and the frequency of the resampling step for real-time kinematic GPS navigation. The experimental results demonstrate the performance of the ART and the insensitivity of the proposed approach to GPS CP cycle-slips. Third, GPS has great potential for the development of new collision avoidance systems and is being considered for the next-generation Traffic alert and Collision Avoidance System (TCAS). Current TCAS equipment is capable of broadcasting GPS code information to nearby airplanes, and collision avoidance systems using navigation information based on GPS code have been studied previously. In this dissertation, an aircraft collision detection system using GPS CP information is addressed. The PF with position samples is employed for the CP-based relative position estimation problem, and the same algorithm can be used to determine the vehicle attitude if multiple GPS antennas are used. For a reliable and enhanced collision avoidance system, three-dimensional trajectories are projected using the estimates of the relative position, velocity, and attitude. It is shown that the performance of the GPS CP-based collision detection algorithm meets the accuracy requirements for a precision approach for auto-landing, with significantly fewer false alarms and no missed alarms.
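The resampling decision at the heart of the ART can be sketched with the common effective-sample-size (ESS) trigger plus systematic resampling. Note this ESS criterion is a widely used stand-in, not the dissertation's specific mean/variance error-bound crossing rule:

```python
import random

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; n when uniform, 1 when degenerate."""
    return 1.0 / sum(w * w for w in weights)

def maybe_resample(particles, weights, threshold=0.5):
    """Systematic resampling, triggered only when the ESS falls below a
    fraction of the particle count (so healthy weight sets are left alone)."""
    n = len(particles)
    if effective_sample_size(weights) >= threshold * n:
        return particles, weights              # weights still informative; skip
    u0 = random.random()
    positions = [(i + u0) / n for i in range(n)]  # one stratified offset per slot
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    new, j = [], 0
    for u in positions:
        while cumulative[j] < u:
            j += 1
        new.append(particles[j])
    return new, [1.0 / n] * n                  # reset to uniform weights
```

With uniform weights the filter state passes through untouched; with a degenerate weight vector every slot is refilled from the dominant particle.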
Optimizing magnetite nanoparticles for mass sensitivity in magnetic particle imaging
Ferguson, R. Matthew; Minard, Kevin R.; Khandhar, Amit P.; Krishnan, Kannan M.
2011-01-01
Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, the authors explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency f0. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. The driving field amplitude H0 = 6 mT/μ0 and frequency f0 = 250 kHz were chosen to be suitable for imaging small animals. Experimental results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution and size-dependent magnetic relaxation. Results: The experimental results show a clear variation in the MPI signal intensity as a function of MNP diameter that is in agreement with simulated results. A maximum in the plot of MPI signal vs MNP size indicates there is a particular size that is optimal for the chosen f0. Conclusions: The authors observed that MNPs 15 nm in diameter generate maximum signal amplitude in MPI experiments at 250 kHz. The authors expect the physical basis for this result, the change in magnetic relaxation with MNP size, will impact MPI under other experimental conditions. PMID:21520874
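The equilibrium part of the model above rests on the Langevin theory of superparamagnetism: a larger core carries a larger moment, so its M(H) curve is steeper and saturates sooner, boosting the harmonics MPI detects. A minimal sketch, assuming bulk magnetite saturation magnetization and room temperature, and deliberately omitting the relaxation and size-distribution terms the authors include:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K
MU0 = 4e-7 * math.pi  # vacuum permeability, T m/A

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a series fallback near 0."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetization(d_core_nm, h_field, ms=446e3, temp=300.0):
    """Fractional equilibrium magnetization of a single magnetite core of
    diameter d_core_nm (nm) in field h_field (A/m). ms is bulk magnetite
    saturation magnetization (A/m); values are assumed, not the paper's fits."""
    v = math.pi / 6.0 * (d_core_nm * 1e-9) ** 3   # spherical core volume, m^3
    mu = ms * v                                    # particle moment, A m^2
    return langevin(MU0 * mu * h_field / (KB * temp))

# The paper's 6 mT drive amplitude expressed as H = B/mu0:
H_DRIVE = 6e-3 / MU0
m10, m15, m20 = (magnetization(d, H_DRIVE) for d in (10, 15, 20))
```

In this static picture larger cores always respond more strongly; it is the size-dependent relaxation at f0 = 250 kHz, excluded here, that produces the optimum near 15 nm.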
Lam, Christopher O; Finlay, W H
2009-10-01
Fiber aerosols tend to align parallel to surrounding fluid streamlines in shear flows, making their filtration more difficult. However, previous research indicates that composite particles made from cromoglycic acid fibers coated with small nanoscaled magnetite particles can align with an applied magnetic field. The present research explored the effect of magnetically aligning these fibers to increase their filtration. Nylon net filters were challenged with the aerosol fibers, and efficiency tests were performed with and without a magnetic field applied perpendicular to the flow direction. We investigated the effects of varying face velocities, the amount of magnetite material on the aerosol particles, and magnetic field strengths. Findings from the experiments, matched by supporting single-fiber theories, showed significant efficiency increases at the low face velocity of 1.5 cm s(-1) at all magnetite compositions, with efficiencies more than doubling due to magnetic field alignment in certain cases. At a higher face velocity of 5.12 cm s(-1), filtration efficiencies were less affected by the magnetic field alignment being, at most, 43% higher for magnetite weight compositions up to 30%, while at a face velocity of 10.23 cm s(-1) alignment effects were insignificant. In most cases, efficiencies became independent of magnetic field strength above 50 mT, suggesting full alignment of the fibers. The present data suggest that fiber alignment in a magnetic field may warrant applications in the filtration and detection of fibers, such as asbestos. PMID:19693722
Rod-filter-field optimization of the J-PARC RF-driven H- ion source
NASA Astrophysics Data System (ADS)
Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.
2015-04-01
In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5 π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H- ion source with an internal antenna, no procedure to optimize it has been established. In order to optimize the RFF and establish such a procedure, the beam performance of the J-PARC source was measured with various types of rod-filter magnets (RFMs). By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the plasma electrode (PE) temperature (TPE) lower than 70°C, which minimizes the transverse emittances. A 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for TPE around 60°C.
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990's and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. 
However, due to the relatively high level of complexity and randomness inherent in the selected electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
NASA Astrophysics Data System (ADS)
Kela, K. B.; Arya, L. D.
2014-09-01
This paper describes a methodology for determination of optimum failure rate and repair time for each section of a radial distribution system. An objective function in terms of reliability indices and their target values is selected. These indices depend mainly on failure rate and repair time of a section present in a distribution network. A cost is associated with the modification of failure rate and repair time. Hence the objective function is optimized subject to failure rate and repair time of each section of the distribution network considering the total budget allocated to achieve the task. The problem has been solved using differential evolution and bare bones particle swarm optimization. The algorithm has been implemented on a sample radial distribution system.
Yu, Xiaobing; Cao, Jie; Shan, Haiyan; Zhu, Li; Guo, Jun
2014-01-01
Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, which have been widely applied in many scientific and engineering fields. Unfortunately, both of them can easily become trapped in local optima and lack the ability to escape them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when it clusters around local optima. HPSO-DE enjoys the advantages of both PSO and DE and maintains population diversity. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balanced parameter is discussed in detail. PMID:24688370
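The DE half of the hybrid can be sketched as one DE/rand/1/bin generation; the PSO half, the balanced parameter, and the adaptive mutation of HPSO-DE are omitted in this sketch, and the control parameters F and CR are common textbook defaults, not the paper's:

```python
import random

def de_step(pop, fitness, objective, f_weight=0.5, cr=0.9):
    """One DE/rand/1/bin generation with greedy selection: each individual
    is replaced by its trial vector only if the trial has lower objective."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        a, b, c = random.sample([j for j in range(n) if j != i], 3)
        jrand = random.randrange(dim)  # force at least one mutated coordinate
        trial = [pop[a][d] + f_weight * (pop[b][d] - pop[c][d])
                 if (random.random() < cr or d == jrand) else pop[i][d]
                 for d in range(dim)]
        new_pop.append(trial if objective(trial) < fitness[i] else pop[i])
    return new_pop, [objective(x) for x in new_pop]
```

Because selection is greedy, no individual's fitness ever worsens across a generation, which is the property the hybrid relies on from its DE component.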
NASA Astrophysics Data System (ADS)
Makino, Yohei; Fujii, Toshinori; Imai, Jun; Funabiki, Shigeyuki
Recently, it has become desirable to develop energy control technologies for environmental issues such as global warming and the exhaustion of fossil fuels. Power fluctuations in large power consumers may cause instability of electric power systems and increase the cost of electric power facilities and electricity charges. Developing electric power-leveling systems (EPLS) to compensate for the power fluctuations is necessary for future electric power systems. EPLS with an SMES have been proposed as one countermeasure for electric power quality improvement. The SMES is superior to other energy storage devices in response and storage efficiency. The authors previously proposed an EPLS based on fuzzy control with an SMES. For practical implementation, optimizing the control gain and SMES capacity is an important issue. This paper proposes a new optimization method for the EPLS. The proposed algorithm is a novel particle swarm optimization based on taper-off reflectance (TRPSO). The proposed TRPSO optimizes the design variables of the EPLS efficiently and effectively.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
Particle velocity estimation based on a two-microphone array and Kalman filter.
Bai, Mingsian R; Juan, Shen-Wei; Chen, Ching-Cheng
2013-03-01
A traditional method to measure particle velocity is based on the finite difference (FD) approximation of the pressure gradient using a pair of well-matched pressure microphones. This approach is known to be sensitive to sensor noise and mismatch. Recently, a double hot-wire sensor termed the Microflown became available, enabled by micro-electro-mechanical system technology. This sensor eliminates the robustness issue of the conventional FD-based methods. In this paper, an alternative two-microphone approach termed the u-sensor is developed from the perspective of robust adaptive filtering. With two ordinary microphones, the proposed u-sensor does not require novel fabrication technology. In the method, plane wave and spherical wave models are employed in the formulation of a Kalman filter with process and measurement noise taken into account. Both numerical and experimental investigations were undertaken to validate the proposed u-sensor technique. The results have shown that the proposed approach attained better performance than the FD method, and comparable performance to a Microflown sensor. PMID:23464014
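The baseline FD method the paper compares against follows from the linearized Euler equation, rho du/dt = -dp/dx: approximate the pressure gradient by the two-microphone difference, then integrate in time. A minimal sketch (no windowing, calibration, or mismatch correction, which real implementations need):

```python
def fd_particle_velocity(p1, p2, spacing, fs, rho=1.21):
    """Finite-difference particle velocity estimate between two pressure
    microphones separated by `spacing` (m), sampled at `fs` (Hz).
    Euler's equation gives du/dt = -(p2 - p1) / (rho * spacing);
    forward-Euler time integration accumulates the velocity."""
    u, out = 0.0, []
    dt = 1.0 / fs
    for a, b in zip(p1, p2):
        u += -(b - a) / (rho * spacing) * dt
        out.append(u)
    return out
```

The chain of a difference (amplifying sensor mismatch) followed by an integration (accumulating noise) is exactly why this estimate is fragile, motivating the paper's Kalman-filter formulation.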
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
Video Tracking Using Dual-Tree Wavelet Polar Matching and Rao-Blackwellised Particle Filter
NASA Astrophysics Data System (ADS)
Pang, Sze Kim; Nelson, James D. B.; Godsill, Simon J.; Kingsbury, Nick
2010-12-01
We describe a video tracking application using the dual-tree Polar Matching Algorithm. We develop the dynamical and observation models in a probabilistic setting and study the empirical probability distribution of the Polar Matching output. We model the visible and occluded target statistics using Beta distributions. This is incorporated into a Track-Before-Detect (TBD) solution for the overall observation likelihood of each video frame and provides a principled derivation of the observation likelihood. Due to the nonlinear nature of the problem, we design a Rao-Blackwellised Particle Filter (RBPF) for the sequential inference. Computer simulations demonstrate the ability of the algorithm to track a simulated video moving target in an urban environment with complete and partial occlusions.
Indoor anti-occlusion visible light positioning systems based on particle filtering
NASA Astrophysics Data System (ADS)
Jiang, Meng; Huang, Zhitong; Li, Jianfeng; Zhang, Ruqi; Ji, Yuefeng
2015-04-01
As one of the most popular categories of mobile services, indoor location-based services have seen rapid growth over the past decades. Indoor positioning methods based on Wi-Fi, radio-frequency identification or Bluetooth are widely commercialized; however, they have disadvantages such as low accuracy or high cost. An emerging method using visible light has been under research recently. Existing visible light positioning (VLP) schemes using carrier allocation, time allocation and multiple receivers all have limitations. This paper presents a novel mechanism using particle filtering in a VLP system. With this method no additional devices are needed, and the occlusion problem in visible light is alleviated, which effectively enhances the flexibility of indoor positioning.
Canedo-Rodriguez, Adrian; Rodriguez, Jose Manuel; Alvarez-Santos, Victor; Iglesias, Roberto; Regueiro, Carlos V
2015-01-01
In wireless positioning systems, the transmitter's power is usually fixed. In this paper, we explore the use of varying transmission powers to increase the performance of a wireless localization system. To this extent, we have designed a robot positioning system based on wireless motes. Our motes use an inexpensive, low-power sub-1-GHz system-on-chip (CC1110) working in the 433-MHz ISM band. Our localization algorithm is based on a particle filter and infers the robot position by: (1) comparing the power received with the expected one; and (2) integrating the robot displacement. We demonstrate that the use of transmitters that vary their transmission power over time improves the performance of the wireless positioning system significantly, with respect to a system that uses fixed power transmitters. This opens the door for applications where the robot can localize itself actively by requesting the transmitters to change their power in real time. PMID:25942641
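One predict/update cycle of the localization loop described above, combining steps (1) and (2), can be sketched as follows. The log-distance path-loss model and all its parameters (reference power, path-loss exponent, noise levels) are illustrative assumptions, not the paper's calibrated values for the CC1110 motes:

```python
import math
import random

def expected_rssi(pos, tx, p0=-40.0, n=2.5):
    """Log-distance path-loss model: received power falls off as
    10*n*log10(d) from a reference power p0 at 1 m (parameters illustrative)."""
    d = max(math.dist(pos, tx), 0.1)  # clamp to avoid log of ~0
    return p0 - 10.0 * n * math.log10(d)

def pf_step(particles, weights, displacement, rssi_meas, tx, sigma=4.0):
    """One particle-filter cycle: shift particles by the odometry displacement
    (step 2), then reweight by agreement between measured and expected
    received power (step 1), under Gaussian measurement noise of `sigma` dB."""
    new_w = []
    for i, (x, y) in enumerate(particles):
        x += displacement[0] + random.gauss(0, 0.05)  # motion noise
        y += displacement[1] + random.gauss(0, 0.05)
        particles[i] = (x, y)
        err = rssi_meas - expected_rssi((x, y), tx)
        new_w.append(weights[i] * math.exp(-0.5 * (err / sigma) ** 2))
    total = sum(new_w) or 1.0
    return particles, [w / total for w in new_w]
```

A particle near the true position explains the measured power well and absorbs nearly all the weight, while distant hypotheses are suppressed.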
Detecting disease outbreaks using a combined Bayesian network and particle filter approach.
Dawson, Peter; Gailis, Ralph; Meehan, Alaster
2015-04-01
Evaluating whether a disease outbreak has occurred based on limited information in medical records is inherently a probabilistic problem. This paper presents a methodology for consistently analysing the probability that a disease targeted by a surveillance system has appeared in the population, based on the medical records of the individuals within the target population, using a Bayesian network. To enable the system to produce a probability density function of the fraction of the population that is infected, a mathematically consistent conjoining of Bayesian networks and particle filters is used. This approach is tested against the default algorithm of ESSENCE Desktop Edition (which adaptively uses Poisson, exponentially weighted moving average and linear regression techniques as needed), and is shown, for the simulated test data used, to give significantly shorter detection times at false alarm rates of practical interest. This methodology shows promise to greatly improve detection times for outbreaks in populations where timely electronic health records are available for data-mining. PMID:25637764
Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf
2015-06-01
Long-term electrocardiogram (ECG) often suffers from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated for prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processor unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite, and infinite impulse response, moving mean, and moving median filter. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed to autoregressive modeled, real baseline wander. A Monte-Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low SBR situations. Using a signal segment of 64 mega samples that is filtered as entire unit, wavelet filtering of a seven-day high-resolution ECG is computed within less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, with which computational burden can be strongly reduced. PMID:25675449
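The moving-median filter evaluated above is the simplest of the compared methods to sketch: the sliding-window median tracks the slow baseline (and, unlike a moving mean, ignores QRS spikes), and subtracting it leaves the wander-free signal. A serial sketch with an arbitrary window length, without the GPU parallelization the paper contributes:

```python
import statistics

def remove_baseline_moving_median(ecg, window):
    """Moving-median baseline wander removal: the median over a sliding
    window of `window` samples estimates the slow baseline, which is then
    subtracted. `window` should be odd and longer than one QRS complex;
    edges use a truncated window."""
    half = window // 2
    n = len(ecg)
    baseline = [statistics.median(ecg[max(0, i - half):min(n, i + half + 1)])
                for i in range(n)]
    return [s - b for s, b in zip(ecg, baseline)]
```

On a pure linear drift the interior output is exactly zero, since the centered median of a ramp equals its middle sample.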
Field, Matthew A.; Cho, Vicky
2015-01-01
A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable, however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high-quality 'genome in a bottle' reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining the variant calls of multiple tools results in the best-quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or the union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline: a strategy that requires careful software selection, variant caller filter optimization, and combined variant calls in order to effectively minimize false-negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
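The intersection/union trade-off described above is plain set algebra over variant calls; a minimal sketch, assuming variants are keyed by `(chrom, pos, ref, alt)` tuples (the keying scheme is an assumption, not specified in the abstract):

```python
def combine_calls(callsets, mode="intersection"):
    """Combine per-tool variant calls keyed by (chrom, pos, ref, alt).

    intersection -> favors specificity (only calls every tool agrees on);
    union -> favors sensitivity (any call from any tool), mirroring the
    trade-off described in the abstract.
    """
    sets = [set(c) for c in callsets]
    if mode == "intersection":
        out = set.intersection(*sets)
    else:
        out = set.union(*sets)
    return sorted(out)
```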
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated into a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated using real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model trained via the ANFIS.
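The p-step-ahead stage can be sketched as propagating each particle through the fault-growth model until it crosses a failure threshold, so the crossing times form an empirical RUL pdf. Here a toy exponential growth law stands in for the trained ANFIS, and all names and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rul_histogram(particles, fault_model, threshold, max_steps, noise_std=0.02):
    """Push every particle through the (assumed) fault-growth model until it
    crosses the failure threshold; the per-particle crossing times are an
    empirical sample from the RUL distribution."""
    x = particles.copy()
    rul = np.full(len(x), max_steps)
    alive = np.ones(len(x), dtype=bool)
    for step in range(1, max_steps + 1):
        x[alive] = fault_model(x[alive]) + rng.normal(0, noise_std, alive.sum())
        crossed = alive & (x >= threshold)
        rul[crossed] = step
        alive &= ~crossed
    return rul
```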
Spatio-spectral color filter array design for optimal image recovery.
Hirakawa, Keigo; Wolfe, Patrick J
2008-10-01
In digital imaging applications, data are typically obtained via a spatial subsampling procedure implemented as a color filter array-a physical construction whereby only a single color value is measured at each pixel location. Owing to the growing ubiquity of color imaging and display devices, much recent work has focused on the implications of such arrays for subsequent digital processing, including in particular the canonical demosaicking task of reconstructing a full color image from spatially subsampled and incomplete color data acquired under a particular choice of array pattern. In contrast to the majority of the demosaicking literature, we consider here the problem of color filter array design and its implications for spatial reconstruction quality. We pose this problem formally as one of simultaneously maximizing the spectral radii of luminance and chrominance channels subject to perfect reconstruction, and-after proving sub-optimality of a wide class of existing array patterns-provide a constructive method for its solution that yields robust, new panchromatic designs implementable as subtractive colors. Empirical evaluations on multiple color image test sets support our theoretical results, and indicate the potential of these patterns to increase spatial resolution for fixed sensor size, and to contribute to improved reconstruction fidelity as well as significantly reduced hardware complexity. PMID:18784035
Spectral filtering optimization of a measuring channel of an x-ray broadband spectrometer
NASA Astrophysics Data System (ADS)
Emprin, B.; Troussel, Ph.; Villette, B.; Delmotte, F.
2013-05-01
A new channel of an X-ray broadband spectrometer has been developed for the 2 - 4 keV spectral range. It performs spectral filtering using a non-periodic multilayer mirror. The channel is composed of a filter, an aperiodic multilayer mirror, and a detector. The optical coating of the mirror was designed and fabricated such that the reflectivity is above 8% over almost the entire 2 - 4 keV bandwidth and below 2% outside it. The mirror is optimized for working at 1.9° grazing incidence. It is coated with a stack of 115 non-periodic chromium/scandium (Cr/Sc) layers, with thicknesses between 0.6 nm and 7.3 nm, and a 3 nm thick top SiO2 layer to protect the stack from oxidization. To control the thin layer thicknesses, we produced specific multilayer mirrors consisting of a superposition of two periodic Cr/Sc multilayers with the layer to calibrate in between. The mirror and subnanometric-layer characterizations were made at the Laboratoire Charles Fabry (LCF) with a grazing-incidence reflectometer working at 8.048 keV (Cu Kα radiation) and at the synchrotron radiation facility SOLEIL on the hard X-ray branch of the "Metrology" beamline. The reflectivity of the mirrors as a function of photon energy was measured at the Physikalisch-Technische Bundesanstalt (PTB) laboratory at the synchrotron radiation facility BESSY II.
Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging
NASA Astrophysics Data System (ADS)
Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.
2012-04-01
Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employs a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation work and evaluated the image characteristics. We also performed experiments in which DTS images of enhanced anatomical resolution were successfully obtained using the algorithm, results that are promising for our ongoing application of DTS to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.
An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter
NASA Astrophysics Data System (ADS)
Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning
2015-08-01
An optimized strain demodulation method based on a piezo-electric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. By driving the PZT continuously in a parallel processing mode, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and the driving voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the compensated test values and the theoretical values is less than 1 pm/με.
NASA Astrophysics Data System (ADS)
Takeda, Yasuhiko; Iizuka, Hideo; Ito, Tadashi; Mizuno, Shintaro; Hasegawa, Kazuo; Ichikawa, Tadashi; Ito, Hiroshi; Kajino, Tsutomu; Higuchi, Kazuo; Ichiki, Akihisa; Motohiro, Tomoyoshi
2015-08-01
We have theoretically investigated photovoltaic cells used under the illumination condition of monochromatic light incident from a particular direction, which is very different from that for solar cells under natural sunlight, using detailed balance modeling. A multilayer bandpass filter formed on the surface of the cell has been found to trap the light generated by radiative recombination inside the cell, reduce emission from the cell, and consequently improve conversion efficiency. The light trapping mechanism is interpreted in terms of a one-dimensional photonic crystal, and the design guide to optimize the multilayer structure has been clarified. For obliquely incident illumination, as well as normal incidence, a significant light trapping effect has been achieved, although the emission patterns are extremely different from each other depending on the incident directions.
Diesel passenger car PM emissions: From Euro 1 to Euro 4 with particle filter
NASA Astrophysics Data System (ADS)
Tzamkiozis, Theodoros; Ntziachristos, Leonidas; Samaras, Zissis
2010-03-01
This paper examines the impact of emission control and fuel technology development on the emissions of gaseous and, in particular, PM pollutants from diesel passenger cars. Three cars in five configurations in total were measured, covering the range from Euro 1 to Euro 4 standards. The emission control ranged from no aftertreatment in the Euro 1 case, to an oxidation catalyst in Euro 2, and two oxidation catalysts with exhaust gas recirculation in Euro 3 and Euro 4, while a catalyzed diesel particle filter (DPF) fitted to the Euro 4 car led to a Euro 4 + DPF configuration. Both certification-test and real-world driving cycles were employed. The results showed that CO and HC emissions were much lower than the emission standard over the hot-start real-world cycles. However, vehicle technologies from Euro 2 to Euro 4 exceeded the NOx and PM emission levels over at least one real-world cycle. The NOx emission level reached up to 3.6 times the certification level in the case of the Euro 4 car. PM emissions were up to 40% and 60% higher than the certification level for the Euro 2 and Euro 3 cars, while the Euro 4 car emitted close to or slightly below the certification level over the real-world driving cycles. PM mass reductions from Euro 1 to Euro 4 were associated with a relevant decrease in total particle number, in particular over the certification test. This was not followed by a corresponding reduction in solid particle number, which remained rather constant across the four technologies at 0.86 × 10¹⁴ km⁻¹ (coefficient of variation 9%). As a result, the ratio of solid to total particle number ranged from ~50% in Euro 1 to 100% in Euro 4. A significant reduction of more than three orders of magnitude in solid particle number is achieved with the introduction of the DPF. However, the potential for nucleation mode formation at high speed from the DPF car is an issue that needs to be considered in the overall assessment of its environmental benefit. Finally, comparison of the mobility and aerodynamic diameters of airborne particles led to fractal dimensions dropping from 2.60 (Euro 1) to 2.51 (Euro 4), denoting a looser structure with improving technology.
Templeton, Michael R; Andrews, Robert C; Hofmann, Ron
2007-06-01
This bench-scale study investigated the passage of particle-associated bacteriophage through a dual-media (anthracite-sand) filter over a complete filter cycle and the effect on subsequent ultraviolet (UV) disinfection. Two model viruses, bacteriophages MS2 and T4, were considered. The water matrix was de-chlorinated tap water with either kaolin or Aldrich humic acid (AHA) added and coagulated with alum to form floc before filtration. The turbidity of the influent flocculated water was 6.4+/-1.5 NTU. Influent and filter effluent turbidity and particle counts were measured as well as headloss across the filter media. Filter effluent samples were collected for phage enumeration during three filter cycle stages: (i) filter ripening; (ii) stable operation; and (iii) end of filter cycle. Stable filter operation was defined according to a filter effluent turbidity goal of <0.3 NTU. Influent and filter effluent samples were subsequently exposed to UV light (254 nm) at 40 mJ/cm(2) using a low pressure UV collimated beam. The study found statistically significant differences (alpha=0.05) in the quantity of particle-associated phage present in the filter effluent during the three stages of filtration. There was reduced UV disinfection efficiency due to the presence of particle-associated phage in the filter effluent in trials with bacteriophage MS2 and humic acid floc. Unfiltered influent water samples also resulted in reduced UV inactivation of phage relative to particle-free control conditions for both phages. Trends in filter effluent turbidity corresponded with breakthrough of particle-associated phage in the filter effluent. The results therefore suggest that maintenance of optimum filtration conditions upstream of UV disinfection is a critical barrier to particle-associated viruses. PMID:17433406
Cho, Kyungmin Jacob; Turkevich, Leonid; Miller, Matthew; McKay, Roy; Grinshpun, Sergey A.; Ha, KwonChul; Reponen, Tiina
2015-01-01
This study investigated differences in penetration between fibers and spherical particles through faceseal leakage of an N95 filtering facepiece respirator. Three cyclic breathing flows were generated corresponding to mean inspiratory flow rates (MIF) of 15, 30, and 85 L/min. Fibers had a mean diameter of 1 μm and a median length of 4.9 μm (calculated aerodynamic diameter, dae = 1.73 μm). Monodisperse polystyrene spheres with a mean physical diameter of 1.01 μm (PSI) and 1.54 μm (PSII) were used for comparison (calculated dae = 1.05 and 1.58 μm, respectively). Two optical particle counters simultaneously determined concentrations inside and outside the respirator. Geometric means (GMs) for filter penetration of the fibers were 0.06, 0.09, and 0.08% at MIF of 15, 30, and 85 L/min, respectively. Corresponding values for PSI were 0.07, 0.12, and 0.12%. GMs for faceseal penetration of fibers were 0.40, 0.14, and 0.09% at MIF of 15, 30, and 85 L/min, respectively. Corresponding values for PSI were 0.96, 0.41, and 0.17%. Faceseal penetration decreased with increased breathing rate for both types of particles (p ≤ 0.001). GMs of filter and faceseal penetration of PSII at an MIF of 30 L/min were 0.14% and 0.36%, respectively. Filter penetration and faceseal penetration of fibers were significantly lower than those of PSI (p < 0.001) and PSII (p < 0.003). This confirmed that the higher penetration of PSI was not due to its slightly smaller aerodynamic diameter, indicating that the shape of fibers rather than their calculated mean aerodynamic diameter is the prevailing factor in deposition mechanisms through the tested respirator. In conclusion, faceseal penetration of fibers and spherical particles decreased with increasing breathing rate, which can be explained by increased capture by impaction. Spherical particles had 2.0–2.8 times higher penetration through faceseal leaks and 1.1–1.5 times higher penetration through filter media than fibers, which can be attributed to differences in interception losses. PMID:23339437
Human tracking in thermal images using adaptive particle filters with online random forest learning
NASA Astrophysics Data System (ADS)
Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal
2013-11-01
This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination with shadows and cluttered backgrounds. To improve human tracking performance while minimizing computation time, this study proposes online learning of classifiers based on particle filters and a combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), an ensemble of decision trees for confidence estimation, and the confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated by online learning using positive and negative examples with LID and OCS-LBP features. In the next stage, the learned RF classifiers are used to detect the most likely target position in the subsequent frame. The RFs are then learned again by means of fast retraining with the tracked object and background appearance in the new frame. The proposed algorithm was successfully applied to various thermal test videos, and its tracking performance is better than those of other methods.
Tracking of mitochondrial transports using a particle filtering method with a spatial constraint
NASA Astrophysics Data System (ADS)
Hong, Sungmin; Shim, Hackjoon; Chung, Yoojin
2011-09-01
This paper describes a tracking method to trace the movements of fluorescently labeled mitochondria in time-lapse image sequences. It is based on particle filtering, which is a state-of-the-art tracking method, and is enhanced with a spatial constraint to improve robustness. Since mitochondria move only through axons, the spatial constraint is generated by axon segmentation on a single frame, which is the average of all the frames. The spatial constraint limits the search space of the state vector and, consequently, lowers the chance for the tracking to get lost. Using a background subtraction algorithm, the proposed method is also equipped with automatic detection of starting points, thus minimizing the requirement for user input. Implementation of the proposed method for tracking of fluorescently labeled mitochondria in time-lapse images showed substantially improved robustness and speed compared to a conventional method. With these improvements, this new particle tracking method is expected to increase the throughput of fluorescently labeled mitochondrial transport experiments, which are required for neuroscience research.
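The spatial constraint can be sketched as a rejection step in the particle propagation: proposed moves that leave the segmented axon mask are discarded, shrinking the search space of the state vector. This is a simple stand-in for the paper's constraint handling; the function names and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def constrained_propagate(particles, axon_mask, motion_std=2.0):
    """Diffuse particles, then reject moves that leave the segmented axon
    mask (a simple rejection scheme; the paper's constraint handling may
    differ). axon_mask is a boolean image, particles are (row, col) floats."""
    prop = particles + rng.normal(0, motion_std, particles.shape)
    r = np.clip(prop[:, 0].round().astype(int), 0, axon_mask.shape[0] - 1)
    c = np.clip(prop[:, 1].round().astype(int), 0, axon_mask.shape[1] - 1)
    ok = axon_mask[r, c]
    prop[~ok] = particles[~ok]           # stay put if the move left the axon
    return prop
```

Because rejected particles keep their previous (valid) position, every particle remains on the axon by construction, which is one way the constraint lowers the chance of the tracker getting lost.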
Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen
2009-01-15
A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard-dose CT images; there was no significant difference in image noise between reduced-dose scans filtered using the 3D ORA and standard-dose CT scans (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.
OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS
Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Stephanus Budilarto
2001-09-04
It is well understood that the stability of axial diffusion flames is dependent on the mixing behavior of the fuel and combustion air streams. Combustion aerodynamic texts typically describe flame stability and transitions from laminar diffusion flames to fully developed turbulent flames as a function of increasing jet velocity. Turbulent diffusion flame stability is greatly influenced by recirculation eddies that transport hot combustion gases back to the burner nozzle. This recirculation enhances mixing and heats the incoming gas streams. Models describing these recirculation eddies utilize conservation of momentum and mass assumptions. Increasing the mass flow rate of either fuel or combustion air increases both the jet velocity and momentum for a fixed burner configuration. Thus, differentiating between gas velocity and momentum is important when evaluating flame stability under various operating conditions. The research efforts described herein are part of an ongoing project directed at evaluating the effect of flame aerodynamics on NOx emissions from coal fired burners in a systematic manner. This research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low NOx burners. Experimental studies include both cold- and hot-flow evaluations of the following parameters: primary and secondary inlet air velocity, coal concentration in the primary air, coal particle size distribution and flame holder geometry. Hot-flow experiments will also evaluate the effect of wall temperature on burner performance.
High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems
NASA Astrophysics Data System (ADS)
Wachowiak, M. P.; Sarlo, B. B.; Lambe Foster, A. E.
2014-10-01
Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task.
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial navigation systems (INS) consist of accelerometers, gyroscopes, and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a particle swarm optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of pedestrian navigation devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
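A common formulation of this calibration (not necessarily the paper's exact cost function) searches for per-axis bias and scale factors such that the calibrated measurements have a constant magnitude equal to the local field strength; a minimal PSO sketch, with all coefficients and bounds chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_calibrate(raw, field=1.0, n_particles=40, iters=200):
    """PSO search for per-axis bias b and scale s minimizing the spread of
    |(raw - b) / s| around the known field magnitude."""
    dim = 6                                   # state: [bx, by, bz, sx, sy, sz]
    pos = rng.uniform([-1] * 3 + [0.5] * 3, [1] * 3 + [2] * 3, (n_particles, dim))
    vel = np.zeros_like(pos)

    def cost(p):
        cal = (raw - p[:3]) / p[3:]
        return np.mean((np.linalg.norm(cal, axis=1) - field) ** 2)

    pbest = pos.copy()
    pbest_f = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard inertia + cognitive + social velocity update
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([cost(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest[:3], gbest[3:]
```

As the abstract notes, this kind of search needs no analytic error model: the cost function only asks that calibrated magnitudes match the expected field.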
Zhou, Yi; Zhang, Shaojun; Liu, Ying; Yang, Hongsheng
2014-01-01
Industrial aquaculture wastewater contains large quantities of suspended particles that can be easily broken down physically. Introduction of macro-bio-filters, such as bivalve filter feeders, may offer the potential for treatment of fine suspended matter in industrial aquaculture wastewater. In this study, we employed two kinds of bivalve filter feeders, the Pacific oyster Crassostrea gigas and the blue mussel Mytilus galloprovincialis, to deposit suspended solids from marine fish aquaculture wastewater in flow-through systems. Results showed that the biodeposition rate of suspended particles by C. gigas (shell height: 8.67±0.99 cm) and M. galloprovincialis (shell height: 4.43±0.98 cm) was 77.84±7.77 and 6.37±0.67 mg ind⁻¹·d⁻¹, respectively. The total solid suspension (TSS) deposition rates of oyster and mussel treatments were 3.73±0.27 and 2.76±0.20 times higher than that of the control treatment without bivalves, respectively. The TSS deposition rates of bivalve treatments were significantly higher than the natural sedimentation rate of the control treatment (P<0.001). Furthermore, organic matter and C, N in the sediments of bivalve treatments were significantly lower than those in the sediments of the control (P<0.05). It was suggested that the filter feeders C. gigas and M. galloprovincialis had considerable potential to filter and accelerate the deposition of suspended particles from industrial aquaculture wastewater, and simultaneously yield value-added biological products. PMID:25250730
NASA Astrophysics Data System (ADS)
Howard-Reed, Cynthia; Wallace, Lance A.; Emmerich, Steven J.
Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we measured the deposition of particles ranging from 0.3 to 10 μm in an occupied townhouse and also in an unoccupied test house. Experiments were run with three different sources (cooking with a gas stove, citronella candle, pouring kitty litter), with the central heating and air conditioning (HAC) fan on or off, and with two different types of in-duct filters (electrostatic precipitator and ordinary furnace filter). Particle size, HAC fan operation, and the electrostatic precipitator had significant effects on particle loss rates. The standard furnace filter had no effect. Surprisingly, the type of source (combustion vs. mechanical generation) and the type of furnishings (fully furnished including carpet vs. largely unfurnished including mostly bare floor) also had no measurable effect on the deposition rates of particles of comparable size. With the HAC fan off, average deposition rates varied from 0.3 h⁻¹ for the smallest particle range (0.3-0.5 μm) to 5.2 h⁻¹ for particles greater than 10 μm. Operation of the central HAC fan approximately doubled these rates for particles <5 μm, and increased rates by 2 h⁻¹ for the larger particles. An in-duct electrostatic precipitator increased the loss rates compared to the fan-off condition by factors of 5-10 for particles <2.5 μm, and by a factor of 3 for 2.5-5.0 μm particles. In practical terms, use of the central fan alone could reduce indoor particle concentrations by 25-50%, and use of an in-duct ESP could reduce particle concentrations by 55-85% compared to fan-off conditions.
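Loss rates like those above are conventionally estimated from the exponential decay of indoor concentration after the source stops, C(t) = C0·exp(-(a + k)·t), where a is the air-exchange rate and k the deposition (or deposition-plus-filtration) rate. A minimal log-linear fit, shown as a standard estimation approach rather than the paper's exact procedure:

```python
import numpy as np

def loss_rate(times_h, conc, air_exchange_h=0.0):
    """Fit ln C(t) = ln C0 - (a + k) t and return the loss rate k,
    i.e. the total decay rate minus the air-exchange rate a."""
    slope, _ = np.polyfit(times_h, np.log(conc), 1)
    return -slope - air_exchange_h
```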
The absorptivity and imaginary index of refraction for carbon and methylene blue particles were inferred from the photoacoustic spectra of samples collected on Teflon filter substrates. Three models of varying complexity were developed to describe the photoacoustic signal as a fu...
NASA Technical Reports Server (NTRS)
Stewart, Elwood C.
1961-01-01
The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.
Identification of CpG islands in DNA sequences using statistically optimal null filters
2012-01-01
CpG dinucleotide clusters, also referred to as CpG islands (CGIs), are usually located in the promoter regions of genes in a deoxyribonucleic acid (DNA) sequence. CGIs play a crucial role in gene expression and cell differentiation and, as such, they are normally used as gene markers. Earlier CGI identification methods used the rich CpG dinucleotide content in CGIs as a characteristic measure to identify the locations of CGIs. The fact that the probability of nucleotide G following nucleotide C is greater in a CGI than in a non-CGI is employed by some of the more recent methods. These methods use the difference in transition probabilities between subsequent nucleotides to distinguish a CGI from a non-CGI. These transition probabilities vary with the data being analyzed, and several of them have been reported in the literature, sometimes leading to contradictory results. In this article, we propose a new and efficient scheme for identification of CGIs using statistically optimal null filters. We formulate a new CGI identification characteristic to reliably and efficiently identify CGIs in a given DNA sequence, free of such ambiguities. Our proposed scheme combines maximum signal-to-noise ratio and least squares optimization criteria to estimate the CGI identification characteristic in the DNA sequence. The proposed scheme was tested on a number of DNA sequences taken from human chromosomes 21 and 22, and proved to be highly reliable as well as efficient in identifying the CGIs. PMID:22931396
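For contrast with the null-filter approach, the classic sliding-window criterion of Gardiner-Garden and Frommer (GC content ≥ 50% and observed/expected CpG ratio ≥ 0.6 within a window) can be sketched as a baseline; this is not the abstract's method, only the earlier content-based style of test it improves upon:

```python
def cpg_stats(seq):
    """GC fraction and observed/expected CpG ratio for one window.
    O/E = (#CG * N) / (#C * #G), the Gardiner-Garden & Frommer statistic."""
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    gc = (c + g) / n
    oe = (cpg * n) / (c * g) if c and g else 0.0
    return gc, oe

def is_cgi_window(seq, gc_min=0.5, oe_min=0.6):
    gc, oe = cpg_stats(seq)
    return gc >= gc_min and oe >= oe_min
```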
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply Chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyer. All these parameters depend on the understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
Design Optimization of Pin Fin Geometry Using Particle Swarm Optimization Algorithm
Hamadneh, Nawaf; Khan, Waqar A.; Sathasivam, Saratha; Ong, Hong Choon
2013-01-01
Particle swarm optimization (PSO) is employed to investigate the overall performance of a pin fin. The study examines the effect of governing parameters on overall thermal/fluid performance associated with different fin geometries, including rectangular plate fins as well as square, circular, and elliptical pin fins. Entropy generation minimization (EGM) is employed to combine the effects of thermal resistance and pressure drop within the heat sink. A general dimensionless expression for the entropy generation rate is obtained by considering a control volume around the pin fin, including the base plate, and applying the conservation equations for mass and energy together with the entropy balance. Selected fin geometries are examined for heat transfer, fluid friction, and the minimum entropy generation rate corresponding to different parameters including axis ratio, aspect ratio, and Reynolds number. The results clearly indicate that the preferred fin profile is very dependent on these parameters. PMID:23741525
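The abstract above names particle swarm optimization but gives no algorithmic detail. A minimal global-best PSO sketch for a generic objective is given below; the inertia and acceleration coefficients, the box bounds, and the test objective are illustrative assumptions, not values from the study.

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box bounds [(lo, hi), ...] with a basic global-best PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `f` would be the dimensionless entropy generation rate as a function of the fin's geometric parameters; here it is left generic.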
Guan, Fada; Bronk, Lawrence; Titt, Uwe; Lin, Steven H.; Mirkovic, Dragan; Kerr, Matthew D.; Zhu, X. Ronald; Dinh, Jeffrey; Sobieski, Mary; Stephan, Clifford; Peeler, Christopher R.; Taleei, Reza; Mohan, Radhe; Grosshans, David R.
2015-01-01
The physical properties of particles used in radiation therapy, such as protons, have been well characterized, and their dose distributions are superior to photon-based treatments. However, proton therapy may also have inherent biologic advantages that have not been capitalized on. Unlike photon beams, the linear energy transfer (LET) and hence biologic effectiveness of particle beams varies along the beam path. Selective placement of areas of high effectiveness could enhance tumor cell kill and simultaneously spare normal tissues. However, previous methods for mapping spatial variations in biologic effectiveness are time-consuming and often yield inconsistent results with large uncertainties. Thus the data needed to accurately model relative biological effectiveness to guide novel treatment planning approaches are limited. We used Monte Carlo modeling and high-content automated clonogenic survival assays to spatially map the biologic effectiveness of scanned proton beams with high accuracy and throughput while minimizing biological uncertainties. We found that the relationship between cell kill, dose, and LET is complex and non-unique. Measured biologic effects were substantially greater than in most previous reports, and a non-linear surviving fraction response was observed even for the highest LET values. Extension of this approach could generate the data needed to optimize proton therapy plans incorporating variable RBE. PMID:25984967
Real-time discharge estimation method using 2D dynamic wave model and particle filter
NASA Astrophysics Data System (ADS)
Kim, Y.; Tachikawa, Y.; Shiiba, M.; Kim, S.; Yorozu, K.; Noh, S.
2011-12-01
The Manning roughness coefficient is one of the most important components in the estimation and prediction of river discharge and water level; nevertheless, it is usually determined empirically. Moreover, river discharge data are essential for water resources management and hydrological model calibration. In spite of this importance, river discharge is generally estimated using observed water stage data and a rating curve. As an alternative to obtain more precise discharge data, hydraulic model simulation has been considered in some cases. However, hydraulic model simulation with a deterministic model condition also has limitations in considering time-variant river characteristics, and thus it is difficult to predict the prospective state of river flow. In fact, natural river flow conditions change continuously depending on season, river geomorphology, in-stream vegetation, and so on. In particular, river flow conditions vary strongly during flood events, and this phenomenon has been reported many times. In this study, we aimed to present a method that can reflect the natural river flow characteristics (a time-variant Manning coefficient) within a hydraulic model simulation for more exact analysis and prediction of river discharges, and to test its tracking ability in the case that a disturbed discharge estimated from a hydrological model is utilized as the input condition. In dealing with this problem, we introduce a simple 2D dynamic wave model, which can reflect the geomorphologic effect, and a Monte Carlo sequential data assimilation scheme (or Particle Filtering scheme), which is suited to non-linear systems and able to reflect the time variation of state variables and parameters. Based on the Sequential Importance Resampling (SIR) method within the Particle Filtering scheme, the parameters and state values of the dynamic wave model are sequentially updated using the observed water stage information every hour.
However, we had limited data for verifying our method, so we used a synthetic experiment. The method was applied to the middle reach of the Katsura River in Kyoto, Japan. The length of the modeled river channel is about 10 km, and there are 3 water level stations and 4 weirs within the channel.
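As a rough illustration of the Sequential Importance Resampling scheme the abstract describes, the sketch below runs one SIR update per observation on a toy scalar state. The scalar random-walk model and Gaussian observation likelihood are illustrative assumptions standing in for the authors' 2D dynamic wave model and hourly water stage data.

```python
import math
import random

def sir_step(particles, propagate, likelihood, obs):
    """One SIR update: propagate each particle, weight it by the observation
    likelihood, then resample with probability proportional to the weights."""
    particles = [propagate(x) for x in particles]
    weights = [likelihood(obs, x) for x in particles]
    total = sum(weights)
    if total == 0.0:                      # degenerate weights: fall back to uniform
        weights = [1.0] * len(particles)
        total = float(len(particles))
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

def gaussian_likelihood(obs, x, sigma=0.1):
    # assumed observation model: obs = x + N(0, sigma^2)
    return math.exp(-0.5 * ((obs - x) / sigma) ** 2)
```

In the study, the particle state would also carry the time-variant Manning coefficient so that resampling against hourly water stages updates both states and parameters.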
MODIS SCA assimilation with the particle filter for improving discharge simulation
NASA Astrophysics Data System (ADS)
Thirel, G.; Salamon, P.; Burek, P.; Kalas, M.
2012-04-01
LISFLOOD is a distributed, semi-physical rainfall-runoff model designed for the simulation of hydrological processes in medium to large scale river basins. This model is used at the European Commission Joint Research Centre for studying floods, global hydrological changes and droughts. LISFLOOD is the basis of the European Flood Alert System (EFAS), which is a real-time probabilistic flood prediction system with a lead-time of up to 10 days. The aim of this study is to evaluate the feasibility of assimilating satellite snow data into LISFLOOD. Furthermore, the impact of the assimilation on the snow simulation as well as on discharge will be assessed. For this purpose, MODIS Snow Cover Area (SCA) has been used here. Since cloud coverage limits the availability of MODIS data, we implemented methods for improving the data set, such as combining the data from the two MODIS satellites, merging data from previous days, extrapolating data from neighboring pixels, and extrapolating data from pixels with similar altitudes. The data provided by the MODIS satellites are SCA, i.e., presence or absence of snow, whereas the LISFLOOD model simulates Snow Water Equivalent (SWE). For the conversion from SWE to SCA we employed a snow depletion curve. The assimilation method used is the particle filter. This method is based on multiple perturbed simulations of the model, which at each assimilation time step are either kept or removed based on the similarity between the modeled SCA and the observed SCA (i.e., MODIS data). One major advantage of the particle filter as applied here is that model states are not modified directly, and hence the model conserves the mass balance throughout the assimilation. Tests have been performed on synthetic data (normal LISFLOOD SCA used as observations) on a small basin (1-dimensional problem) and on a larger basin (7-dimensional problem), both located in the Czech Morava River basin.
These experiments showed the positive performance of the assimilation for improving SCA and model discharges. The impact of the observation error used has been assessed, as well as the impact of the frequency of assimilation (from 1 to 7 days). Finally, tests of assimilation of actual MODIS SCA data have been performed on the small and on the large basin (including the same tests on frequency of assimilation and on observation errors). We showed that SCA was improved in all cases, but that discharges were not necessarily improved for high assimilation frequencies or significantly large observation errors. Increasing the dimension of the problem (from 1 to 7) deteriorated the performance of the assimilation system.
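The study above relies on a snow depletion curve to convert modeled SWE into the SCA that MODIS observes, but the abstract does not state its functional form. The sketch below uses a saturating tanh curve with a 20 mm scale purely as an assumed illustration, not the curve used in the study.

```python
import math

def sca_from_swe(swe_mm, swe_scale_mm=20.0):
    """Map snow water equivalent (mm) to fractional snow cover area (0..1)
    via a saturating depletion curve. The tanh form and the 20 mm scale are
    illustrative assumptions: SCA rises quickly with shallow snow and
    saturates toward full cover for deep snowpacks."""
    if swe_mm <= 0.0:
        return 0.0
    return math.tanh(swe_mm / swe_scale_mm)
```

In the particle filter, each perturbed LISFLOOD run's SWE would be pushed through such a curve before being compared with the MODIS SCA observation.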
NASA Astrophysics Data System (ADS)
Hendrix, Charles D.; Vijaya Kumar, B. V. K.
1994-06-01
Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.
The determination and optimization of (rutile) pigment particle size distributions
NASA Technical Reports Server (NTRS)
Richards, L. W.
1972-01-01
A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.
LEE, Chang Jun
2015-01-01
In the field of research associated with plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What previous studies lack, however, is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, while many multi-floor plants have been constructed over the last decade. Therefore, a proper algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc., is formulated from mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve due to its complex nonlinear constraints, which makes conventional MINLP solvers using derivatives of the equations inapplicable. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is illustrated to verify the efficacy of this study. PMID:26027708
Yuan, Liang; Zhu, Junda; Wang, Lina; Brown, A.
2012-01-01
Neurofilaments are long flexible cytoplasmic protein polymers that are transported rapidly but intermittently along the axonal processes of nerve cells. Current methods for studying this movement involve manual tracking of fluorescently tagged neurofilament polymers in videos acquired by time-lapse fluorescence microscopy. Here, we describe an automated tracking method that uses particle filtering to implement a recursive Bayesian estimation of the filament location in successive frames of video sequences. To increase the efficiency of this approach, we take advantage of the fact that neurofilament movement is confined within the boundaries of the axon. We use piecewise cubic spline interpolation to model the path of the axon and then we use this model to limit both the orientation and location of the neurofilament in the particle tracking algorithm. Based on these two spatial constraints, we develop a prior dynamic state model that generates significantly fewer particles than generic particle filtering, and we select an adequate observation model to produce a robust tracking method. We demonstrate the efficacy and efficiency of our method by performing tracking experiments on real time-lapse image sequences of neurofilament movement, and we show that the method performs well compared to manual tracking by an experienced user. This spatially constrained particle filtering approach should also be applicable to the movement of other axonally transported cargoes. PMID:21859599
Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi
2015-05-15
Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity, and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). Forty grays in four fractions was prescribed as the PTV D95. For the PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by the target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased ("V" shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for the lung between FFF and FF beams were negligible.
The optimal leaf margins for MLD, V20 Gy, and V5 Gy for the lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared to −0.9 ± 0.6, −1.1 ± 0.8, and −2.2 ± 1.3 mm, respectively, for 6 MV FF. With the heart inside the radiation field, the mean heart dose showed a V-shaped relationship with leaf margins. The optimal leaf margins were −1.0 ± 0.6 mm for both beams. Dmax to the spinal cord showed no clear trend for changes in leaf margin. Conclusions: The differences in doses to OARs between FFF and FF beams were negligible. Conformity index, modified GI, MLD, lung V20 Gy, lung V5 Gy, and mean heart dose showed a V-shaped relationship with leaf margins. There were no significant differences in the optimal leaf margins to minimize these parameters between the FFF and FF beams. The authors' results suggest that a leaf margin of −1 mm achieves high conformity and minimizes doses to OARs for both FFF and FF beams.
Diesel particle filter and fuel effects on heavy-duty diesel engine emissions.
Ratcliff, Matthew A; Dane, A John; Williams, Aaron; Ireland, John; Luecke, Jon; McCormick, Robert L; Voorhees, Kent J
2010-11-01
The impacts of biodiesel and a continuously regenerated (catalyzed) diesel particle filter (DPF) on the emissions of volatile unburned hydrocarbons, carbonyls, and particle-associated polycyclic aromatic hydrocarbons (PAH) and nitro-PAH were investigated. Experiments were conducted on a 5.9 L Cummins ISB, heavy-duty diesel engine using certification ultra-low-sulfur diesel (ULSD, S ≤ 15 ppm), soy biodiesel (B100), and a 20% blend thereof (B20). Against the ULSD baseline, B20 and B100 reduced engine-out emissions of measured unburned volatile hydrocarbons and PM-associated PAH and nitro-PAH by significant percentages (40% or more for B20, and higher percentages for B100). However, emissions of benzene were unaffected by the presence of biodiesel, and emissions of naphthalene actually increased for B100. This suggests that the unsaturated FAME in soy biodiesel can react to form aromatic rings in the diesel combustion environment. Methyl acrylate and methyl 3-butanoate were observed as significant species in the exhaust for B20 and B100 and may serve as markers of the presence of biodiesel in the fuel. The DPF was highly effective at converting gaseous hydrocarbons and PM-associated PAH and total nitro-PAH. However, conversion of 1-nitropyrene by the DPF was less than 50% for all fuels. Blending of biodiesel caused a slight reduction in engine-out emissions of acrolein, but otherwise had little effect on carbonyl emissions. The DPF was highly effective for conversion of carbonyls, with the exception of formaldehyde. Formaldehyde emissions were increased by the DPF for ULSD and B20. PMID:20886845
NASA Astrophysics Data System (ADS)
Bellini, Nicola; Gu, Yu; Amato, Lorenzo; Eaton, Shane; Cerullo, Giulio; Osellame, Roberto
2012-03-01
We report on the integration of a size-based three-dimensional filter, with micrometer-sized pores, in a commercial microfluidic chip. The filter is fabricated inside an already sealed microfluidic channel using the unique capabilities of two-photon polymerization. This direct-write technique enables integration of the filter by post-processing in a chip that has been fabricated by standard technologies. The filter is located at the intersection of two channels in order to control the amount of flow passing through the filter. Tests with a suspension of 3-μm polystyrene spheres in a Rhodamine 6G solution show that 100% of the spheres are stopped, while the fluorescent molecules are transmitted through the filter. We demonstrate operation up to a period of 25 minutes without any evidence of clogging. Moreover, the filter can be cleaned and reused by reversing the flow.
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event-related potentials in a subject's electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
Particle Filter with Integrated Voice Activity Detection for Acoustic Source Tracking
NASA Astrophysics Data System (ADS)
Lehmann, Eric A.; Johansson, Anders M.
2006-12-01
In noisy and reverberant environments, the problem of acoustic source localisation and tracking (ASLT) using an array of microphones presents a number of challenging difficulties. One of the main issues when considering real-world situations involving human speakers is the temporally discontinuous nature of speech signals: the presence of silence gaps in the speech can easily misguide the tracking algorithm, even in practical environments with low to moderate noise and reverberation levels. A natural extension of currently available sound source tracking algorithms is the integration of a voice activity detection (VAD) scheme. We describe a new ASLT algorithm based on a particle filtering (PF) approach, where VAD measurements are fused within the statistical framework of the PF implementation. Tracking accuracy results for the proposed method are presented on the basis of synthetic audio samples generated with the image method, whereas performance results obtained with a real-time implementation of the algorithm, using real audio data recorded in a reverberant room, are published elsewhere. Compared to a previously proposed PF algorithm, the experimental results demonstrate the improved robustness of the method described in this work when tracking sources emitting real-world speech signals, which typically involve significant silence gaps between utterances.
Usefulness of Nonlinear Interpolation and Particle Filter in Zigbee Indoor Positioning
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Wu, Helei; Uradziński, Marcin
2014-12-01
The key to a fingerprint positioning algorithm is establishing an effective fingerprint database of received signal strength indicator (RSSI) values from different reference nodes. The traditional method is to set up multiple sampling points across the localization area for calibration and to collect a large number of samples, which is very time consuming. With a Zigbee sensor network as the platform, and considering the influence of interference on the positioning signal, we propose an improved algorithm that builds a virtual fingerprint database by polynomial interpolation, while the preliminary estimate is refined by a particle filter. Experimental results show that this method can generate a fine-grained localization database quickly and simply, and improve the positioning accuracy at the same time.
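As a sketch of the interpolation step the abstract describes, the code below densifies a few RSSI calibration samples into a "virtual" fingerprint database by evaluating a Lagrange polynomial between them. The one-dimensional setting, the grid step, and the sample values are illustrative assumptions; the paper's exact interpolation scheme is not specified in the abstract.

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def virtual_fingerprints(positions, rssi, step=0.25):
    """Densify sparse RSSI calibration samples along one axis into a
    fine-grained virtual database (positions assumed sorted ascending)."""
    grid = []
    x = positions[0]
    while x <= positions[-1] + 1e-9:
        grid.append((x, lagrange_interp(positions, rssi, x)))
        x += step
    return grid
```

The virtual database then feeds the fingerprint matcher, whose raw position estimate would be smoothed by the particle filter.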
Qiu, Liping; Zhang, Shoubin; Wang, Guangwei; Du, Mao'an
2010-10-01
The performance and nitrification properties of three BAFs, with ceramic, zeolite, and carbonate media, respectively, were investigated to evaluate the feasibility of employing these materials as biological aerated filter media. All three BAFs showed promising COD and SS removal performance at an influent pH of 6.5-8.1, an air-liquid ratio of 5:1, and an HRT of 1.25-2.5 h. Ammonia removal in the BAFs was inhibited when the organic and ammonia nitrogen loadings were increased, but promoted effectively with increasing pH. Zeolite and carbonate were more suitable for nitrification than the ceramic medium when the influent pH was below 6.5. It is feasible to employ these media in a BAF, and adequate bed volume has to be supplied to satisfy the requirement of removing COD, SS, and ammonia nitrogen simultaneously in a biofilter. Carbonate, with its strong buffer capacity, is more suitable for treating wastewater with variable or lower pH. PMID:20483593
Park, Jae Hong; Yoon, Ki Young; Na, Hyungjoo; Kim, Yang Seon; Hwang, Jungho; Kim, Jongbaeg; Yoon, Young Hun
2011-09-01
We grew multi-walled carbon nanotubes (MWCNTs) on a glass fiber air filter using thermal chemical vapor deposition (CVD) after the filter was catalytically activated with a spark discharge. After the CNT deposition, filtration and antibacterial tests were performed with the filters. Potassium chloride (KCl) particles (<1 μm) were used as the test aerosol particles, and their number concentration was measured using a scanning mobility particle sizer. Antibacterial tests were performed using the colony counting method, and Escherichia coli (E. coli) was used as the test bacteria. The results showed that the CNT deposition increased the filtration efficiency of nano and submicron-sized particles, but did not increase the pressure drop across the filter. When a pristine glass fiber filter that had no CNTs was used, the particle filtration efficiencies at particle sizes under 30 nm and near 500 nm were 48.5% and 46.8%, respectively. However, the efficiencies increased to 64.3% and 60.2%, respectively, when the CNT-deposited filter was used. The reduction in the number of viable cells was determined by counting the colony forming units (CFU) of each test filter after contact with the cells. The pristine glass fiber filter was used as a control, and 83.7% of the E. coli were inactivated on the CNT-deposited filter. PMID:21767869
Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters
Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk
2011-04-15
Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for FFF beams applicable to current treatment planning systems are needed. In this work, a multisource model for the FFF beam was designed and the involved model parameters were optimized. Methods: The model is based on a previous three-source model proposed by Yang et al. ["A three-source model for the calculation of head scatter factors," Med. Phys. 29, 2024-2033 (2002)]. An off-axis ratio (OAR) of photon fluence was introduced into the primary source term to generate cone-shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40×40 cm² field with the same optimization technique, but a new method to acquire gradient terms for the OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3×3 to 40×40 cm² field sizes at 6 and 10 MV from a TrueBeam STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements from a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% for the 6 and 10 MV beams, except for some low dose regions for larger field sizes. A slight overestimation, by 1%-4%, was seen in the lower penumbra region near the field edge for the large field sizes. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreement between measured and calculated dose distributions.
The model is easily applicable to any other linear accelerator using FFF beams as the required data include only the measured PDD, dose profiles, and output factors for various field sizes, which are easily acquired during conventional beam commissioning process.
Generating Optimal Initial Conditions for Smoothed Particle Hydrodynamics Simulations
NASA Astrophysics Data System (ADS)
Diehl, S.; Rockefeller, G.; Fryer, C. L.; Riethmiller, D.; Statler, T. S.
2015-12-01
We review existing smoothed particle hydrodynamics setup methods and outline their advantages, limitations, and drawbacks. We present a new method for constructing initial conditions for smoothed particle hydrodynamics simulations, which may also be of interest for N-body simulations, and demonstrate this method on a number of applications. This new method is inspired by adaptive binning techniques using weighted Voronoi tessellations. Particles are placed and iteratively moved based on their proximity to neighbouring particles and the desired spatial resolution. This new method can satisfy arbitrarily complex spatial resolution requirements.
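A toy one-dimensional analogue of the iterative, proximity-based particle placement described above can be sketched as follows. The real method works in three dimensions with weighted Voronoi tessellations and spatially varying resolution targets, so this fixed-endpoint relaxation toward uniform spacing only illustrates the idea of moving particles based on their neighbours.

```python
def relax_1d(xs, iters=200, step=0.5):
    """Iteratively nudge each interior particle toward the midpoint of its two
    neighbours; endpoints stay fixed. The spacing tends toward uniform, a toy
    analogue of proximity-based particle placement for SPH initial conditions."""
    xs = sorted(xs)
    for _ in range(iters):
        new = xs[:]
        for i in range(1, len(xs) - 1):
            mid = 0.5 * (xs[i - 1] + xs[i + 1])
            new[i] = xs[i] + step * (mid - xs[i])
        xs = new
    return xs
```

A nonuniform target resolution could be mimicked by weighting the midpoint by a desired local spacing, in the spirit of the weighted tessellations the paper uses.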
Gupta, A.; Biswas, P.; Monson, P.R.; Novick, V.J.
1993-07-01
The effect of humidity, particle hygroscopicity, and size on the mass loading capacity of glass fiber high efficiency particulate air filters was studied. Above the deliquescent point, the pressure drop across the filter increased nonlinearly with areal loading density (mass collected/filtration area) of a NaCl aerosol, thus significantly reducing the mass loading capacity of the filter compared to dry hygroscopic or nonhygroscopic particle mass loadings. The specific cake resistance K₂ was computed for different test conditions and used as a measure of the mass loading capacity. K₂ was found to decrease with increasing humidity for nonhygroscopic aluminum oxide particles and for hygroscopic NaCl particles (at humidities below the deliquescent point). It is postulated that an increase in humidity leads to the formation of a more open particulate cake which lowers the pressure drop for a given mass loading. A formula for predicting K₂ for lognormally distributed aerosols (parameters obtained from impactor data) was derived. The resistance factor, R, calculated using this formula was compared to the theoretical R calculated using the Rudnick-Happel expression. For the nonhygroscopic aluminum oxide, the agreement was good but for the hygroscopic sodium chloride, due to large variation in the cake porosity estimates, the agreement was poor. 17 refs., 6 figs., 3 tabs.
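The specific cake resistance K₂ used in the abstracts above can be estimated from pressure-drop measurements if one assumes the standard linear cake-filtration relation ΔP = ΔP₀ + K₂·U·W, with U the face velocity and W the areal loading density. That relation and the synthetic numbers below are assumptions for illustration, not the report's own data or formula.

```python
def estimate_k2(areal_loadings, pressure_drops, face_velocity):
    """Estimate K2 as the least-squares slope of pressure drop vs areal
    loading W, divided by the face velocity U, under the assumed linear
    cake-filtration relation dP = dP0 + K2 * U * W."""
    n = len(areal_loadings)
    mw = sum(areal_loadings) / n
    md = sum(pressure_drops) / n
    num = sum((w - mw) * (d - md) for w, d in zip(areal_loadings, pressure_drops))
    den = sum((w - mw) ** 2 for w in areal_loadings)
    return (num / den) / face_velocity
```

A nonlinear ΔP-vs-W record, as reported for deliquesced NaCl, would show up as a poor linear fit and a loading-dependent apparent K₂.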
Gupta, A.
1992-09-01
The effect of humidity, particle hygroscopicity and size on the mass loading capacity of glass fiber HEPA filters has been studied. At humidities above the deliquescent point, the pressure drop across the HEPA filter increased non-linearly with the areal loading density (mass collected/filtration area) of NaCl aerosol, thus significantly reducing the mass loading capacity of the filter compared to dry hygroscopic or non-hygroscopic particle mass loadings. The specific cake resistance, K{sub 2}, has been computed for different test conditions and used as a measure of the mass loading capacity. K{sub 2} was found to decrease with increasing humidity for the non-hygroscopic aluminum oxide particles and the hygroscopic NaCl particles (at humidities below the deliquescent point). It is postulated that an increase in humidity leads to the formation of a more open particulate cake which lowers the pressure drop for a given mass loading. A formula for predicting K{sub 2} for lognormally distributed aerosols (parameters obtained from impactor data) is derived. The resistance factor, R, calculated using this formula was compared to the theoretical R calculated using the Rudnick-Happel expression. For the non-hygroscopic aluminum oxide, the agreement was good, but for the hygroscopic sodium chloride, due to large variation in the cake porosity estimates, the agreement was poor.
Optimal ensemble size of ensemble Kalman filter in sequential soil moisture data assimilation
NASA Astrophysics Data System (ADS)
Yin, Jifu; Zhan, Xiwu; Zheng, Youfei; Hain, Christopher R.; Liu, Jicheng; Fang, Li
2015-08-01
The ensemble Kalman filter (EnKF) has been extensively applied in sequential soil moisture data assimilation to improve the land surface model performance and in turn the weather forecast capability. Usually, the ensemble size of EnKF is determined with limited sensitivity experiments. Thus, the optimal ensemble size may have never been reached. In this work, based on a series of mathematical derivations, we demonstrate that the maximum efficiency of the EnKF for assimilating observations into the models could be reached when the ensemble size is set to 12. Simulation experiments are designed in this study with ensemble sizes of 2, 5, 12, 30, 50, 100, and 300 to support the mathematical derivations. All the simulations are conducted from 1 June to 30 September 2012 over the southeast USA (90°W, 30°N to 80°W, 40°N) at 25 km resolution. We found that the simulations are consistent with the mathematical derivation. This optimal ensemble size may have theoretical implications on the implementation of EnKF in other sequential data assimilation problems.
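For context, a stochastic EnKF analysis step of the kind used in such assimilation experiments can be sketched as follows. The toy three-variable state, observation operator, and noise levels are illustrative; the ensemble size of 12 merely echoes the abstract's headline value:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """Stochastic (perturbed-observation) EnKF analysis step for a
    linear observation operator H. A generic sketch, not the paper's
    derivation."""
    rng = np.random.default_rng(42)
    N = ensemble.shape[0]                       # ensemble size
    X = ensemble - ensemble.mean(axis=0)        # state anomalies
    Y = X @ H.T                                 # observed anomalies
    Pyy = Y.T @ Y / (N - 1) + np.diag(obs_var)  # innovation covariance
    Pxy = X.T @ Y / (N - 1)                     # state-obs covariance
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    # each member is updated with its own perturbed observation
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (N, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# toy example: 3-variable state, one observed component, 12 members
H = np.array([[1.0, 0.0, 0.0]])
rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, (12, 3))
posterior = enkf_update(prior, obs=np.array([0.5]),
                        obs_var=np.array([0.1]), H=H)
```

After the update, the ensemble spread in the observed component shrinks, reflecting the information gained from the observation.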
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
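The SVD-based dimension reduction described above can be sketched minimally. The influence-matrix sizes, values, and variable names below are hypothetical placeholders, not the paper's actual engine model:

```python
import numpy as np

# Hypothetical influence matrix: rows are engine outputs, columns are
# health parameters (sizes and values are illustrative only).
rng = np.random.default_rng(1)
M = rng.normal(size=(6, 10))          # 6 outputs, 10 health parameters

# Keep the k dominant right-singular directions so a k-dimensional
# tuning vector reproduces the health-parameter effects as closely as
# possible in a least-squares sense.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 3
V_star = Vt[:k].T                     # 10 x k map: tuners -> health params

# best k-dim approximation of an arbitrary health-parameter vector h
h = rng.normal(size=10)
q = V_star.T @ h                      # reduced tuning vector (k-dim)
h_approx = V_star @ q                 # orthogonal projection of h
```

The reduced vector `q` is low-dimensional enough to be appended to a Kalman filter state, which is the role the tuning vector plays in the technique above.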
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2005-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
Autostereoscopic display with 60 ray directions using LCD with optimized color filter layout
NASA Astrophysics Data System (ADS)
Koike, Takafumi; Oikawa, Michio; Utsugi, Kei; Kobayashi, Miho; Yamasaki, Masami
2007-02-01
We developed a mobile-size integral videography (IV) display that reproduces 60 ray directions. IV is an autostereoscopic video image technique based on integral photography (IP). The IV display consists of a 2-D display and a microlens array. The maximal spatial frequency (MSF) and the number of rays appear to be the most important factors in producing realistic autostereoscopic images. Lens pitch usually determines the MSF of IV displays. The lens pitch and pixel density of the 2-D display determine the number of rays it reproduces. There is a trade-off between the lens pitch and the pixel density. The shape of an elemental image determines the shape of the viewing area. We developed an IV display based on these relationships. The IV display consists of a 5-inch 900-dpi liquid crystal display (LCD) and a microlens array. The IV display has 60 ray directions with 4 vertical rays and a maximum of 18 horizontal rays. We optimized the color filter on the LCD to reproduce 60 rays. The resolution of the display is 256x192, and the viewing angle is 30 degrees. These parameters are sufficient for mobile game use. Users can interact with the IV display by using a control pad.
Hafnium and neodymium isotope composition of seawater and filtered particles from the Southern Ocean
NASA Astrophysics Data System (ADS)
Stichel, T.; Frank, M.; Haley, B. A.; Rickli, J.; Venchiarutti, C.
2009-12-01
Radiogenic hafnium (Hf) and neodymium (Nd) isotopes have been used as tracers for past continental weathering regimes and ocean circulation. To date, however, there are only very few data available on dissolved Hf isotope compositions in present-day seawater and there is a complete lack of particulate data. During expedition ANTXXIV/3 (February to April 2008) we collected particulate samples (> 0.8 µm), which were obtained by filtrations of 270-700 liters of water. The samples were separated from the filters, completely dissolved, and purified for Nd and Hf isotope determination by TIMS and MC-ICPMS, respectively. In addition, we collected filtered (0.45 µm) seawater samples (20-120 liters) to determine the dissolved isotopic composition of Hf and Nd. The Hf isotope composition of the particulate fraction in the Drake Passage ranged from 0 to -28 in εHf and is thus similar to that observed in core top sediments from the entire Southern Ocean in a previous study. The most unradiogenic and isotopically homogeneous Hf isotope compositions in our study were found near the Antarctic Peninsula. Most of the stations north of the Southern Antarctic Circumpolar Front (SACC) show a large variation in εHf between 0 and -23 within the water column of one station and between the stations. The locations at which these Hf isotope compositions were measured are mostly far away from the potential source areas. Nd, in contrast, was nearly absent throughout the entire sample set and the only measurable εNd data ranged from 0 to -7, which is in good agreement with the sediment data in that area. The dissolved seawater isotopic compositions of both Hf and Nd show only minor variance (εHf = 4.2 to 4.7 and εNd = -8.8 to -7.6, respectively). These patterns in Hf isotopes and the nearly complete absence of Nd indicate that the particulate fraction contains little terrigenous material but is almost entirely dominated by biogenic opal.
The homogeneous and relatively radiogenic Hf isotope values in the dissolved fraction are interpreted as a result of large-scale water mass mixing, whereas the highly unradiogenic values observed in the particles more likely represent scavenged Hf released by physical weathering on the Antarctic continent. Our data therefore suggest a high scavenging efficiency of dissolved Hf onto opal, which is not observed for Nd. These results imply that the Southern Ocean is an efficient sink for dissolved Hf, resulting in a very short residence time of Hf in the Southern Ocean.
Enhancing Speech Recognition Using Improved Particle Swarm Optimization Based Hidden Markov Model
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set, and IP-HMM performs the recognition; the novelty lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy. PMID:25478588
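The median-filter denoising step named in the pipeline above can be sketched minimally. The window width and test signal are illustrative choices:

```python
import numpy as np

def median_denoise(signal, width=5):
    """Sliding-window median filter: removes impulsive spikes while
    preserving edges (window width is an illustrative choice)."""
    pad = width // 2
    padded = np.pad(signal, pad, mode='edge')   # extend edges for the window
    return np.array([np.median(padded[i:i + width])
                     for i in range(len(signal))])

# a clean low-level signal contaminated by two impulsive spikes
noisy = np.array([0.0, 0.1, 9.0, 0.2, 0.1, 0.0, 8.5, 0.1])
clean = median_denoise(noisy)   # spikes are suppressed
```

Unlike a moving average, the median discards the outlier entirely rather than smearing it across neighbouring samples, which is why it is a common first stage before feature extraction.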
Optimized qualification protocol on particle cleanliness for EUV mask infrastructure
NASA Astrophysics Data System (ADS)
van der Donck, J. C. J.; Stortelder, J. K.; Derksen, G. B.
2011-11-01
With the market introduction of the NXE:3100, Extreme Ultra Violet Lithography (EUVL) enters a new stage. Now infrastructure in the wafer fabs must be prepared for new processes and new materials. The infrastructure for masks poses a particular challenge: because of the absence of a pellicle, reticle front sides are exceptionally vulnerable to particles. It was also shown that particles on the backside of a reticle may cause tool downtime. These effects set extreme requirements on the cleanliness level of the fab infrastructure for EUV masks. The cost of EUV masks justifies the use of equipment that is qualified for particle cleanliness. Until now, equipment qualification on particle cleanliness has not been carried out with statistically based qualification procedures. Since we are dealing with extremely clean equipment, the number of observed particles is expected to be very low. These particle levels can only be measured by repetitively cycling a mask substrate in the equipment. Recent work in the EUV AD-tool presents data on particles added during load/unload cycles, reported as the number of Particles per Reticle Pass (PRP). In the interpretation of the data, variation due to deposition statistics is not taken into account. In measurements with low numbers of added particles the standard deviation in the PRP number can be large. An additional issue is that particles added in the routing outside the equipment may have a large impact on the test result. The mismatch between a single handling step outside the tool and the multiple cycling in the equipment makes accurate measurement rather complex. The low number of expected particles, the large variation in results and the combined effect of particles added inside and outside the equipment justify putting good effort into making a test plan. Without a proper statistical background, tests may not be suitable for proving that equipment qualifies for the limiting cleanliness levels.
Other risks are that a test may require an unrealistically high testing effort or that equipment can only pass a test when it meets unrealistically high cleanliness levels. TNO developed a testing model which enables setting up a qualification test on particle cleanliness for EUV mask infrastructure. It is based on particle deposition models with Poisson statistics and an acceptance sampling test method. The test model combines the single contribution of the routing outside the equipment and the contribution of multiple cycling in the equipment. This model enables designing a test with minimal testing effort that proves that equipment meets a required cleanliness level. Furthermore, it gives insight into other equipment requirements on reliability.
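A minimal acceptance-sampling calculation under Poisson deposition statistics, in the spirit of the model described above; the function names, the additivity of the routing contribution, and the numerical values are illustrative assumptions, not TNO's actual model:

```python
from math import exp, factorial

def poisson_cdf(c, mean):
    """P(X <= c) for X ~ Poisson(mean)."""
    return sum(mean ** k * exp(-mean) / factorial(k) for k in range(c + 1))

def accept_probability(prp, n_cycles, accept_limit, route_mean=0.0):
    """Chance a qualification test passes: particles added over n_cycles
    in the tool (prp particles per reticle pass) plus a single routing
    contribution outside the tool, both modelled as Poisson counts."""
    total_mean = n_cycles * prp + route_mean
    return poisson_cdf(accept_limit, total_mean)

# e.g. a tool adding 0.01 particles per pass, 100 passes, at most 3
# observed particles allowed, routing adding 0.5 particles on average
p_pass = accept_probability(prp=0.01, n_cycles=100, accept_limit=3,
                            route_mean=0.5)
```

Sweeping `n_cycles` and `accept_limit` in such a calculation is one way to find the minimal testing effort that still discriminates a clean tool from a dirty one.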
Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes
NASA Astrophysics Data System (ADS)
Sanal, M.; Kuloor, R.; Sagayaraj, M. J.
In miniaturized radars, where power, real estate, speed and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, either for a single-stage mismatched filter or for a two-stage approach, i.e. a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGAs; this often becomes a design challenge for system-on-chip (SoC) requirements. The required number of multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the FPGA logic used for FIR filters by reducing the number of distinct weight multipliers. Since k-means clustering is an iterative algorithm initialized at random, different runs produce different clusterings of the weights, and it may even happen that a smaller number of multipliers and a shorter filter provide a better PSR.
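The tap-weight clustering idea can be sketched as a one-dimensional Lloyd iteration; the filter length, cluster count, and random weights below are illustrative:

```python
import numpy as np

def cluster_taps(weights, k, n_iter=100, seed=0):
    """Replace each FIR tap weight by its cluster centroid so the filter
    needs only k distinct multipliers (plain 1-D k-means iteration)."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(weights, size=k, replace=False)
    labels = np.zeros(len(weights), dtype=int)
    for _ in range(n_iter):
        # assign each tap to its nearest centroid
        labels = np.abs(weights[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = weights[labels == j]
            if members.size:                 # leave empty clusters unchanged
                centroids[j] = members.mean()
    return centroids[labels]

# hypothetical 32-tap mismatched filter with arbitrary real weights
rng = np.random.default_rng(3)
taps = rng.normal(size=32)
quantized = cluster_taps(taps, k=8)
n_multipliers = len(np.unique(quantized))    # at most 8 distinct values
```

In an FPGA implementation the 8 centroid values would each feed one shared multiplier, with adders routing the taps to the right product, trading a few dB of PSR for the reduced logic.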
Barone, Teresa L; Storey, John Morse; Domingo, Norberto
2010-01-01
A field-aged, passive diesel particulate filter (DPF) employed in a school bus retrofit program was evaluated for emissions of particle mass and number concentration before, during and after regeneration. For the particle mass measurements, filter samples were collected for gravimetric analysis with a partial flow sampling system, which sampled proportionally to the exhaust flow. Total number concentration and number-size distributions were measured by a condensation particle counter and scanning mobility particle sizer, respectively. The results of the evaluation show that the number concentration emissions decreased as the DPF became loaded with soot. However, after soot removal by regeneration, the number concentration emissions were approximately 20 times greater, which suggests the importance of the soot layer in helping to trap particles. Contrary to the number concentration results, particle mass emissions decreased from 6 ± 1 mg/hp-hr before regeneration to 3 ± 2 mg/hp-hr after regeneration. This indicates that nanoparticles with diameter less than 50 nm may have been emitted after regeneration since these particles contribute little to the total mass. Overall, average particle emission reductions of 95% by mass and 10,000-fold by number concentration after four years of use provided evidence of the durability of a field-aged DPF. In contrast to previous reports for new DPFs in which elevated number concentrations occurred during the first 200 seconds of a transient cycle, the number concentration emissions were elevated during the second half of the heavy-duty federal test procedure when high speed was sustained. This information is relevant for the analysis of mechanisms by which particles are emitted from field-aged DPFs.
Barone, Teresa L; Storey, John M E; Domingo, Norberto
2010-08-01
A field-aged, passive diesel particulate filter (DPF) used in a school bus retrofit program was evaluated for emissions of particle mass and number concentration before, during, and after regeneration. For the particle mass measurements, filter samples were collected for gravimetric analysis with a partial flow sampling system, which sampled proportionally to the exhaust flow. A condensation particle counter and scanning mobility particle sizer measured total number concentration and number-size distributions, respectively. The results of the evaluation show that the number concentration emissions decreased as the DPF became loaded with soot. However, after soot removal by regeneration, the number concentration emissions were approximately 20 times greater, which suggests the importance of the soot layer in helping to trap particles. Contrary to the number concentration results, particle mass emissions decreased from 6 +/- 1 mg/hp-hr before regeneration to 3 +/- 2 mg/hp-hr after regeneration. This indicates that nanoparticles with diameters less than 50 nm may have been emitted after regeneration because these particles contribute little to the total mass. Overall, average particle emission reductions of 95% by mass and 10,000-fold by number concentration after 4 yr of use provided evidence of the durability of a field-aged DPF. In contrast to previous reports for new DPFs in which elevated number concentrations occurred during the first 200 sec of a transient cycle, the number concentration emissions were elevated during the second half of the heavy-duty Federal Test Procedure (FTP) when high speed was sustained. This information is relevant for the analysis of mechanisms by which particles are emitted from field-aged DPFs. PMID:20842937
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-10-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature. PMID:25285268
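A minimal global-best PSO of the kind referred to above, shown here on a simple sphere objective; the inertia and acceleration coefficients are common textbook choices, not the authors' settings:

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200, seed=0):
    """Minimal global-best PSO with inertia weight (a generic sketch,
    not the design-optimization code used in the paper)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# sphere objective with optimum at (1, 1, 1)
best_x, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3)
```

Note the point the abstract makes: `f` is treated as a black box, so the same loop applies unchanged to a design-criterion objective.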
NASA Astrophysics Data System (ADS)
Manoli, Gabriele; Rossi, Matteo; Pasetto, Damiano; Deiana, Rita; Ferraris, Stefano; Cassiani, Giorgio; Putti, Mario
2015-02-01
The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first one applied to the ERT measurements, the second one to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
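A generic SIR update step, the data assimilation scheme named above, can be sketched as follows; the scalar random-walk model and Gaussian likelihood are illustrative stand-ins for the Richards-equation and ERT forward models:

```python
import numpy as np

def sir_step(particles, weights, obs, forward, likelihood, rng):
    """One Sequential Importance Resampling step: propagate the ensemble,
    reweight by the measurement likelihood, then resample (generic
    sketch, not the coupled hydrogeophysical model of the paper)."""
    particles = forward(particles, rng)                  # model propagation
    weights = weights * likelihood(obs, particles)       # importance weights
    weights = weights / weights.sum()
    # multinomial resampling to counter weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# toy scalar example: random-walk state observed directly with noise
rng = np.random.default_rng(0)
forward = lambda x, rng: x + rng.normal(0.0, 0.1, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2 / 0.2 ** 2)
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = sir_step(particles, weights, 0.8,
                              forward, likelihood, rng)
```

In the coupled setting described above, `forward` would be the Richards-equation solver and `likelihood` would compare simulated ERT potentials (via Archie's law) with the measured ones; parameters would be appended to the particle state.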
Evacuation dynamic and exit optimization of a supermarket based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Li, Lin; Yu, Zhonghai; Chen, Yang
2014-12-01
A modified particle swarm optimization algorithm is proposed in this paper to investigate the dynamics of pedestrian evacuation from a fire in a public building, a supermarket with multiple exits and configurations of counters. Two distinctive evacuation behaviours, featured by the shortest-path strategy and the following-up strategy, are simulated in the model, accounting for different categories of age and sex of the pedestrians along with the impact of the fire, including gases, heat and smoke. To examine the relationship between the progress of the overall evacuation and the layout and configuration of the site, a series of simulations are conducted in various settings: without a fire and with a fire at different locations. These experiments reveal a general two-phase pattern of evacuation, i.e., a steep section followed by a flat section, in addition to the impact of the presence of multiple exits and their geographic locations on the evacuation. For the study site, our simulations indicate deficiencies in the current layout and configuration during evacuation and verify that the proposed solutions resolve them. More specifically, adding an exit between Exit 6 and Exit 7 and expanding the corridor at the right side of Exit 7 would significantly reduce the evacuation time and improve the effectiveness of the evacuation from the site.
Imran, Muhammad; Hashim, Rathiah; Noor Elaiza, Abd Khalid; Irtaza, Aun
2014-01-01
One of the major challenges for CBIR is to bridge the gap between low level features and high level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of the SVM based RF is often poor. To improve the performance of RF, this paper has proposed a new technique, namely, PSO-SVM-RF, which combines SVM based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM based RF and also to minimize the user interaction with the system by minimizing the number of RF iterations. The PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved within a small number of iterations. PMID:25121136
Particle Swarm Optimization Algorithm for Optimizing Assignment of Blood in Blood Banking System
Olusanya, Micheal O.; Arasomwan, Martins A.; Adewumi, Aderemi O.
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) recently introduced in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result therefore can serve as a benchmark and basis for decision support tools for real-life deployment. PMID:25815046
Imran, Muhammad; Hashim, Rathiah; Noor Elaiza, Abd Khalid; Irtaza, Aun
2014-01-01
One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machines (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of the proposed technique are to enhance the performance of SVM-based RF and to minimize user interaction with the system by minimizing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The experimental results showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved in a small number of iterations. PMID:25121136
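The PSO component referenced above follows the canonical global-best update rule. A minimal sketch (the inertia and acceleration coefficients, the search bounds, and the sphere test function are illustrative assumptions, not the paper's SVM-coupled fitness):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function in 3 dimensions
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```

In PSO-SVM-RF the fitness would instead score candidate SVM configurations against the user's feedback.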
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) for solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an important step, since it allows each particle in PSO to represent a schedule in JSP. Three procedures, namely Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
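Of the three procedures compared, random-key encoding is the simplest to sketch: each particle is a vector of continuous keys, and sorting the indices by key value yields a permutation; for JSP, indices taken modulo the number of jobs give a job-repetition sequence. The helper names below are illustrative, not from the paper:

```python
def random_key_decode(position):
    """Random-key decoding: sort indices by their continuous key values,
    yielding a permutation that serves as an operation sequence."""
    return sorted(range(len(position)), key=lambda i: position[i])

def decode_to_jobs(position, n_jobs):
    """JSP variant: each sorted index mod n_jobs is a job id, so every job
    appears len(position) / n_jobs times (one entry per operation)."""
    return [i % n_jobs for i in random_key_decode(position)]

# A particle position in [0, 1)^5 decodes to a permutation...
perm = random_key_decode([0.71, 0.05, 0.93, 0.42, 0.18])
# ...and, for a 2-job problem, to a job-repetition sequence
seq = decode_to_jobs([0.71, 0.05, 0.93, 0.42, 0.18], 2)
```

Because any continuous position decodes to a valid sequence, the standard PSO velocity update can be applied unchanged.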
NASA Astrophysics Data System (ADS)
Walker, Eric; Rayman, Sean; White, Ralph E.
2015-08-01
A particle filter (PF) is shown to be more accurate than non-linear least squares (NLLS) and an unscented Kalman filter (UKF) for predicting the remaining useful life (RUL) and time until end-of-discharge voltage (EODV) of a lithium-ion battery. The three algorithms (PF, UKF, and NLLS) track four states, with correct initial estimates of the states and 5% variation on the initial state estimates. The four states are data-driven equivalent-circuit properties, or lithium concentrations and electroactive surface areas, depending on the model. The more accurate prediction performance of the PF over NLLS and the UKF is reported for three lithium-ion battery models: a data-driven empirical model, an equivalent circuit model, and a physics-based single particle model.
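A bootstrap particle filter in its simplest form, for intuition; the scalar random-walk model and its noise levels here are invented for illustration (the paper's models track four states):

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.05, r=0.2, seed=1):
    """Bootstrap particle filter for a scalar random-walk state x_k = x_{k-1} + v,
    v ~ N(0, q^2), observed as y_k = x_k + w, w ~ N(0, r^2).
    Returns the filtered posterior mean at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate each particle through the process model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to fight weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

est = bootstrap_pf([1.0, 1.1, 0.9, 1.0, 1.05])
```

For RUL prediction, the propagated particles would be run forward through the battery model until each crosses the EODV threshold, giving a distribution over failure times.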
Optimization of HTR fuel design to reduce fuel particle failures
Boer, B.; Kloosterman, J. L.; Ougouag, A. M.
2006-07-01
In this paper, an attempt is made to formulate criteria that can be used in the redesign of HTR fuel. A simplified fuel performance model is set up to calculate the fuel particle failure probability as a function of the TRISO particle design and the particle packing fraction. These models require knowledge of the fast neutron dose, the fuel burnup level, and the fuel temperature. In this paper, neutronic, thermal-hydraulic, and burnup calculations for the PBMR 400 MWth design are used to provide the fuel performance model with the required data. It was found that the failure probability increases considerably with increasing number of particles and reactor operating temperature, but decreases with a larger buffer layer. (authors)
Luo, Yingting; Zhu, Yunmin; Luo, Dandan; Zhou, Jie; Song, Enbin; Wang, Donghua
2008-01-01
This paper proposes a new distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. It is proved that, under a mild condition, the fused state estimate is equivalent to the centralized Kalman filtering estimate using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations, including measurements with a false-alarm probability as a special case, as well as to randomly variant dynamic systems with multiple models. Numerical examples are given which support our analysis and show the significant performance loss of ignoring the randomness of the parameter matrices.
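The centralized-equivalence result can be illustrated in the simplest scalar case with fixed (non-random) parameter matrices: processing two independent sensor measurements sequentially with Kalman updates gives exactly the centralized, information-form fusion of both at once. The numbers are arbitrary:

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update with observation model H = 1."""
    K = P / (P + R)                 # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Sequential processing: prior (x=0, P=10), then sensor 1, then sensor 2
x1, P1 = kf_update(0.0, 10.0, 1.0, 2.0)
x2, P2 = kf_update(x1, P1, 1.4, 2.0)

# Centralized fusion in information form: precisions add,
# and the estimate is the precision-weighted mean
info = 1 / 10.0 + 1 / 2.0 + 1 / 2.0
Pc = 1 / info
xc = Pc * (0.0 / 10.0 + 1.0 / 2.0 + 1.4 / 2.0)
```

The paper's contribution is showing that an analogous equivalence still holds when the transition and measurement matrices are themselves random.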
Ashbaugh, Lowell L; Eldred, Robert A
2004-01-01
The extent of mass loss on Teflon filters caused by ammonium nitrate volatilization can be a substantial fraction of the measured particulate matter with an aerodynamic diameter less than 2.5 µm (PM2.5) or 10 µm (PM10) mass and depends on where and when it was collected. There is no straightforward method to correct for the mass loss using routine monitoring data. In southern California during the California Acid Deposition Monitoring Program, 30-40% of the gravimetric PM2.5 mass was lost during summer daytime. Lower mass losses occurred at more remote locations. The estimated potential mass loss in the Interagency Monitoring of Protected Visual Environments network was consistent with the measured loss observed in California. The biased mass measurement implies that use of Federal Reference Method data for fine particles may lead to control strategies that are biased toward sources of fugitive dust, other primary particle emission sources, and stable secondary particles (e.g., sulfates). This analysis clearly supports the need for speciated analysis of samples collected in a manner that preserves volatile species. Finally, although there is loss of volatile nitrate (NO3-) from Teflon filters during sampling, the NO3- remaining after collection is quite stable. We found little loss of NO3- from Teflon filters after 2 hr under vacuum and 1 min of heating by a cyclotron proton beam. PMID:14871017
Wang, S L; Singer, M A
2009-07-13
The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.
Sallum, Loriz Francisco; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Carneiro, Renato Lajarim
2014-01-01
Supported silver nanoparticles on filter paper were synthesized using Tollens' reagent. Experimental designs were performed to obtain the highest SERS enhancement factor by studying the influence of the following parameters: filter paper pretreatment, type of filter paper, reactant concentrations, reaction time, and temperature. To this end, fractional factorial and central composite designs were used in order to optimize the synthesis for quantification of nicotinamide in the presence of excipients in a commercial cosmetic sample. The values achieved for the optimal condition were 150 mM ammonium hydroxide, 50 mM silver nitrate, 500 mM glucose, 8 min reaction time, 45 °C, pretreatment with ammonium hydroxide, and quantitative filter paper (1-2 µm). Despite the variation in SERS intensity, it was possible to use an adapted internal standard method to obtain a calibration curve with good precision. The coefficient of determination of the linear fit was 0.97. The method proposed in this work was capable of quantifying nicotinamide in a commercial cosmetic gel, at low concentration levels, with a relative error of 1.06% compared to HPLC. SERS spectroscopy provides faster analyses than HPLC, and neither complex sample preparation nor large amounts of reactants are necessary. PMID:24274308
Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.
2013-03-01
For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
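A grid search over filtering criteria of the kind STEPS performs can be sketched as an exhaustive sweep over user-defined parameter ranges. The criteria names and the toy scoring function below are invented stand-ins for "confident identifications at a fixed error rate":

```python
from itertools import product

def grid_search(score_fn, param_ranges):
    """Exhaustively score every parameter combination; return the best set."""
    names = list(param_ranges)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_ranges[n] for n in names)):
        params = dict(zip(names, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Hypothetical filtering criteria: stricter score / mass-error cutoffs trade
# fewer identifications against fewer false positives.
def toy_score(p):
    # stand-in for "number of confident identifications at fixed FDR"
    return -(p["xcorr"] - 2.0) ** 2 - (p["ppm"] - 10) ** 2 / 100

best, s = grid_search(toy_score, {"xcorr": [1.5, 2.0, 2.5], "ppm": [5, 10, 20]})
```

In practice the score function would re-filter the search-engine output at each parameter set and count identifications passing the confidence threshold.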
Characterization and optimization of acoustic filter performance by experimental design methodology.
Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M
2005-06-20
Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for up to 200 L/d perfusion cultures. However, the operation of cell retention systems depends on several settings that should be adjusted depending on the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency performance of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time, and recirculation ratio were studied using a fractional factorial 2^(5-1) design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency (SE) was the primary response, and it was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operation ranges, and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At a power input of 5 W, the broad range of robust high SE performance (95% or higher) was extended to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model developed to long-term perfusion cultures. PMID:15858795
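A 2^(5-1) fractional factorial design of the kind used here halves the 32-run full factorial by aliasing the fifth factor with the four-factor interaction. A minimal generator in coded -1/+1 levels (the defining relation I = ABCDE is one standard choice, assumed here rather than taken from the paper):

```python
from itertools import product

def fractional_factorial_2_5_1():
    """2^(5-1) design: run the full 2^4 factorial in factors A-D and set the
    fifth factor E = A*B*C*D (defining relation I = ABCDE), giving 16 runs
    instead of 32 while keeping main effects clear of two-factor aliases."""
    runs = []
    for a, b, c, d in product([-1, 1], repeat=4):
        runs.append((a, b, c, d, a * b * c * d))
    return runs

design = fractional_factorial_2_5_1()
```

Each tuple is one separator run, with the five coded factors mapped to cell concentration, perfusion rate, power input, stop time, and recirculation ratio; axial and center points would be appended for the response-surface fit.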
Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg
2013-12-01
The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450
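Step (3) above solves a discrete optimization over candidate landmark positions. For a chain-structured topology the MAP labeling can be found exactly by dynamic programming; the sketch below is only this chain special case with toy costs (the paper's method uses a general MRF with learnt geometric potentials and an MRF solver):

```python
def chain_mrf_min(unary, pairwise):
    """Exact MAP on a chain MRF by dynamic programming (Viterbi):
    minimize sum_i unary[i][x_i] + sum_i pairwise(x_i, x_{i+1})."""
    n, k = len(unary), len(unary[0])
    cost = list(unary[0])       # best cost of each label for the first node
    back = []                   # backpointers per stage
    for i in range(1, n):
        new_cost, ptr = [], []
        for b in range(k):
            best_a = min(range(k), key=lambda a: cost[a] + pairwise(a, b))
            ptr.append(best_a)
            new_cost.append(cost[best_a] + pairwise(best_a, b) + unary[i][b])
        cost = new_cost
        back.append(ptr)
    b = min(range(k), key=lambda x: cost[x])
    labels = [b]
    for ptr in reversed(back):  # trace back the optimal labeling
        labels.append(ptr[labels[-1]])
    labels.reverse()
    return labels, cost[b]

# 3 landmarks, 2 candidate positions each; unaries = candidate weights,
# pairwise = penalty for neighbors picking differing candidates
labels, total = chain_mrf_min([[0, 5], [5, 0], [0, 5]],
                              lambda a, b: 0 if a == b else 1)
```

Here the unaries pull the labeling to [0, 1, 0] at the price of two pairwise switches, which is still cheaper than any uniform labeling.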
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation require the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure error that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deep-well injection site.
Multiple data types (e.g., hydrochemical, geophysical, tracer, temperature, etc.) were collected prior to, and during an injection trial. Visualizing the trade-off between the calibration of each data type has provided the means of identifying some model-structure deficiencies.
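The Pareto front the method produces is the set of non-dominated parameter vectors over the per-data-type misfits. A minimal filter for extracting it from a finished swarm (the objective values below are invented misfits):

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective.
    p dominates q if p <= q in all objectives and p < q in at least one."""
    def dominates(p, q):
        return (all(pi <= qi for pi, qi in zip(p, q))
                and any(pi < qi for pi, qi in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# objective 1: misfit to hydrochemical data; objective 2: misfit to temperature data
front = pareto_front([(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)])
```

Plotting the surviving points reveals the trade-off: moving along the front improves the fit to one data type only at the cost of the other, which is what exposes model-structure deficiencies.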
Disselkamp, Robert S.; Kelly, James F.; Sams, Robert L.; Anderson, Gordon A.
2002-09-01
Optical feedback to the laser source in tunable diode laser spectroscopy (TDLS) is known to create intensity modulation noise due to etaloning and optical feedback (i.e., multiplicative technical noise) that usually limits the spectral signal-to-noise ratio (S/N). The large technical noise often limits absorption spectroscopy to noise floors 100-fold greater than the Poisson shot-noise limit due to fluctuations in the laser intensity. The high output powers generated by quantum cascade (QC) lasers, along with their high gain, make these injection laser systems especially susceptible to technical noise. In this article we discuss a method of using optimal filtering to reduce technical noise. We have observed S/N enhancements ranging from ~20% to a factor of ~50. The degree to which optimal filtering will enhance S/N depends on the similarity between the Fourier components of the technical noise and those of the signal, with lower S/N enhancements observed for more similar Fourier decompositions of the signal and technical noise. We also examine the linearity of optimally filtered spectra in both time and intensity. This was accomplished by creating a synthetic spectrum for the species being studied (CH4, N2O, CO2, and H2O in ambient air) utilizing line positions and line widths with an assumed Voigt profile from the HITRAN database. Agreement better than 0.036% in wavenumber, and 1.64% in intensity (up to a 260-fold intensity ratio employed), was observed. Our results suggest that rapid ex post facto digital optimal filtering can be used to enhance S/N for routine trace gas detection.
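Optimal filtering exploits the mismatch between the Fourier content of the signal and that of the technical noise. A closely related, easily testable sketch is a matched filter: cross-correlating the spectrum with the known line-shape template, which is S/N-optimal for white noise (the synthetic Gaussian line and ripple below are invented, not the paper's data):

```python
import math

def matched_filter(signal, template):
    """Cross-correlate data with a known line-shape template; the peak of
    the output locates the line and maximizes S/N for white noise."""
    n, m = len(signal), len(template)
    return [sum(signal[lag + i] * template[i] for i in range(m))
            for lag in range(n - m + 1)]

# Synthetic spectrum: a Gaussian absorption line embedded at index 50,
# on top of a deterministic etalon-like ripple
template = [math.exp(-0.5 * ((i - 10) / 3.0) ** 2) for i in range(21)]
spectrum = [0.05 * math.sin(0.9 * i) for i in range(120)]
for i in range(21):
    spectrum[50 + i] += template[i]

out = matched_filter(spectrum, template)
peak_lag = max(range(len(out)), key=lambda k: out[k])
```

The ripple's Fourier content sits far from the broad Gaussian's, so the filter suppresses it strongly; the true optimal (Wiener) filter generalizes this by weighting each Fourier component with the measured signal-to-noise spectral densities.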
Saito, Masatoshi
2007-11-15
Dual-energy contrast-agent-enhanced mammography is a technique for demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach, using the balanced filter method without switching the tube voltages, is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach.
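The iodine-specific subtraction image rests on weighted log subtraction: choosing the weight to cancel soft-tissue contrast leaves a signal proportional to the iodine loading alone. A sketch with invented monoenergetic attenuation coefficients (real spectra are polychromatic, which is exactly what the balanced filters and the SNR optimization address):

```python
import math

# Hypothetical linear attenuation coefficients at (low, high) energy, arbitrary units
MU_T = (0.8, 0.5)   # soft tissue
MU_I = (3.0, 1.2)   # iodine

def intensity(t, c, e):
    """Transmitted intensity for tissue thickness t and iodine loading c
    at energy index e (Beer-Lambert law, unit incident intensity)."""
    return math.exp(-(MU_T[e] * t + MU_I[e] * c))

def de_signal(t, c):
    """Weighted log subtraction; w = mu_t_low / mu_t_high cancels the
    tissue term, leaving (mu_i_low - w * mu_i_high) * c."""
    w = MU_T[0] / MU_T[1]
    return -math.log(intensity(t, c, 0)) + w * math.log(intensity(t, c, 1))
```

With these coefficients the result is 1.08·c regardless of tissue thickness, so variations in breast composition drop out of the subtraction image while the iodine signal survives.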