Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.
2013-03-01
For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
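A STEPS-style sweep is, at its core, an exhaustive search over a grid of filtering thresholds, keeping the combination that yields the most confident identifications. A minimal sketch in Python; the parameter names and the `count_confident` scoring callback are illustrative placeholders, not the tool's actual interface:

```python
from itertools import product

def grid_search_filter(psms, param_grid, count_confident):
    """Try every combination of filtering thresholds and keep the one
    that yields the most confident identifications (a STEPS-style sweep).

    psms            -- list of peptide-spectrum-match records
    param_grid      -- dict mapping parameter name -> list of candidate values
    count_confident -- function(psms, params) -> number of confident IDs
    """
    names = sorted(param_grid)
    best_params, best_count = None, -1
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        n = count_confident(psms, params)
        if n > best_count:
            best_params, best_count = params, n
    return best_params, best_count
```

The grid grows multiplicatively with each parameter, which is why user-defined ranges matter: the sweep is exhaustive within them.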
A stochastic gradient adaptive filter with gradient adaptive step size
V. John Mathews; Zhenhua Xie
1993-01-01
The step size of this adaptive filter is changed according to a gradient descent algorithm designed to reduce the squared estimation error during each iteration. An approximate analysis of the performance of the adaptive filter when its inputs are zero mean, white, and Gaussian noise and the set of optimal coefficients are time varying according to a random-walk model is also presented.
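The idea can be sketched as an LMS filter whose step size is itself updated by a stochastic gradient rule, in the spirit of the algorithm described above. The update constants and bounds below are illustrative choices, not the paper's values:

```python
import numpy as np

def lms_adaptive_step(x, d, order=4, mu0=0.01, rho=1e-4, mu_min=1e-5, mu_max=0.1):
    """LMS filter whose step size mu is itself adapted by gradient descent
    on the squared estimation error. `rho` is the step-size learning rate;
    the clip bounds keep mu in a stable range."""
    w = np.zeros(order)
    mu = mu0
    prev_e, prev_u = 0.0, np.zeros(order)
    errors = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # current input vector
        e = d[n] - w @ u                  # estimation error
        # adapt the step size using the gradient of e^2 with respect to mu
        mu = np.clip(mu + rho * e * prev_e * (u @ prev_u), mu_min, mu_max)
        w = w + mu * e * u                # usual LMS coefficient update
        prev_e, prev_u = e, u
        errors.append(e)
    return w, np.array(errors)
```

On a noiseless system-identification task the error decays toward zero while mu settles at a value the data supports, rather than one chosen by hand.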
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert
1998-04-30
Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The ''inverted candle filter system'' uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The ''sheet filter system'' uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies.
A plan for the Option I, bench-scale test program has also been developed based on the issues identified. The two advanced barrier filter systems have been found to have the potential to be significantly more reliable and less expensive to operate than standard ceramic candle filter system designs. Their key development requirements are the assessment of the design and manufacturing feasibility of the ceramic filter elements, and the small-scale demonstration of their conceptual reliability and availability merits.
Optimal proposal densities for particle filters
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2012-04-01
Most data-assimilation problems in the geosciences are of very large dimension and nonlinear, either through nonlinear models, and/or through nonlinear observation operators. Most present-day data-assimilation methods for large-dimensional problems are based on linearisations, such as (Ensemble) Kalman filters and variational methods like 4DVar. There is a growing need for fully nonlinear data-assimilation methods, and particle filters could in principle serve this goal. However, standard particle filters are notoriously inefficient in that they typically need millions or more model runs to represent the posterior pdf. It is easy to show that this large number is related to the number of independent observations that determine the weighting of the particles via the likelihood. These weights typically vary enormously, such that e.g. a weighted mean is effectively represented via one or a few particles. This problem has been reduced to some extent by using proposal densities that bring the particles closer to observations, and as such reduce the variance in the weights of particles. However, even if the so-called 'optimal proposal density' is used, in which the proposal density takes into account the future observations, the variance in the weights is so large that an astronomical number of particles is needed for any real-sized problem. In this presentation we discuss new proposal densities that solve this weight degeneracy problem. The idea is simply that the proposal densities can be used not only to bring particles close to observations, but also to ensure that the weights of the particles are very similar. Two examples of such proposal densities are discussed, the equivalent-weights particle filter, and the new Gaussian-peak particle filter. The first scheme determines a target weight at the last time step before the observations come in, and moves the particles such that each obtains a weight very close to that target weight.
The second new scheme slightly perturbs the particles such that while sampled from a very narrow pdf, their weights are determined from a very broad pdf, ensuring that the weights are nearly equal. The methods are implemented and tested on a high-dimensional highly nonlinear one-dimensional problem and compared to the standard particle filter and the so-called 'optimal proposal density' scheme. It is shown that both the standard particle filter and the 'optimal proposal density' scheme are degenerate, while both new schemes properly represent the posterior pdf using a very small number of particles. These results show that particle filters can be made to work for real geophysical problems when the extra freedom in the proposal density related to the weights of the particles is explored.
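The weight degeneracy the authors describe is easy to reproduce: in a standard (bootstrap) particle filter, the effective sample size collapses as the number of independent observations grows. A small illustration with Gaussian likelihoods and illustrative parameters:

```python
import numpy as np

def effective_sample_size(log_weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights, computed stably in log space."""
    lw = log_weights - np.max(log_weights)
    w = np.exp(lw)
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def bootstrap_weights(particles, obs, obs_std):
    """Log-likelihood weights of a standard (bootstrap) particle filter.
    Each of the d independent observations multiplies the likelihood,
    which is exactly what makes the weights degenerate as d grows.

    particles -- (N, d) array of state samples
    obs       -- (d,) observation vector
    """
    sq = (particles - obs) ** 2
    return -0.5 * np.sum(sq, axis=1) / obs_std ** 2
```

With one observation most particles keep appreciable weight; with a hundred independent observations a handful of particles carry essentially all of it, which is the degeneracy the equivalent-weights construction is designed to avoid.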
Synthesis of optimal detail-restoring stack filters for image processing
Bing Zeng; Hongbing Zhou; Yrjö Neuvo
1991-01-01
A two-step method is presented to synthesize optimal stack filters under the mean absolute error (MAE) criterion. First, the probabilities needed in the optimal filter design are estimated based on images. Second, the linear program (LP) required for finding the best filter is avoided by a `reasonably good' suboptimal routine which only involves data comparisons. A sufficient condition under which
Stochastic Gradient Adaptive Step Size Algorithms for Adaptive Filtering
Douglas, Scott C.
Scott C. Douglas and V. John Mathews, Department of Electrical Engineering, University of Utah, Salt Lake City, Utah 84112. Abstract: In this paper, we provide an overview of adaptive filtering algorithms that employ gradient-adaptive step sizes.
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar
2002-06-30
Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 C (1000 F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.
Particle Filter with Swarm Move for Optimization
Yang, Shengxiang
A swarm-move step from particle swarm optimization (PSO) is incorporated into the particle filter: the PSO update equation is treated as a move step, so that the ability of PSO to search for the optimal position is embedded into the particle filter. This improves optimization in both convergence speed and final fitness in comparison with the PSO algorithm over a set of standard benchmark problems.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average number of false positives per image.
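A generic version of such a tuner is finite-difference gradient ascent on a composite score, with the step size shrunk whenever a move fails to improve the score. The `score` callback and parameter names below are placeholders, not JPL's implementation:

```python
def tune_filter_params(score, params, step=0.1, shrink=0.5, iters=50, eps=1e-3):
    """Finite-difference gradient ascent on a composite performance score
    (e.g. correlation peak height combined with peak-to-sidelobe ratio),
    halving the step size whenever a move fails to improve the score."""
    params = dict(params)
    best = score(params)
    for _ in range(iters):
        # numerical gradient of the score with respect to each parameter
        grad = {}
        for k in params:
            trial = dict(params)
            trial[k] += eps
            grad[k] = (score(trial) - best) / eps
        candidate = {k: params[k] + step * grad[k] for k in params}
        val = score(candidate)
        if val > best:
            params, best = candidate, val
        else:
            step *= shrink          # adaptive step: back off on failure
        if step < 1e-6:
            break
    return params, best
```

The same loop applies to any small set of continuous filter parameters such as alpha, beta, and gamma; only the score function is problem-specific.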
Steps Toward Optimal Competitive Scheduling
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen
2006-01-01
This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel, compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers.
This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.
Optimal multiobjective design of digital filters using spiral optimization technique.
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2013-01-01
The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108
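The core move of spiral optimization is simple: rotate each point around the current best and contract the radius, so the population spirals inward while still sweeping the neighborhood. A minimal 2-D sketch; the rotation angle and contraction rate are illustrative choices:

```python
import math

def spiral_step(point, center, r=0.95, theta=math.pi / 4):
    """One 2-D spiral move toward the current best point `center`:
    rotate the offset by theta and contract it by r (Tamura-Yasuda style)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    rx = math.cos(theta) * dx - math.sin(theta) * dy
    ry = math.sin(theta) * dx + math.cos(theta) * dy
    return (center[0] + r * rx, center[1] + r * ry)

def spiral_optimize(f, points, iters=200, r=0.95):
    """Minimize f over 2-D `points` by spiraling all of them around the
    best point found so far, updating the best whenever a move improves it."""
    best = min(points, key=f)
    for _ in range(iters):
        points = [spiral_step(p, best) for p in points]
        candidate = min(points, key=f)
        if f(candidate) < f(best):
            best = candidate
    return best
```

The contraction gives the fast convergence the authors cite; the rotation is what lets points pass near minima their straight-line paths would miss.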
Mandal, J K
2012-01-01
In this paper a novel approach for denoising images corrupted by random valued impulses has been proposed. Noise suppression is done in two steps. The detection of noisy pixels is done using all neighbor directional weighted pixels (ANDWP) in the 5 x 5 window. The filtering scheme is based on minimum variance of the four directional pixels. In this approach, a relatively recent category of stochastic global optimization technique, i.e., particle swarm optimization (PSO), has also been used for searching the parameters of detection and filtering operators required for optimal performance. Results obtained show better denoising and preservation of fine details for highly corrupted images.
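The PSO engine used for such parameter searches is the standard velocity/position update, with each particle pulled toward its personal best and the swarm best. A plain sketch; the coefficients are common textbook values, not the paper's:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, seed=1):
    """Plain particle swarm optimization: velocities blend inertia, a pull
    toward each particle's personal best, and a pull toward the swarm best."""
    rnd = random.Random(seed)
    xs = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rnd.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rnd.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = list(xs[i]), val
                if val < gval:
                    gbest, gval = list(xs[i]), val
    return gbest, gval
```

In the paper's setting, `f` would score the detector and filter parameters on a training image; here any objective works.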
On Optimal Infinite Impulse Response Edge Detection Filters
Sudeep Sarkar; Kim L. Boyer
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated
Optimal stack filtering and classical Bayes decision
B. Zeng; M. Gabbouj; Y. Neuvo
1991-01-01
Optimal stack filtering under the mean absolute error (MAE) criterion is studied. It is first shown that this problem is equivalent to the classical a priori Bayes minimum-cost decision. Generally, a linear program (LP) with O(b·2^b) variables and constraints (b is the window width) is required for finding the best filter. Instead, the authors develop a suboptimal routine which renders
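The stack filter structure underlying this formulation can be sketched directly: threshold-decompose the signal, apply one positive Boolean function to every binary slice, and stack (sum) the results. The median is the classic example, realized by a majority vote:

```python
def threshold_decompose(x, levels):
    """Binary slices x_t[i] = 1 if x[i] >= t, for t = 1..levels-1."""
    return [[1 if v >= t else 0 for v in x] for t in range(1, levels)]

def stack_filter(x, pbf, width, levels):
    """Apply a stack filter: run the positive Boolean function `pbf` on each
    binary slice of the threshold decomposition, then stack (sum) the results.
    Edge samples are handled by replicating the border values."""
    half = width // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    slices = threshold_decompose(padded, levels)
    out = []
    for i in range(len(x)):
        out.append(sum(pbf(s[i:i + width]) for s in slices))
    return out

def median_pbf(window):
    """Majority vote: the positive Boolean function realizing the median."""
    return 1 if sum(window) > len(window) // 2 else 0
```

The MAE-optimal design problem discussed above is the search for the best positive Boolean function to plug into this structure; with `median_pbf` the output coincides with a running median.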
Optimization Integrator for Large Time Steps.
Gast, Theodore F; Schroeder, Craig; Stomakhin, Alexey; Jiang, Chenfanfu; Teran, Joseph M
2015-10-01
Practical time steps in today's state-of-the-art simulators typically rely on Newton's method to solve large systems of nonlinear equations. In practice, this works well for small time steps but is unreliable at large time steps at or near the frame rate, particularly for difficult or stiff simulations. We show that recasting backward Euler as a minimization problem allows Newton's method to be stabilized by standard optimization techniques with some novel improvements of our own. The resulting solver is capable of solving even the toughest simulations at the [Formula: see text] frame rate and beyond. We show how simple collisions can be incorporated directly into the solver through constrained minimization without sacrificing efficiency. We also present novel penalty collision formulations for self collisions and collisions against scripted bodies designed for the unique demands of this solver. Finally, we show that these techniques improve the behavior of Material Point Method (MPM) simulations by recasting it as an optimization problem. PMID:26357249
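The recasting works because one backward Euler step is the stationarity condition of an incremental potential, so Newton's method can be safeguarded with a simple backtracking line search on that potential. A one-degree-of-freedom sketch, not the paper's solver:

```python
def backward_euler_step(x, v, dt, phi, dphi, d2phi, m=1.0, iters=50):
    """One implicit-Euler step for m*x'' = -phi'(x), recast as minimizing
    E(y) = m*(y - x - dt*v)^2 / (2*dt^2) + phi(y),
    whose stationarity condition is exactly the backward Euler equation."""
    def E(y):
        return m * (y - x - dt * v) ** 2 / (2 * dt ** 2) + phi(y)
    def dE(y):
        return m * (y - x - dt * v) / dt ** 2 + dphi(y)
    def d2E(y):
        return m / dt ** 2 + d2phi(y)
    y = x
    for _ in range(iters):
        step = -dE(y) / d2E(y)              # Newton direction
        t = 1.0
        while E(y + t * step) > E(y) and t > 1e-8:
            t *= 0.5                        # backtracking stabilizes Newton
        y += t * step
        if abs(dE(y)) < 1e-10:
            break
    return y, (y - x) / dt                  # new position and velocity
```

Because E decreases at every accepted step, the iteration cannot blow up the way a raw Newton solve can at large time steps, which is the paper's central observation.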
Optimal time step for incompressible SPH
NASA Astrophysics Data System (ADS)
Violeau, Damien; Leroy, Agnès
2015-05-01
A classical incompressible algorithm for Smoothed Particle Hydrodynamics (ISPH) is analyzed in terms of critical time step for numerical stability. For this purpose, a theoretical linear stability analysis is conducted for unbounded homogeneous flows, leading to an analytical formula for the maximum CFL (Courant-Friedrichs-Lewy) number as a function of the Fourier number. This gives the maximum time step as a function of the fluid viscosity, the flow velocity scale and the SPH discretization size (kernel standard deviation). Importantly, the maximum CFL number at large Reynolds number appears twice smaller than with the traditional Weakly Compressible (WCSPH) approach. As a consequence, the optimal time step for ISPH is only five times larger than with WCSPH. The theory agrees very well with numerical data for two usual kernels in a 2-D periodic flow. On the other hand, numerical experiments in a plane Poiseuille flow show that the theory overestimates the maximum allowed time step for small Reynolds numbers.
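The shape of such a stability bound is the minimum of an advective (CFL) limit and a viscous (Fourier) limit. A sketch with placeholder coefficients; the actual maximum CFL and Fourier numbers are the ones derived in the paper, not these:

```python
def isph_time_step(h, u_max, nu, cfl=0.25, fourier=0.125):
    """Candidate ISPH time step: the smaller of an advective (CFL) limit
    and a viscous (Fourier) limit on the time step.

    h     -- discretization size (kernel standard deviation)
    u_max -- flow velocity scale
    nu    -- kinematic viscosity
    """
    dt_adv = cfl * h / u_max            # advective constraint: u*dt/h <= CFL
    dt_visc = fourier * h ** 2 / nu     # viscous constraint: nu*dt/h^2 <= Fo
    return min(dt_adv, dt_visc)
```

At large Reynolds number the advective limit dominates; at small Reynolds number the viscous limit takes over, which is the regime where the paper notes the linear theory overestimates the allowable step.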
FIR Filter Design via Spectral Factorization and Convex Optimization
UCSB, 1997-10-24
[Lecture slides; the surviving outline covers convex optimization formulations, spectral factorization methods, and discretization.]
Optimization of stack filters based on mirrored threshold decomposition
José L. Paredes; Gonzalo R. Arce
2001-01-01
An adaptive optimization algorithm for the design of a new class of stack filters is presented. Unlike stack smoothers, this new class of stack filters, based on mirrored threshold decomposition, has been empowered not only with lowpass filtering characteristics but with bandpass and highpass filtering characteristics as well. Therefore, these filters can be effectively used in applications where frequency selection
Design of Optimal Stack Filter Under MAE Criterion
Win-long Lee; Kuo-chin Fan; Zhi-ming Chen
1997-01-01
A deterministic algorithm is proposed to design the optimal stack filter. The proposed algorithm can generate the optimal stack filter in one second for a window size of 9 and it can still generate the optimal stack filter for a window size of 21 although it takes about 4 hours. Experimental results reveal the feasibility and efficiency of the proposed
A SIMULATION-BASED OPTIMIZATION APPROACH TO POLYMER EXTRUSION FILTER
Jenkins, Lea
A Simulation-Based Optimization Approach to Polymer Extrusion Filter Design (K.R. Fowler et al.): methods for finding optimal parameters for the filter such that its lifetime is maximized, based on a model that describes the deposition of debris particles in the filter; optimization algorithms are used to search for these parameters.
Eroglu, Abdullah
2010-01-01
A triple band microstrip tri-section bandpass filter using stepped impedance resonators (SIRs) is designed, simulated, built, and measured using a hairpin structure. The complete design procedure is given in detail from the analytical stage to the implementation stage. The coupling between SIRs is investigated in detail for the first time by studying its effect on the filter characteristics, including bandwidth and attenuation, to optimize the filter performance. The simulation of the filter is performed using a method-of-moments-based 2.5D planar electromagnetic simulator. The filter is then implemented on RO4003 material and measured. The simulated and measured results are compared and found to be very close. The effect of coupling on the filter performance is then investigated using the electromagnetic simulator. It is shown that the coupling effect between SIRs can be used as a design knob to obtain a bandpass filter with a better performance for the desired frequency band using the proposed filter topology. The results of this work can be used in wireless communication systems where multiple frequency bands are needed.
Design of optimal stack filters under the MAE criterion
Win-Long Lee; Kuo-Chin Fan; Zhi-Ming Chen
1999-01-01
The design of optimal stack filters under the MAE criterion is addressed in this paper. In our work, the Hasse diagram is adopted to represent the positive Boolean functions to solve the optimization problem. After problem transformation, the finding of the optimal stack filter is equivalent to the finding of the optimal on-set such that the total cost of the
Optimal digital filtering for tremor suppression.
Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R
2000-05-01
Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com. PMID:10851810
Finding of optimal stack filter by graphic searching methods
Chin-Chuan Han; Kuo-Chin Fan
1997-01-01
An efficient process to filter noise via an optimal stack filter is proposed. The graphic search based techniques are employed to speed up the finding of the optimal stack filter. Experimental results and performance evaluation are demonstrated to show the efficiency of our proposed method
Optimal parallel stack filtering under the mean absolute error criterion
Bing Zeng; Yrjö Neuvo
1994-01-01
The authors extend the configuration of stack filtering to develop a new class of stack-type filters called parallel stack filters (PSFs). As a basis for the parallel stack filtering, the block threshold decomposition (BTD) is introduced, and its properties are investigated. The design of optimal PSFs under the mean absolute error (MAE) criterion is shown to be similar to the
Optimal filters with heuristic 1-norm sparsity constraints
NASA Astrophysics Data System (ADS)
Yazdani, Mehrdad; Hecht-Nielsen, Robert
2011-09-01
We present a design method for sparse optimal Finite Impulse Response (FIR) filters that improve the visibility of a desired stochastic signal corrupted with white Gaussian noise. We emphasize that the filters we seek are of high order but sparse, thus significantly reducing computational complexity. An optimal FIR filter for the estimation of a desired signal corrupted with white noise can be designed by maximizing the signal-to-noise ratio (SNR) of the filter output with the constraint that the magnitude (in 2-norm) of the FIR filter coefficients is set to unity [1, 2]. This optimization problem is in essence maximizing the Rayleigh quotient and is thus equivalent to finding the eigenvector with the largest eigenvalue [3]. While such filters are optimal, they are rarely sparse. To ensure sparsity, one must introduce a cardinality constraint in the optimization procedure. For high order filters such constraints are computationally burdensome due to the combinatorial search space. We relax the cardinality constraint by using the 1-norm approximation of the cardinality function. This is a relaxation heuristic similar to the recent sparse filter design work of Baran, Wei, and Oppenheim [4]. The advantage of this relaxation heuristic is that the solutions tend to be sparse and the optimization procedure reduces to a convex program, thus ensuring global optimality. In addition to our proposed optimization procedure for deriving sparse FIR filters, we show examples where sparse high-order filters perform significantly better than low-order filters, while complexity is reduced by a factor of 10.
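The dense design step described above reduces to an eigenvalue problem: with unit-norm taps and white noise, the output SNR is a Rayleigh quotient maximized by the dominant eigenvector of the signal covariance. A sketch, with a soft-threshold stand-in for the 1-norm sparsification (illustrative only, not the authors' convex program):

```python
import numpy as np

def max_snr_fir(Rs, noise_var):
    """FIR taps maximizing output SNR against white noise: with unit-norm
    taps the SNR is the Rayleigh quotient h'Rs h / (noise_var * h'h),
    maximized by the dominant eigenvector of the signal covariance Rs."""
    evals, evecs = np.linalg.eigh(Rs)     # eigenvalues in ascending order
    h = evecs[:, -1]                      # eigenvector of the largest eigenvalue
    return h, evals[-1] / noise_var       # taps and achieved SNR

def soft_threshold(h, lam):
    """Crude sparsification: shrink small taps toward zero. A stand-in for
    the 1-norm relaxation, shown only to illustrate the sparsity goal."""
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)
```

The 1-norm relaxation in the paper keeps the problem convex while favoring exactly this kind of many-zeros solution.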
Finding of optimal stack filter by using graphic searching methods
Zhi-ming Chen; Chin-Chuan Han; Kuo-Chin Fan
1995-01-01
A graphic searching algorithm is proposed to find the optimal stack filter. The search of the optimal stack filter is reduced to a problem of finding a minimal path from the root node to the optimal node in the error cone graph (ECG). Two graphic searching techniques, the greedy and A* algorithms, are applied to avoid searching an extremely
Evolutionary Gabor Filter Optimization with Application to Vehicle Detection
Bebis, George
Zehang Sun, George Bebis. The design and selection of Gabor filters in pattern classification have been mostly done on a trial and error basis. Existing techniques are either only suitable for a small number of filters or less problem
Metal finishing wastewater pressure filter optimization
Norford, S.W.; Diener, G.A.; Martin, H.L.
1992-01-01
The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, which is the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated from the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal; 85% of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.
Optimal filters for detecting cosmic bubble collisions
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Johnson, M. C.; Peiris, H. V.
2012-05-01
A number of well-motivated extensions of the ΛCDM concordance cosmological model postulate the existence of a population of sources embedded in the cosmic microwave background. One such example is the signature of cosmic bubble collisions which arise in models of eternal inflation. The most unambiguous way to test these scenarios is to evaluate the full posterior probability distribution of the global parameters defining the theory; however, a direct evaluation is computationally impractical on large datasets, such as those obtained by the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. A method to approximate the full posterior has been developed recently, which requires as an input a set of candidate sources which are most likely to give the largest contribution to the likelihood. In this article, we present an improved algorithm for detecting candidate sources using optimal filters, and apply it to detect candidate bubble collision signatures in WMAP 7-year observations. We show both theoretically and through simulations that this algorithm provides an enhancement in sensitivity over previous methods by a factor of approximately two. Moreover, no other filter-based approach can provide a superior enhancement of these signatures. Applying our algorithm to WMAP 7-year observations, we detect eight new candidate bubble collision signatures for follow-up analysis.
Optimization of photon correlations by frequency filtering
NASA Astrophysics Data System (ADS)
González-Tudela, Alejandro; del Valle, Elena; Laussy, Fabrice P.
2015-04-01
Photon correlations are a cornerstone of quantum optics. Recent works [E. del Valle, New J. Phys. 15, 025019 (2013), 10.1088/1367-2630/15/2/025019; A. Gonzalez-Tudela et al., New J. Phys. 15, 033036 (2013), 10.1088/1367-2630/15/3/033036; C. Sanchez Muñoz et al., Phys. Rev. A 90, 052111 (2014), 10.1103/PhysRevA.90.052111] have shown that by keeping track of the frequency of the photons, rich landscapes of correlations are revealed. Stronger correlations are usually found where the system emission is weak. Here, we characterize both the strength and signal of such correlations, through the introduction of the "frequency-resolved Mandel parameter." We study a plethora of nonlinear quantum systems, showing how one can substantially optimize correlations by combining parameters such as pumping, filtering windows and time delay.
Optimal filtering of the LISA data
Andrzej Krolak; Massimo Tinto; Michele Vallisneri
2007-07-19
The LISA time-delay-interferometry responses to a gravitational-wave signal are rewritten in a form that accounts for the motion of the LISA constellation around the Sun; the responses are given in closed analytic forms valid for any frequency in the band accessible to LISA. We then present a complete procedure, based on the principle of maximum likelihood, to search for stellar-mass binary systems in the LISA data. We define the required optimal filters, the amplitude-maximized detection statistic (analogous to the F statistic used in pulsar searches with ground-based interferometers), and discuss the false-alarm and detection probabilities. We test the procedure in numerical simulations of gravitational-wave detection.
Adaptive Statistical Optimization Techniques for Firewall Packet Filtering
Hazem Hamed; Adel El-atawy; Ehab Al-shaer
2006-01-01
Packet filtering plays a critical role in the performance of many network devices such as firewalls, IPSec gateways, DiffServ and QoS routers. A tremendous amount of research has been devoted to optimizing packet filters. However, most of the related works use deterministic techniques and do not exploit the traffic characteristics in their optimization schemes. In addition, most packet classifiers give
Optimal correlation filters for implementation on deformable mirror devices
NASA Astrophysics Data System (ADS)
Vijaya Kumar, B. V. K.; Carlson, Daniel
1991-11-01
A systematic procedure is presented for designing optimal correlation filters for implementation on deformable mirror devices (DMDs) exhibiting cross-coupled amplitude and phase characteristics. The utility of the algorithm for designing such filters is illustrated using five different device characteristics: a phase-only filter, a binary phase-only filter, a diagonal line characteristic, a DMD zeroth-order characteristic, and a DMD first-order characteristic. Results are also presented regarding the signal-to-noise ratio and peak-to-correlation energy obtainable using these filters. The performance achievable using DMD-type characteristics was found to be close to that of the phase-only filter.
Application of Optimization Technique for GPS Navigation Kalman Filter Adaptation
Dah-jing Jwo; Shun-chieh Chang
2008-01-01
The position-velocity (PV) process model can be applied to the GPS Kalman filter adequately when navigating a vehicle with constant speed. However, when an abrupt acceleration motion occurs, the filtering solution becomes very poor or even diverges. To avoid the limitation of the Kalman filter, particle swarm optimization can be incorporated into the filtering mechanism as a dynamic model corrector.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
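The common skeleton of these VSS algorithms can be illustrated with a minimal sketch of a variable step-size NLMS identifier. This is a hedged illustration only: the paper's MSD-minimizing step-size vector is not reproduced; the error-driven step-size rule, constants, and signal lengths below are assumptions chosen for clarity.

```python
import numpy as np

def vss_nlms(x, d, num_taps, mu_min=0.01, mu_max=1.0, alpha=0.97, gamma=1e-4, eps=1e-8):
    """Variable step-size NLMS for system identification (illustrative sketch).

    The step size grows after large errors and shrinks after small ones;
    this generic rule stands in for the paper's MSD-optimal step-size vector.
    """
    w = np.zeros(num_taps)               # adaptive filter coefficients
    mu = mu_max                          # initial step size
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e[n] = d[n] - w @ u                   # a priori estimation error
        mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
        w += mu * e[n] * u / (u @ u + eps)    # normalized coefficient update
    return w, e
```

With a noise-free FIR channel and white input, the estimated coefficients converge to the true channel taps while the step size decays toward its floor.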
Optimization-based tuning of LPV fault detection filters for civil transport aircraft
NASA Astrophysics Data System (ADS)
Ossmann, D.; Varga, A.
2013-12-01
In this paper, a two-step optimal synthesis approach of robust fault detection (FD) filters for the model based diagnosis of sensor faults for an augmented civil aircraft is suggested. In the first step, a direct analytic synthesis of a linear parameter varying (LPV) FD filter is performed for the open-loop aircraft using an extension of the nullspace based synthesis method to LPV systems. In the second step, a multiobjective optimization problem is solved for the optimal tuning of the LPV detector parameters to ensure satisfactory FD performance for the augmented nonlinear closed-loop aircraft. Worst-case global search has been employed to assess the robustness of the fault detection system in the presence of aerodynamics uncertainties and estimation errors in the aircraft parameters. An application of the proposed method is presented for the detection of failures in the angle-of-attack sensor.
Design of passive filter circuit based on robust optimization
NASA Astrophysics Data System (ADS)
Zhao, Hong; Chen, Gang
2013-03-01
In view of the change in filter performance caused by deviations of circuit component parameter values from their design values, the concept of robust optimization design for the passive filter circuit is presented. The objective function is chosen to minimize the ripples and the maximal variations of system performance. An optimization strategy combining the random direction search method with a compound optimum was adopted to solve this nonlinear programming problem with two-level optimization. The theory is applied to an 800 MHz transmitter bandpass filter circuit. Compared with the original design and conventional optimization, the passband performance of the robust optimized circuit is flatter and its fluctuation is smaller when component parameters change within their rated tolerance. The filter performance of the circuit is thus improved, and the method presented in this paper is effective and superior.
On the Distance to Optimality of the Geometric Approximate Minimum-Energy Attitude Filter
Trumpf, Jochen
We study the distance to optimality of the recent geometric approximate minimum-energy (GAME) filter, an attitude filter for estimation on the rotation group SO(3). The GAME filter approximates the minimum-energy (optimal) filtering solution
Optimal filter bandwidth for pulse oximetry
NASA Astrophysics Data System (ADS)
Stuban, Norbert; Niwayama, Masatsugu
2012-10-01
Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
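The insensitivity of the R ratio to the low-pass corner frequency can be demonstrated with a small simulation. This is a hedged sketch: the sampling rate, the harmonic pulse model, and the second-order Butterworth filter below are illustrative assumptions, not the authors' hardware; the key point is that a linear filter scales both photodiode channels identically at every frequency, so their AC/DC ratio is preserved.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                      # assumed sampling rate, Hz
f0 = 1.2                        # pulse fundamental (72 bpm), Hz
t = np.arange(0, 30, 1 / fs)

# Synthetic pulse waveform with harmonics; both channels share the shape,
# scaled differently -- a simplified stand-in for the red/IR photodiode signals.
shape = (np.cos(2 * np.pi * f0 * t)
         + 0.4 * np.cos(2 * np.pi * 2 * f0 * t)
         + 0.15 * np.cos(2 * np.pi * 3 * f0 * t))
red = 2.0 + 0.03 * shape
ir = 2.5 + 0.05 * shape

def ac_over_dc(sig, corner_hz):
    """Crude AC/DC estimate after zero-phase low-pass filtering."""
    b, a = butter(2, corner_hz / (fs / 2), btype="low")
    y = filtfilt(b, a, sig)
    return (y.max() - y.min()) / y.mean()

# R ratio with a wide filter (15 Hz) vs. corner set at the fundamental.
R_wide = ac_over_dc(red, 15.0) / ac_over_dc(ir, 15.0)
R_narrow = ac_over_dc(red, f0) / ac_over_dc(ir, f0)
```

Because the same filter acts on both channels, R is essentially unchanged even when the harmonics are removed, consistent with the paper's finding.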
Optimal filter systems for photometric redshift estimation
N. Benitez; M. Moles; J. A. L. Aguerri; E. Alfaro; T. Broadhurst; J. Cabrera; F. J. Castander; J. Cepa; M. Cervino; D. Cristobal-Hornillos; A. Fernandez-Soto; R. M. Gonzalez-Delgado; L. Infante; I. Marquez; V. J. Martinez; J. Masegosa; A. Del Olmo; J. Perea; F. Prada; J. M. Quintana; S. F. Sanchez
2008-12-18
In the next years, several cosmological surveys will rely on imaging data to estimate the redshift of galaxies, using traditional filter systems with 4-5 optical broad bands; narrower filters improve the spectral resolution, but strongly reduce the total system throughput. We explore how photometric redshift performance depends on the number of filters n_f, characterizing the survey depth through the fraction of galaxies with unambiguous redshift estimates. For a combination of total exposure time and telescope imaging area of 270 hrs m^2, 4-5 filter systems perform significantly worse, both in completeness depth and precision, than systems with n_f >= 8 filters. Our results suggest that for low n_f, the color-redshift degeneracies overwhelm the improvements in photometric depth, and that even at higher n_f, the effective photometric redshift depth decreases much more slowly with filter width than naively expected from the reduction in S/N. Adding near-IR observations improves the performance of low n_f systems, but still the system which maximizes the photometric redshift completeness is formed by 9 filters with logarithmically increasing bandwidth (constant resolution) and half-band overlap, reaching ~0.7 mag deeper, with 10% better redshift precision, than 4-5 filter systems. A system with 20 constant-width, non-overlapping filters reaches only ~0.1 mag shallower than 4-5 filter systems, but has a precision almost 3 times better, dz = 0.014(1+z) vs. dz = 0.042(1+z). We briefly discuss a practical implementation of such a photometric system: the ALHAMBRA survey.
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus
2015-06-01
Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
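For the white-noise case the matched-filter idea reduces to correlating the recording with each template and thresholding the output. The sketch below is illustrative only: the template shape, noise level, and local-maximum peak picking are assumptions, and the paper's analytic threshold and the whitening step needed for colored noise are not reproduced.

```python
import numpy as np

def matched_filter_detect(signal, template, threshold):
    """Detect template occurrences via linear (matched) filtering.

    Assumes white noise; for colored Gaussian noise the signal and template
    would first be whitened with the inverse noise covariance.
    """
    # Matched filter output: correlation of the recording with the template.
    out = np.correlate(signal, template, mode="same")
    # Local maxima above threshold mark candidate spike times.
    peaks = [i for i in range(1, len(out) - 1)
             if out[i] > threshold and out[i] >= out[i - 1] and out[i] >= out[i + 1]]
    return np.array(peaks), out
```

Embedding a known template at a few positions in low-amplitude noise, the detector recovers the spike times to within a couple of samples.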
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
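The merge step of an information filter amounts to adding information matrices and information vectors across independent estimates. A minimal sketch under the assumption of independent Gaussian estimates (the five-model setup, dynamics propagation, and secular-variation states are not reproduced):

```python
import numpy as np

def fuse_information(estimates):
    """Fuse independent Gaussian estimates (x_i, P_i) in information form.

    The information matrix Y_i = inv(P_i) and information vector y_i = Y_i x_i
    simply add across independent data sets -- the property a recursive
    information filter exploits when combining separate field models.
    """
    Y = sum(np.linalg.inv(P) for _, P in estimates)        # total information
    y = sum(np.linalg.inv(P) @ x for x, P in estimates)    # total info vector
    P_fused = np.linalg.inv(Y)
    return P_fused @ y, P_fused
```

Two equally weighted estimates fuse to their average with half the covariance, as expected.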
Optimal digital filtering for tremor suppression
Juan G. Gonzalez; Edwin A. Heredia; Tariq Rahman; Kenneth E. Barner; Gonzalo R. Arce
2000-01-01
Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate
NASA Astrophysics Data System (ADS)
Baroncini, F.; Castelli, F.
2009-09-01
Data assimilation techniques based on ensemble filtering are widely regarded as the best approach for solving forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas required at each time step of the model to update the error covariance matrix. Therefore the suboptimal approach seems to be a more suitable choice. Various suboptimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different suboptimal techniques. Those aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables, and high parametric uncertainty. This research study focuses on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both water mass balance and surface energy balance, so it is able to assimilate a wide variety of datasets, such as traditional hydrometric on-ground measurements or land surface temperature retrievals from satellite. The experiments that we present concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low-spatial-resolution version of the hydrologic model (3.2 km). A new Kalman filter based algorithm is presented: this filter tries to address the main challenges of hydrological modeling uncertainty.
In the forecast step, the proposed filter uses a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach with propagation of both deterministic and stochastic ensembles to improve robustness and convergence properties. Then, through a POD reduction from control theory, we compute a reduced-order forecast covariance matrix. In the analysis step the filter uses a Local Ensemble (LE) Kalman filter approach. We modify the LE Kalman filter assimilation scheme and adapt its formulation to the POD-reduced subspace propagated in the forecast step. In this way, assimilation of observations is performed only along the maximum-covariance directions of the model error. The efficiency of this technique is then weighed in terms of hydrometric forecast accuracy in a preliminary convergence test of a synthetic rainfall event against a real rainfall event.
Optimized Beam Sculpting with Generalized Fringe-Rate Filters
Parsons, Aaron R; Ali, Zaki S; Cheng, Carina
2015-01-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer's fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduc...
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
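The selection idea can be sketched as a generalized eigenvalue problem between the measurement-space signal covariance and the noise covariance, keeping the top-k eigenvectors as the measurement subspace. This is a hedged stand-in for the paper's criterion: the matrices H, P, R and the ranking rule below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def informative_subspace(H, P, R, k):
    """Rank measurement directions by signal-to-noise content.

    Solves the generalized eigenproblem (H P H^T) v = lambda R v and keeps
    the k eigenvectors with the largest eigenvalues -- the directions where
    the state uncertainty projected into measurement space dominates noise.
    """
    S = H @ P @ H.T                  # signal covariance in measurement space
    w, V = eigh(S, R)                # generalized eigendecomposition (ascending)
    order = np.argsort(w)[::-1]      # largest eigenvalues first
    return V[:, order[:k]], w[order[:k]]
```

When one measurement carries far more state uncertainty than the others relative to its noise, the leading eigenvector concentrates on it, which is the desired tradeoff between accuracy and dimension.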
Design of optimal correlation filters for hybrid vision systems
NASA Technical Reports Server (NTRS)
Rajan, Periasamy K.
1990-01-01
Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out the object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, coupled filters. This report presents some of these algorithms in detail along with their derivations.
Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
AN ADAPTIVE PROJECTION ALGORITHM FOR MULTIRATE FILTER BANK OPTIMIZATION
Regalia, Phillip A.
The first step uses well-known Rayleigh-quotient-type algorithms (e.g., [6], [7]) to obtain an extremal eigenvector of the input autocorrelation matrix. The lossless filter is then adapted to fit one of its impulse responses to this eigenvector; the algorithm aims to project the "error" in the eigenvector fit onto the orthogonal complement subspace
Optimization of filtering schemes for broadband astro-combs.
Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X
2012-10-22
To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error. PMID:23187265
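The line-suppression bookkeeping for cascaded Fabry-Perot cavities can be sketched with the lossless Airy transmission function. The finesse and repetition-rate values below are illustrative assumptions, and the amplifier nonlinear phase, which the paper shows degrades this suppression, is not modeled.

```python
import numpy as np

def fp_transmission(f, fsr, finesse):
    """Power transmission of an ideal (lossless) Fabry-Perot cavity (Airy function)."""
    coeff = (2 * finesse / np.pi) ** 2
    return 1.0 / (1.0 + coeff * np.sin(np.pi * f / fsr) ** 2)

f_rep = 1.0        # source-comb line spacing (arbitrary units)
m = 16             # repetition-rate multiplication factor
finesse = 200.0    # illustrative cavity finesse

# Suppression of the first unwanted source-comb line (offset f_rep from a
# resonance) for one cavity, and for two identical cascaded cavities.
t1 = fp_transmission(f_rep, m * f_rep, finesse)
t2 = t1 ** 2                       # cascaded power transmissions multiply
db1 = 10 * np.log10(t1)
db2 = 10 * np.log10(t2)
```

Cascading identical cavities doubles the suppression in decibels, which is the linear-regime motivation for the two-cavity filtering schemes compared in the paper.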
Optimal Step Nonrigid ICP Algorithms for Surface Registration Brian Amberg
Vetter, Thomas
We show how to extend the ICP framework to nonrigid registration. The nonrigid ICP framework allows the use of different regularisations, as long as they have an adjustable
Broadband quasi-Chebyshev bandpass filters with multimode stepped-impedance resonators (SIRs)
Yi-Chyun Chiou; Jen-Tsai Kuo; Eisenhower Cheng
2006-01-01
Planar broadband bandpass filters of order up to 9 are synthesized based on the multimode property of stepped-impedance resonators (SIRs). Using transmission-line theory, the modal frequencies of the SIRs are calculated from the impedance and length ratios of their high- and low-Z segments. In the synthesis, the SIR coupling schemes are determined by the split mode
Adaptive mesh optimization for improved one-step forming
NASA Astrophysics Data System (ADS)
Hu, Ping; Liu, Mingzeng; Li, Baojun; Zhang, Xiangkui; Shen, Guozhe
2013-05-01
To reduce simulation time and improve solution accuracy, an adaptive mesh optimization method is proposed for one-step forming simulation of auto-body panels. For a given auto-body part model, the state variables of the model, such as thickness, strain and stress, are first estimated through the one-step inverse forming method. Incorporating the distribution of these physical properties and the geometric characteristics of the model, an adaptive node placement technique is presented, in which a high mesh density is used in regions with high curvature or strain rate and a sparse mesh elsewhere. Simultaneously, features of the original mesh, such as welding points, stiffeners for reinforcement, forming lines, or holes in the interior of the model, are preserved during mesh optimization. Finally, numerical examples demonstrate that the proposed optimization method exhibits good performance.
Shi, Lei; Qin, Jia; Reif, Roberto; Wang, Ruikang K.
2013-01-01
We propose a simple and optimized method for acquiring a wide velocity range of blood flow using Doppler optical microangiography. After characterizing the behavior of the scanner in the fast scan axis, a step-scanning protocol is developed by utilizing repeated A-scans at each step. Multiple velocity range images are obtained by the high-pass filtering and Doppler processing of complex signals between A-scans within each step with different time intervals. A phase variance mask is then employed to segment meaningful Doppler flow signals from noisy phase background. The technique is demonstrated by imaging in vivo mouse brain with skull left intact to provide bidirectional images of cerebral blood flow with high quality and wide velocity range. PMID:24165741
Abate, A; Pressello, M C; Benassi, M; Strigari, L
2009-12-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning. PMID:19920309
Na-Faraday rotation filtering: The optimal point
NASA Astrophysics Data System (ADS)
Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja
2014-10-01
Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can far outperform any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing.
A Neural Network-Based Optimal Spatial Filter Design Method for Motor Imagery Classification
Yuksel, Ayhan; Olmez, Tamer
2015-01-01
In this study, a novel spatial filter design method is introduced. Spatial filtering is an important processing step for feature extraction in motor imagery-based brain-computer interfaces. This paper introduces a new motor imagery signal classification method combined with spatial filter optimization. We simultaneously train the spatial filter and the classifier using a neural network approach. The proposed spatial filter network (SFN) is composed of two layers: a spatial filtering layer and a classifier layer. These two layers are linked to each other with non-linear mapping functions. The proposed method addresses two shortcomings of the common spatial patterns (CSP) algorithm. First, CSP aims to maximize the between-classes variance while ignoring the minimization of within-classes variances. Consequently, the features obtained using the CSP method may have large within-classes variances. Second, the maximizing optimization function of CSP increases the classification accuracy indirectly because an independent classifier is used after the CSP method. With SFN, we aimed to maximize the between-classes variance while minimizing within-classes variances and simultaneously optimizing the spatial filter and the classifier. To classify motor imagery EEG signals, we modified the well-known feed-forward structure and derived forward and backward equations that correspond to the proposed structure. We tested our algorithm on simple toy data. Then, we compared the SFN with conventional CSP and its multi-class version, called one-versus-rest CSP, on two data sets from BCI competition III. The evaluation results demonstrate that SFN is a good alternative for classifying motor imagery EEG signals with increased classification accuracy. PMID:25933101
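The combined objective the SFN pursues, maximizing between-classes variance while minimizing within-classes variances, can be illustrated with a Fisher-style variance ratio for a single spatial filter. This toy function is a sketch of the criterion only, not the paper's two-layer network or its training procedure:

```python
import numpy as np

def variance_ratio(w, X1, X2):
    """Score a single spatial filter w on trials X1 (class 1) and X2 (class 2):
    between-classes separation divided by the sum of within-classes variances.
    Unlike plain CSP, the denominator explicitly penalizes within-class spread."""
    v1, v2 = X1 @ w, X2 @ w                      # spatially filtered trials
    between = (v1.mean() - v2.mean()) ** 2       # between-classes term
    within = v1.var() + v2.var()                 # within-classes term
    return between / (within + 1e-12)

X1 = np.array([[1.0, 0.0], [1.2, 0.1]])      # toy class-1 trials
X2 = np.array([[-1.0, 0.0], [-0.8, -0.1]])   # toy class-2 trials
good = variance_ratio(np.array([1.0, 0.0]), X1, X2)   # discriminative direction
bad = variance_ratio(np.array([0.0, 1.0]), X1, X2)    # uninformative direction
```

A network such as the SFN would ascend a criterion of this kind with gradient updates while simultaneously training the classifier layer.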
Woodside, C. Murray
and the software, which implies that the model should track changes in the system. A substantial theory of optimal Kalman Filtering to track the parameters of a simple queueing network model, in response to a step change a tracking model, and model-based decision-making for control. One level of the hierarchy is illustrated
Degeneracy, frequency response and filtering in IMRT optimization
NASA Astrophysics Data System (ADS)
Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus
2004-07-01
This paper attempts to answer some questions that remain either poorly understood or poorly documented in the literature on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and the frequency response of optimizations; the effects of initial beamlet fluence assignment and stopping point; what filtering of an optimized beamlet map actually does; and how image analysis could help to obtain better optimizations. Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools, and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed, and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.
Optimal color image restoration: Wiener filter and quaternion Fourier transform
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.
2015-03-01
In this paper, we consider the model of quaternion signal degradation in which the signal is convolved with a blur and corrupted by additive noise. The classical form of this model leads to the optimal Wiener filter, where optimality is with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative, and the quaternion Fourier transform does not map convolution to multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise, and the optimal filtration of signals and images in the frequency domain and in the quaternion space.
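For ordinary (commutative) signals, the frequency-domain Wiener solution the paper starts from can be sketched in a few lines. The function below assumes a 1-D circular-convolution model with known flat signal and noise power spectra, a simplification of the quaternion case discussed here:

```python
import numpy as np

def wiener_deconvolve(y, h, noise_power, signal_power):
    """Frequency-domain Wiener deconvolution for y = h (*) x + n with circular
    convolution and flat power spectra: G = H* Pxx / (|H|^2 Pxx + Pnn),
    the linear estimator minimizing mean square error under these assumptions."""
    H = np.fft.fft(h, n=len(y))
    G = np.conj(H) * signal_power / (np.abs(H) ** 2 * signal_power + noise_power)
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# Identity blur: with negligible noise the input is recovered almost exactly.
restored = wiener_deconvolve(np.array([0.0, 1.0, 0.0, 0.0]), np.array([1.0]),
                             noise_power=1e-6, signal_power=1.0)
```

For a non-trivial blur, frequencies where H is near zero are attenuated toward zero rather than amplified, which is exactly the regularizing behavior that distinguishes the Wiener filter from naive inverse filtering.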
Multidisciplinary Analysis and Optimization Generation 1 and Next Steps
NASA Technical Reports Server (NTRS)
Naiman, Cynthia Gutierrez
2008-01-01
The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program s Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained including documentation available, potential partner collaborations, and next steps in FY09.
FIR filter optimization for video processing on FPGAs
NASA Astrophysics Data System (ADS)
Kumm, Martin; Fanghänel, Diana; Möller, Konrad; Zipf, Peter; Meyer-Baese, Uwe
2013-12-01
Two-dimensional finite impulse response (FIR) filters are an important component in many image and video processing systems. The processing of complex video applications in real time requires high computational power, which can be provided using field programmable gate arrays (FPGAs) due to their inherent parallelism. The most resource-intensive components in computing FIR filters are the multiplications of the folding operation. This work proposes two optimization techniques for high-speed implementations of the required multiplications with the least possible number of FPGA components. Both methods use integer linear programming formulations which can be optimally solved by standard solvers. In the first method, a formulation for the pipelined multiple constant multiplication problem is presented. In the second method, multiplication structures based on look-up tables are also taken into account. Due to the low coefficient word size in video processing filters of typically 8 to 12 bits, an optimal solution is found for most of the filters in the benchmark used. A complexity reduction of 8.5% for a Xilinx Virtex 6 FPGA could be achieved compared to state-of-the-art heuristics.
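The cost being minimized, adders implementing constant multiplications, can be illustrated with a naive per-coefficient shift-and-add decomposition. The ILP formulations in the paper find cheaper shared structures than this baseline; the function names are illustrative:

```python
def shift_add_multiply(x, coeff):
    """Multiply integer x by a non-negative constant using only shifts and adds,
    the operation whose hardware cost the ILP formulations minimize."""
    acc, bit = 0, 0
    while coeff:
        if coeff & 1:
            acc += x << bit     # one partial product = one adder input
        coeff >>= 1
        bit += 1
    return acc

def naive_adder_count(coeff):
    """Adders in the plain binary decomposition: one per set bit after the first.
    Optimal multiple-constant-multiplication solutions share intermediate sums
    across coefficients and typically need fewer."""
    return max(bin(coeff).count("1") - 1, 0)
```

For example, multiplying by 11 (binary 1011) costs two adders here, while an MCM solver can reuse a sum such as x + 2x across several coefficients of the filter.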
A filter-based evolutionary algorithm for constrained optimization.
Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann
2004-02-01
We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
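The filter's dominance rule on pairs of (objective value, constraint violation) can be sketched as follows. This is a generic filter-method fragment under assumed tie-breaking, not the authors' exact FEA:

```python
def dominates(a, b):
    """Filter dominance on pairs (objective value, constraint violation):
    a dominates b when it is no worse in both components and differs."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_filter(filt, point):
    """Insert a candidate into the filter: reject it if dominated, otherwise
    add it and drop every incumbent it dominates."""
    if any(dominates(p, point) for p in filt):
        return filt
    return [p for p in filt if not dominates(point, p)] + [point]

filt = []
for candidate in [(1.0, 0.5), (2.0, 0.0), (3.0, 1.0), (0.5, 0.25)]:
    filt = update_filter(filt, candidate)
```

The surviving set is mutually non-dominated, which is the partial order an FEA imposes on offspring when deciding acceptance.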
NASA Technical Reports Server (NTRS)
U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy
2005-01-01
We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter length is greater than a multiple of the quarter wavelength at the center passband frequency (lambda_g/4), the proposed filter length is less than (n + 1)·lambda_g/8 for an nth-order filter. The filter's spurious response and physical dimensions are controlled by the step impedance ratio (R) between the two transmission line sections of each lambda_g/4 resonator. The experimental result shows that, with R of 0.2, the out-of-band attenuation is greater than 40 dB, and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.
An exact algorithm for optimal MAE stack filter design.
Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior
2007-02-01
We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly. PMID:17269638
Design and optimization of space-variant photonic crystal filters.
Rumpf, Raymond C; Mehta, Alok; Srinivasan, Pradeep; Johnson, Eric G
2007-08-10
A space-variant photonic crystal filter is designed and optimized that may be placed over a detector array to perform filtering functions tuned for each pixel. The photonic crystal is formed by etching arrays of holes through a multilayer stack of alternating high and low refractive index materials. Position of a narrow transmission notch within a wide reflection band is varied across the device aperture by adjusting the diameter of the holes. Numerical simulations are used to design and optimize the geometry of the photonic crystal. As a result of physics inherent in the etching process, the diameter of the holes reduces with depth, producing a taper. Optical performance was found to be sensitive to the taper, but a method for compensation was developed where film thickness is varied through the device. PMID:17694124
A multi-dimensional procedure for BNCT filter optimization
Lille, R.A.
1998-02-01
An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A local optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model, and an improvement of 50% in the dose ratio was obtained.
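A minimal sketch of one density update of the kind described, a gradient step with box bounds and the sum-of-fractions zone constraint, might look like the following. The rescaling used here is one simple way to restore feasibility and is an assumption, not the code's documented rule:

```python
import numpy as np

def density_step(rho, grad, lr=0.2, lower=0.0, upper=1.0):
    """One gradient-ascent update of nuclide density fractions in a zone:
    step along the dose-ratio gradient, clip to the per-nuclide bounds, then
    rescale if the fractions exceed the sum <= 1.0 zone constraint."""
    rho = np.clip(rho + lr * grad, lower, upper)
    total = rho.sum()
    if total > 1.0:
        rho = rho / total   # simple projection back to the feasible region
    return rho

rho_new = density_step(np.array([0.5, 0.5]), np.array([1.0, 1.0]))
```

Iterating updates like this until the stated stopping conditions hold in every zone yields the local optimum the abstract describes.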
Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.
McMinn, Brian R
2013-11-01
Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique to each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and AdV41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses in both surface and drinking waters. PMID:23796954
Stryker, Michael
Supplementary Discussion. Optimal filtering in a nonlinear system. The simple argument leading filter and ensemble are swapped, Fig. 3). The optimality argument (1) for a nonlinear system analyzing system, the redundancy reduction arguments predict [40, 41, 47] that neural filters should completely remove
Optimal design of one-dimensional photonic crystal filters using minimax optimization approach.
Hassan, Abdel-Karim S O; Mohamed, Ahmed S A; Maghrabi, Mahmoud M T; Rafat, Nadia H
2015-02-20
In this paper, we introduce a simulation-driven optimization approach for achieving the optimal design of electromagnetic wave (EMW) filters consisting of one-dimensional (1D) multilayer photonic crystal (PC) structures. The PC layers' thicknesses and/or material types are considered as designable parameters. The optimal design problem is formulated as a minimax optimization problem that is entirely solved by making use of readily available software tools. The proposed approach allows for the consideration of problems of higher dimension than usually treated before. In addition, it can proceed starting from poor initial design points. The validity, flexibility, and efficiency of the proposed approach are demonstrated by applying it to obtain the optimal design of two practical examples. The first is a (SiC/Ag/SiO2)^N wide bandpass optical filter operating in the visible range. The second example is an (Ag/SiO2)^N EMW low-pass spectral filter, working in the infrared range, which is used for enhancing the efficiency of thermophotovoltaic systems. The approach shows a good ability to converge to the optimal solution, for different design specifications, regardless of the starting design point. This ensures that the approach is robust and general enough to be applied for obtaining the optimal design of 1D photonic crystals in all their promising applications. PMID:25968205
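The forward model such a design loop must evaluate, the transmittance of a 1-D layer stack, can be sketched with the standard characteristic-matrix method at normal incidence. This is textbook thin-film physics, not the authors' software:

```python
import numpy as np

def stack_transmittance(indices, thicknesses, wavelength, n_in=1.0, n_out=1.0):
    """Normal-incidence transmittance of a 1-D multilayer via the standard
    characteristic (transfer) matrix method; lossless, non-magnetic layers."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        phi = 2 * np.pi * n * d / wavelength        # layer phase thickness
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    t = 2 * n_in / ((M[0, 0] + M[0, 1] * n_out) * n_in + (M[1, 0] + M[1, 1] * n_out))
    return (n_out / n_in) * abs(t) ** 2

empty = stack_transmittance([], [], 500e-9)                  # no layers: T = 1
quarter = stack_transmittance([2.0], [500e-9 / 8], 500e-9)   # single quarter-wave layer
```

A minimax design loop would evaluate this transmittance over a grid of wavelengths and minimize the worst-case deviation from the target response with respect to the layer thicknesses and materials.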
Tri-band superconducting filter using stub-loaded stepped-impedance resonators
NASA Astrophysics Data System (ADS)
Feng, Yuning; Guo, Xubo; Wei, Bin; Zhang, Xiaoping; Song, Fei; Xu, Zhan; Cao, Bisong
2015-05-01
A stub-loaded stepped-impedance resonator (SLSIR) with three resonant modes is proposed to design a tri-band bandpass filter (BPF). The couplings between adjacent resonators at different resonant modes can be controlled independently by properly selecting the geometric parameters of the resonator. A dual-feeding structure is used to realize the required external couplings of the three passbands simultaneously. A fourth-order tri-band BPF with the passbands centered at 1.73, 2.40 and 3.45 GHz, respectively, is successfully designed and fabricated with superconducting thin films. The measured results exhibit high performance and agree well with the simulated ones.
Optimizing filtering for fast measurements in circuit QED
NASA Astrophysics Data System (ADS)
Gambetta, Jay; Dial, Oliver; Cross, Andrew; McClure, Douglas; Chow, Jerry; Steffen, Matthias
2014-03-01
Quantum error correction schemes, for example the popular surface code, involve running interleaved gate operations and measurements on a set of physical qubits. For this reason it is important to have fast measurements. In a fast measurement most of the information will be in the transients of the signal. In this talk we present a filtering technique to extract optimal qubit state information from the transient response of the resonator. I will also discuss techniques for rapidly driving the readout resonator to its ground state independent of the qubit state. We acknowledge support from IARPA under contract W911NF-10-1-0324.
Performance optimization of Gaussian apodized fiber Bragg grating filters in WDM systems
João L. Rebola; A. V. T. Cartazo
2002-01-01
Fiber Bragg gratings (FBGs) with Gaussian apodization profiles and zero-dc index change are studied extensively and optimized for optical filtering in 40-Gb/s single-channel and WDM systems with channel spacings of 100 and 200 GHz, for a single filter and for a cascade of optical filters. In the single-filter case, the optimized FBG leads practically to the same performance for
Quantum demolition filtering and optimal control of unstable systems.
Belavkin, V P
2012-11-28
A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called Cat Swarm Optimization (CSO), is applied to determine the optimal impulse response coefficients of FIR low-pass, high-pass, band-pass and band-stop filters, trying to meet the respective ideal frequency response characteristics. CSO was devised by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in each iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents how well the cat fits the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats; CSO keeps the best solution found until the end of the iterations. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The results confirm the superiority of CSO for solving FIR filter design problems: the performances of the CSO-designed FIR filters prove superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters. PMID:23958491
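A heavily simplified seeking/tracing skeleton of CSO might look like this. The parameter names, the PSO-style damped velocity update and the position clamp are illustrative assumptions rather than the paper's exact sub-models:

```python
import random

def cso_minimize(f, dim, n_cats=20, iters=200, mixture_ratio=0.3, srd=0.2, seed=1):
    """Toy Cat Swarm Optimization: each iteration a cat either traces (moves
    toward the best position with a damped velocity) or seeks (tries a small
    random perturbation and keeps it only if it improves). Elitist best kept."""
    rng = random.Random(seed)
    cats = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_cats)]
    vel = [[0.0] * dim for _ in range(n_cats)]
    best = min(cats, key=f)[:]
    for _ in range(iters):
        for i, cat in enumerate(cats):
            if rng.random() < mixture_ratio:            # tracing mode
                for d in range(dim):
                    vel[i][d] = 0.7 * vel[i][d] + rng.random() * 1.5 * (best[d] - cat[d])
                    cat[d] = max(-10.0, min(10.0, cat[d] + vel[i][d]))
            else:                                       # seeking mode
                cand = [c * (1.0 + rng.uniform(-srd, srd)) for c in cat]
                if f(cand) < f(cat):
                    cats[i] = cand
        best = min(cats + [best], key=f)[:]
    return best

best = cso_minimize(lambda v: sum(x * x for x in v), dim=2)
```

For FIR design, `f` would instead measure the deviation of the candidate coefficients' frequency response from the ideal response.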
On an Optimal Number of Time Steps for a Sequential Solution of an Elliptic-Hyperbolic System
We provide two procedures aimed at the estimation of an optimal set of time steps for the coupled system, and show that the resulting distribution of time steps yields better results than using equidistant time steps.
NASA Astrophysics Data System (ADS)
Chomtong, P.; Akkaraekthalin, P.
2014-05-01
This paper presents a triple-band bandpass filter for applications in GSM, WiMAX, and WLAN systems. The proposed filter comprises tri-section step-impedance and capacitively loaded step-impedance resonators, which are combined using the cross-coupling technique. Additionally, tapered lines are connected at both ports of the filter in order to enhance matching at the tri-band resonant frequencies. The filter can operate at the resonant frequencies of 1.8 GHz, 3.7 GHz, and 5.5 GHz. At these frequencies, the measured values of S11 are -17.2 dB, -33.6 dB, and -17.9 dB, while the measured values of S21 are -2.23 dB, -2.98 dB, and -3.31 dB, respectively. Moreover, the presented filter has a compact size compared with conventional open-loop cross-coupling triple-band bandpass filters.
Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter
NASA Astrophysics Data System (ADS)
Ito, A.
2014-12-01
Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because of poor model constraint by observational data. Development of advanced methodologies for data-model fusion, or data assimilation, is therefore an important task for reducing the uncertainties and improving model predictability. In this study, I apply the particle filter (or sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is a data-assimilation method in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). It is computationally intensive but applicable to nonlinear systems, which is an advantage in comparison with other techniques such as the ensemble Kalman filter and variational methods. At several sites, I used flux measurement data of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this method would also be effective for assimilating satellite observational data. In the non-sequential case, annual or long-term mean budgets were adjusted to observations; this method would also be effective for assimilating carbon stock data. Although technical issues remain (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.
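A minimal sequential Monte Carlo parameter-estimation loop of the kind described (weight by likelihood, resample, jitter) can be sketched as follows. The Gaussian likelihood, multinomial resampling and jitter scale are illustrative choices, not VISIT's actual configuration:

```python
import numpy as np

def pf_estimate(obs, simulate, n_particles=500, noise_sd=0.5, seed=0):
    """Estimate one model parameter by sequential Monte Carlo: weight particles
    by a Gaussian likelihood of each observation, resample, add small jitter."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)         # draw from the prior
    for t, y in enumerate(obs):
        predicted = simulate(particles, t)                # run the model forward
        w = np.exp(-0.5 * ((y - predicted) / noise_sd) ** 2)
        w /= w.sum()
        keep = rng.choice(n_particles, n_particles, p=w)  # resampling step
        particles = particles[keep] + rng.normal(0.0, 0.02, n_particles)
    return particles.mean()

# Recover the slope a = 1.2 of the toy model y_t = a * t from clean observations.
estimate = pf_estimate([1.2 * t for t in range(6)], lambda p, t: p * t)
```

In the ecosystem-model setting, `simulate` would be a run of the process model and `obs` the CO2 flux time series.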
Gan, Wei; Liu, Xuemin; Sun, Jing
2015-02-01
This paper presents a regression evaluation index intelligent filter method (REIFM) for quick optimization of chromatographic separation conditions. The hierarchical chromatography response function was used as the chromatography-optimization index. The regression model was established by orthogonal regression design. The chromatography-optimization index was filtered by the intelligent filter program, and the optimized separation conditions were obtained. The experimental results showed that the average relative deviation between the experimental values and the predicted values was 0.18% at the optimum, and the optimization results were satisfactory. PMID:25989685
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the denitrification biological filter (DNBF) + biological aerated filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934
Optimal filters for the construction of the ensemble pulsar time
NASA Astrophysics Data System (ADS)
Rodin, Alexander E.
2008-07-01
An algorithm for the ensemble pulsar time based on the optimal Wiener filtration method has been constructed. This algorithm allows the separation of the contributions of the atomic clock and the pulsar itself to the post-fit pulsar timing residuals. Filters were designed using the cross- and auto-covariance functions of the timing residuals. The method has been applied to the timing data of millisecond pulsars PSR B1855+09 and B1937+21 and allowed the filtering out of the atomic-scale component from the pulsar data. Direct comparison of the terrestrial time TT(BIPM06) and the ensemble pulsar time PTens revealed that the fractional instability of TT(BIPM06)-PTens is equal to sigma_z = (0.8 +/- 1.9) × 10^-15. Based on the sigma_z statistics of TT(BIPM06)-PTens, a new limit on the energy density of the gravitational wave background was calculated, equal to Omega_g h^2 ~ 3 × 10^-9.
Design and optimization of digital filters without multipliers
NASA Astrophysics Data System (ADS)
Lueder, E.
1983-10-01
A method to design digital filters without multipliers is presented. The process requires no approximations. First, a second-order structure, such as the first canonic form, is realized with coefficients in CSD code. As a rule, equivalent structures then permit the reduction of either the number of adders or the time τy needed to calculate one sample at the output or, in some cases, permit the reduction of both simultaneously. As an example, a part of a PCM lowpass with 50 Hz suppression is designed and optimized. For this circuit τy = 2τA, with τA the time needed for one addition; τA = 30 ns allows for a sampling frequency of up to 16.5 MHz.
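A CSD (canonic signed digit) coefficient replaces each multiplier with a few shift-and-add operations. A minimal sketch in integer fixed-point arithmetic, with illustrative coefficients (not taken from the paper's PCM lowpass):

```python
def csd_mult(x, terms):
    """Multiply integer x by a constant expressed in CSD form.
    terms: list of (sign, shift) pairs; constant = sum(sign * 2**shift)."""
    acc = 0
    for sign, shift in terms:
        acc += sign * (x << shift)   # one barrel shift plus one add/subtract
    return acc

# 7 = 2^3 - 2^0: a single subtraction replaces a full multiply.
assert csd_mult(5, [(1, 3), (-1, 0)]) == 5 * 7
# 23 = 2^4 + 2^3 - 2^0: three shift-and-add terms.
assert csd_mult(6, [(1, 4), (1, 3), (-1, 0)]) == 6 * 23
```

The number of nonzero CSD digits per coefficient is what drives the adder count, which is why equivalent structures with fewer or shorter CSD terms reduce either hardware or the per-sample time τy.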
Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V.
2015-01-01
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10 times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690
Optimal initial perturbations for El Nino ensemble prediction with ensemble Kalman filter
Kang, In-Sik
2009
A method for selecting optimal initial perturbations is developed within the framework of an ensemble Kalman filter (EnKF). Among the initial conditions generated by EnKF, ensemble members with fast
Optimizing LPC filter parameters for multi-pulse excitation
Sharad Singhal; Bishnu S. Atal
1983-01-01
Present LPC analysis procedures assume that the input to the all-pole filter is white; the filter parameters are obtained by minimizing the mean-squared error between the filter output samples and their values obtained by linear prediction on the basis of past output samples. It is well known that these procedures often do not yield accurate filter parameters for periodic (or
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul
2015-03-01
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
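The classical baseline that such multi-objective designs improve upon is the equal-correlation-peak synthetic discriminant function (SDF) filter, h = X(XᵀX)⁻¹c, where the columns of X are the vectorized training templates and c holds the prescribed correlation peaks. A two-template sketch with the 2×2 inverse written out (the templates below are hypothetical toy vectors):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sdf_filter(x1, x2, c1=1.0, c2=1.0):
    """Two-template SDF composite filter: h = X (X^T X)^{-1} c."""
    a, b, d = dot(x1, x1), dot(x1, x2), dot(x2, x2)
    det = a * d - b * b
    w1 = ( d * c1 - b * c2) / det
    w2 = (-b * c1 + a * c2) / det
    return [w1 * u + w2 * v for u, v in zip(x1, x2)]

h = sdf_filter([1, 1, 0], [1, 0, 1])
# By construction the filter gives the prescribed peak on each template:
assert abs(dot(h, [1, 1, 0]) - 1.0) < 1e-12
assert abs(dot(h, [1, 0, 1]) - 1.0) < 1e-12
```

The paper's contribution is, in effect, to replace the designer's ad hoc choice of which templates enter X with an iterative combinatorial search over a large template space, scored against several competing criteria.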
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm has no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and particle swarm optimization (PSO) algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is thus suited to cases where accuracy is more essential than convergence speed.
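The teacher and learner phases of TLBO can be sketched compactly. This is a hedged illustration on a stand-in objective (the sphere function replaces the actual IIR output-matching error; population size, iteration count, bounds and seed are arbitrary choices, not the paper's settings):

```python
import random

def sphere(x):  # stand-in for the IIR parameter-matching error
    return sum(v * v for v in x)

def tlbo(obj, dim=4, pop=10, iters=50, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [obj(x) for x in X]
    f_start = min(F)
    for _ in range(iters):
        # Teacher phase: pull each learner toward the best via the class mean.
        best = X[min(range(pop), key=lambda i: F[i])]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            Tf = rng.choice([1, 2])  # teaching factor
            cand = [X[i][d] + rng.random() * (best[d] - Tf * mean[d])
                    for d in range(dim)]
            fc = obj(cand)
            if fc < F[i]:            # greedy acceptance
                X[i], F[i] = cand, fc
        # Learner phase: move toward a better peer, away from a worse one.
        for i in range(pop):
            j = rng.randrange(pop)
            if j == i:
                continue
            s = 1.0 if F[i] < F[j] else -1.0
            cand = [X[i][d] + s * rng.random() * (X[i][d] - X[j][d])
                    for d in range(dim)]
            fc = obj(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return f_start, min(F)

f0, f1 = tlbo(sphere)
assert 0.0 <= f1 <= f0   # greedy acceptance guarantees monotone improvement
```

Note there is no inertia weight, acceleration constant or crossover rate to tune here, which is what "algorithm-specific parameter-less" refers to; only the generic population size and iteration count remain.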
Optimizing spatial filters with kernel methods for BCI applications
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Tang, Jianjun; Yao, Li
2007-11-01
Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step in BCI technology is to find a reliable method to detect particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract discriminative patterns from non-stationary EEG signals based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is to perform a nonlinear form of CSP by using kernel methods that can efficiently compute the common and distinct components in high-dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high classification rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.
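The linear CSP step that the kernel variant generalizes solves the generalized eigenproblem C₁w = λ(C₁+C₂)w, where C₁ and C₂ are the class covariance matrices. A two-channel sketch with the 2×2 algebra written out (the covariances below are illustrative; a real pipeline estimates them from band-passed trials):

```python
def csp_top_filter(C1, C2):
    """Spatial filter maximizing class-1 variance relative to total variance.
    2x2 case only: solves C1 w = lambda (C1 + C2) w directly."""
    S = [[C1[0][0] + C2[0][0], C1[0][1] + C2[0][1]],
         [C1[1][0] + C2[1][0], C1[1][1] + C2[1][1]]]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]
    # M = Sinv @ C1; its top eigenvector is the CSP filter.
    M = [[Sinv[i][0] * C1[0][j] + Sinv[i][1] * C1[1][j] for j in range(2)]
         for i in range(2)]
    tr = M[0][0] + M[1][1]
    dt = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    lam = (tr + (tr * tr - 4.0 * dt) ** 0.5) / 2.0   # largest eigenvalue
    v = [M[0][1], lam - M[0][0]]
    if abs(v[0]) + abs(v[1]) < 1e-12:                # degenerate row, use other
        v = [lam - M[1][1], M[1][0]]
    n = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return [v[0] / n, v[1] / n]

# Class 1 has high variance on channel 0, class 2 on channel 1:
w = csp_top_filter([[4.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 4.0]])
assert abs(abs(w[0]) - 1.0) < 1e-9 and abs(w[1]) < 1e-9
```

The kernel version of the paper replaces the raw channel space with an implicit feature space, so the same eigenproblem is solved on kernel matrices instead of channel covariances.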
Optimal stack filters under rank selection and structural constraints
P. Kuosmanen; J. Astola
1995-01-01
A new expression for the moments about the origin of the output of stack filtered data is derived in this paper. This expression is based on the A and M vectors containing the well-known coefficients Ai of stack filters and numbers M(?, ?, N, i) defined in this paper. The noise attenuation capability of any stack filter can now be
Optimal Sidelobe Reduction of Matched Filter for Bistatic Sonar
Bo Lei; Kunde Yang; Yong Wang
2012-01-01
For bistatic sonar, the weak target signal is often buried by the sidelobes of the strong direct blast after pulse compression. A method is proposed in this paper to suppress the sidelobes of the matched filter output. The basic idea is to design an FIR filter at the output of the matched filter, so as to minimize the ISL (Integrated
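The quantity such a post-matched-filter FIR stage minimizes, the integrated sidelobe level, can be illustrated on a Barker-13 code, whose matched-filter output has peak 13 and sidelobes of magnitude at most 1 (this sketch computes the metric only; it is not the paper's suppression filter):

```python
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(a):
    """One-sided aperiodic autocorrelation r[l], l = 0..N-1.
    This is the matched-filter output; r[-l] = r[l] by symmetry."""
    N = len(a)
    return [sum(a[n] * a[n + l] for n in range(N - l)) for l in range(N)]

r = autocorr(BARKER13)
sidelobes = r[1:]
isl = 2 * sum(v * v for v in sidelobes)   # count both sides of the peak
assert r[0] == 13
assert max(abs(v) for v in sidelobes) == 1
assert isl == 12                          # six unit sidelobes on each side
```

A mismatched FIR filter lengthens the receive filter and reallocates this sidelobe energy, trading a small peak (SNR) loss for a lower ISL so that weak echoes near the direct blast become visible.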
A Model for Optimizing Step Size of Learning Tasks in Competency-based Multimedia Practicals.
ERIC Educational Resources Information Center
Nadolski, Rob J.; Kirschner, Paul A.; van Merrienboer, Jeroen J. G.; Hummel, Hans G. K.
2001-01-01
Presents a two-phase instructional design model that focuses on optimizing step size in whole-task approaches to learning complex, mainly non-recurrent, cognitive skills. Step size in a multiple-step whole-task approach (needed for the process worksheets) is determined on the basis of estimated part-task complexity. A developmental study of the…
Sekar, R.
Automatic Synthesis of Filters to Discard Buffer Overflow Attacks: A Step Towards Realizing Self
Buffer overflows have become the most common target; recovery from attacks can be very fast. We tested our approach on 8 buffer overflow attacks reported
Tarek A. Elmitwalli; Kim L. T. Oahn; Grietje Zeeman; Gatze Lettinga
2002-01-01
The treatment of domestic sewage at a low temperature of 13°C was investigated in a two-step system consisting of an anaerobic filter (AF) + an anaerobic hybrid (AH) reactor operated at different hydraulic retention times (HRTs). The AF reactor was efficient in the removal of suspended COD, viz. 81%, 58% and 57% at an HRT of, respectively, 4, 2 and 3 h. For
A Two-Step Filtering approach for detecting maize and soybean phenology with time-series MODIS data
Gitelson, Anatoly
Keywords: soybean, MODIS, shape-model fitting. The crop developmental stage represents essential information for detecting the phenological stages of maize and soybean from time-series Wide Dynamic Range Vegetation Index
A. Abate; M. C. Pressello; M. Benassi; L. Strigari
2009-01-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied
Delayed, multi-step inverse structural filter for robust force identification
NASA Astrophysics Data System (ADS)
Allen, Matthew S.; Carne, Thomas G.
2008-07-01
An extension of the inverse structural filter (ISF) force reconstruction algorithm is presented that utilizes data from multiple time steps simultaneously to improve the accuracy and robustness of the ISF. The ISF algorithm uses a discrete-time system model of a structure and the measured response to estimate the forces causing the response. The proposed algorithm, dubbed the delayed, multi-step ISF (DMISF), is compared with the original ISF and with the sum of weighted accelerations technique (SWAT) and the classical frequency domain (FD) inverse method in terms of both accuracy and sensitivity to errors in the forward system model. The SWAT and ISF algorithms are capable of estimating the forces acting on a structure in real time, or when time data is available over such a short duration that FD methods cannot be applied effectively. The new DMISF can be created from a forward system model identified by any standard modal analysis algorithm, so one can leverage expertise with a particular system identification methodology. In contrast, the previously presented ISF was derived directly from experimental data using a prescribed technique. The theory behind the algorithms is presented, after which their performance is demonstrated using laboratory test data. The results of a Monte Carlo simulation are also presented, illustrating the nature of the sensitivity of the methods to errors in the modal parameters of the forward system. The DMISF algorithm is shown to yield a stable inverse system for the structure of interest whereas the traditional ISF is unstable, and hence gives erroneous estimates of the input forces.
Thin film characterization for modeling and optimization of silver-dielectric color filters.
Frey, Laurent; Parrein, Pascale; Virot, Léopold; Pellé, Catherine; Raby, Jacques
2014-03-10
We investigate the most appropriate way to optically characterize the materials and predict the spectral responses of metal-dielectric filters in the visible range. Special attention is given to thin silver layers that have a major impact on the filter's spectral transmittance and reflectance. Two characterization approaches are compared, based either on single layers, or on multilayer stacks, in approaching the filter design. The second approach is preferred, because it gives the best way to predict filter characteristics. Meanwhile, it provides a stack model and dispersion relations that can be used for filter design optimization. PMID:24663425
NASA Astrophysics Data System (ADS)
Zhang, Ning; Yuan, Xiaocong
2010-08-01
The authors report experimental results of optical edge enhancement using a modified filter, i.e. a hybrid raised-cosine spiral phase filter (SPF). This filter is capable of producing optimized optical image processing results. Compared with a conventional SPF, the proposed filter is able to suppress redundant noise for better contrast and resolution of the edge-enhanced image with improved efficiency. The proposed filtering process is demonstrated using off-axis holograms displayed on a spatial light modulator (SLM) and can be readily incorporated into a conventional microscopic system.
Design of waveguide E-plane filters with all-metal inserts by equal ripple optimization
NASA Astrophysics Data System (ADS)
Postoyalko, Vasil; Budimir, D. S.
1994-02-01
An optimization based approach to the design of E-plane filters is described. An optimization procedure based on Cohn's equal ripple optimization is developed. This vector procedure has several advantages over the general purpose optimization routines previously applied to the design of E-plane filters. The problem of local minima does not arise. Optimization is carried out with respect to the Chebyshev (or minimax) criteria. Less frequency sampling and therefore less calculation of the electrical parameters of E-plane discontinuities is required. The design of a symmetrical E-plane filter is considered. Higher order mode interaction between E-plane discontinuities is not included in the design. For the design example considered this is shown not to be significant. A numerically efficient method, requiring only real scalar arithmetic, for calculating the insertion loss of a symmetrical cascade of lossless symmetrical 2-ports is employed. Measurements on a fabricated filter confirm the accuracy of the design procedure.
Step-wise Optimal Cache Replacement for Wireless Data Access In Next Generation Wireless Internet
Hui Chen; Yang Xiao; Xuemin Shen
2006-01-01
Most existing cache replacement policies are access-based replacement policies in which the update process is ignored. However, update information is extremely important. In this paper, we provide a deep analysis of cache access algorithms, and propose a step-wise optimal update-based replacement policy, called the update-based step-wise optimal (USO) scheme, to optimize transmission cost and effective hit ratio at each replacement. Unlike traditional
Optimal step-stress test under progressive type-I censoring
Evans Gouno; Ananda Sen; N. Balakrishnan
2004-01-01
We consider in this work a k-step-stress accelerated test with equal duration steps τ. Censoring is allowed at each change stress point iτ, i = 1, ..., k. The problem of choosing the optimal τ is addressed using variance optimality as well as determinant-optimality criteria. We investigate in detail the case of progressively Type-I right censored data with a single stress variable.
Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu
2015-03-01
Triple stage and single stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention time (EBRT 100-180 s) and liquid recirculation velocity (q 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q. But the highest q at 7.1 m/h drove a large amount of liquid through the media, causing a reduction in bed porosity in S-BTF and in H2S removal. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 g H2S/m3/h. In the subsequent continuous operation test, it was found that T-BTF could maintain higher H2S elimination capacity and removal efficiency at 175.6±41.6 g H2S/m3/h and 89.0±6.8% versus S-BTF at 159.9±42.8 g H2S/m3/h and 80.1±10.2%, respectively. Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031
Optimal stability for trapezoidal-backward difference split-steps
Dharmaraja, Sohan
The marginal stability of the trapezoidal method makes it dangerous to use for highly non-linear oscillations. Damping is provided by backward differences. The split-step combination (θΔt trapezoidal, (1 − θ)Δt for BDF2) ...
NASA Astrophysics Data System (ADS)
Shmaliy, Yuriy S.; Ibarra-Manzano, Oscar
2012-12-01
We address p-shift optimal finite impulse response (OFIR) and unbiased (UFIR) algorithms for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a discrete point n over N neighboring points. The algorithms were designed for linear time-invariant state-space signal models with white Gaussian noise. The OFIR filter self-determines the initial mean square state function by solving the discrete algebraic Riccati equation. The UFIR one, represented in both batch and iterative Kalman-like forms, does not require the noise covariances or initial errors. An example of applications is given for smoothing and predictive filtering of a two-state polynomial model. Based upon this example, we show that exact optimality is redundant when N ≫ 1 and that a good suboptimal estimate can still be provided by a UFIR filter at a much lower cost.
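For the two-state polynomial model mentioned, the batch UFIR estimate at the latest sample reduces to an ordinary least-squares straight-line fit over the N-point window, H = (CᵀC)⁻¹Cᵀ, requiring no noise covariances. A sketch under that assumption (closed-form normal equations, estimate referenced to the last sample):

```python
def ufir_linear(y):
    """Batch UFIR estimate of [position, velocity] at the latest sample
    for a two-state polynomial model over a window of N = len(y) points."""
    N = len(y)
    ts = [k - (N - 1) for k in range(N)]   # time relative to last sample
    st = sum(ts)
    stt = sum(t * t for t in ts)
    sy = sum(y)
    sty = sum(t * yk for t, yk in zip(ts, y))
    det = N * stt - st * st
    vel = (N * sty - st * sy) / det
    pos = (stt * sy - st * sty) / det
    return pos, vel

# A noiseless ramp y[k] = 2 + 3k over N = 8 points is recovered exactly.
pos, vel = ufir_linear([2.0 + 3.0 * k for k in range(8)])
assert abs(pos - 23.0) < 1e-9 and abs(vel - 3.0) < 1e-9
```

This is exactly the sense in which the UFIR estimator needs no noise statistics: the gain depends only on the model and window length, and for N ≫ 1 the averaging itself supplies near-optimal noise suppression.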
Backus, Sterling J. (Erie, CO); Kapteyn, Henry C. (Boulder, CO)
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to set the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization
Cho, Sung-Bae
Keywords: fingerprint image generation, evolutionary algorithm, image filters, input pressure. Constructing a fingerprint database is important to evaluate the performance
An Optimal Frequency Domain Filter for Edge Detection in Digital Pictures
K. Sam Shanmugam; Fred M. Dickey; James A. Green
1979-01-01
Edge detection and enhancement are widely used in image processing applications. In this paper we consider the problem of optimizing spatial frequency domain filters for detecting edges in digital pictures. The filter is optimum in that it produces maximum energy within a resolution interval of specified width in the vicinity of the edge. We show that, in the continuous case,
Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared
Cunningham, Brian
An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed
Particle swarm optimization-based approach for optical finite impulse response filter design
Wu, Shin-Tson
A method for the design of an optical finite impulse response (FIR) filter employing particle swarm optimization is presented. Here, we employ the particle swarm optimization (PSO) technique as proposed by Kennedy
Optimal Sharpening of CIC Filters and An Efficient Implementation Through Saramaki-Ritoniemi
Candan, Cagatay
Conventional sharpened cascaded-integrator-comb (CIC) filters use generic sharpening polynomials to improve the frequency response. In contrast to the existing
Optimization of 3D Shape Sharpening Filter Based on Geometric Statistical Values
Tokyo, University of
Masanari Yokomizo. By applying a sharpening filter to the 3D shape data of a plaster statue, highlighted contours comparable ... is to prepare a stone statue that is used as a reference and to sharpen the input data to match the histogram
Statistical Design and Optimization for Adaptive Post-silicon Tuning of MEMS Filters
Li, Xin
Fa Wang, Gokce
... of micro-electro-mechanical systems (MEMS) for RF (radio frequency) applications. In this paper we describe a novel technique of adaptive post-silicon tuning to reliably design MEMS filters that are robust to process
Designing optimal spatial filters for single-trial EEG classification in a movement task
Johannes Müller-gerking; Gert Pfurtscheller; Henrik Flyvbjerg
1998-01-01
We devise spatial filters for multi-channel EEG that lead to signals which discriminate optimally between two conditions. We demonstrate the effectiveness of this method by classifying single-trial EEGs, recorded during preparation for movements of left or right index finger or right foot. Best classification rates for 3 subjects were 94%, 90% and 84%, respectively. The filters are estimated from a
Inertial measurement unit calibration using Full Information Maximum Likelihood Optimal Filtering
Thompson, Gordon A. (Gordon Alexander)
2005-01-01
The robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF) for inertial measurement unit (IMU) calibration in high-g centrifuge environments is considered. FIMLOF uses an approximate Newton's Method ...
Optimization of atomic Faraday filters in the presence of homogeneous line broadening
NASA Astrophysics Data System (ADS)
Zentile, Mark A.; Keaveney, James; Mathew, Renju S.; Whiting, Daniel J.; Adams, Charles S.; Hughes, Ifan G.
2015-09-01
We show that homogeneous line broadening drastically affects the performance of atomic Faraday filters. We study the effects of cell length and find that the behaviour of ‘line-centre’ filters is quite different from that of ‘wing-type’ filters, where the effect of self-broadening is found to be particularly important. We use a computer optimization algorithm to find the best magnetic field and temperature for Faraday filters with a range of cell lengths, and experimentally realize one particular example using a micro-fabricated 87Rb vapour cell. We find excellent agreement between our theoretical model and experimental data.
Ioan Tabus; Doina Petrescu; Moncef Gabbouj
1996-01-01
A training framework is developed in this paper to design optimal nonlinear filters for various signal and image processing tasks. The targeted families of nonlinear filters are the Boolean filters and stack filters. The main merit of this framework at the implementation level is perhaps the absence of constraining models, making it nearly universal in terms of application areas. We
The optimal design of photonic crystal optical devices with step-wise linear refractive index
NASA Astrophysics Data System (ADS)
Ma, Ji; Wu, Xiang-Yao; Li, Hai-Bo; Li, Hong; Liu, Xiao-Jing; Zhang, Si-Qi; Chen, Wan-Jin; Wu, Yi-Heng
2015-10-01
In the paper, we have studied a one-dimensional step-wise linear photonic crystal with and without a defect layer, and analyzed the effect of the defect layer's position, thickness, and refractive index real and imaginary parts on the transmissivity, electric field distribution and output electric field intensity. By calculation, we have obtained a set of optimal parameters with which optical devices, such as an optical amplifier, an attenuator or an optical diode, can be optimally designed from the step-wise linear photonic crystal.
Optimization of multiplexed holographic gratings in PQ-PMMA for spectral-spatial imaging filters.
Luo, Yuan; Gelsinger, Paul J; Barton, Jennifer K; Barbastathis, George; Kostuk, Raymond K
2008-03-15
Holographic gratings formed in thick phenanthrenequinone- (PQ-) doped poly(methyl methacrylate) (PMMA) can be made to have narrowband spectral and spatial transmittance filtering properties. We present the design and performance of angle-multiplexed holographic filters formed in PQ-PMMA at 488 nm and reconstructed with an LED operated at approximately 630 nm. The dark delay time between exposures and the preillumination of the polymer prior to exposure of the holographic area are varied to optimize the diffraction efficiency of the multiplexed holographic filters. The resultant holographic filters can enhance the performance of four-dimensional spatial-spectral imaging systems. The optimized filters are used to simultaneously sample spatial and spectral information at five different depths separated by 50 µm within biological tissue samples. PMID:18347711
The design of an optimal filter for monthly GRACE gravity models
NASA Astrophysics Data System (ADS)
Klees, R.; Revtova, E. A.; Gunter, B. C.; Ditmar, P.; Oudman, E.; Winsemius, H. C.; Savenije, H. H. G.
2008-11-01
Most applications of the publicly released Gravity Recovery and Climate Experiment monthly gravity field models require the application of a spatial filter to help suppress noise and other systematic errors present in the data. The most common approach makes use of a simple Gaussian averaging process, which is often combined with a `destriping' technique in which coefficient correlations within a given degree are removed. As brute-force methods, neither of these techniques takes into consideration the statistical information from the gravity solution itself and, while they perform well overall, they can often end up removing more signal than necessary. Other optimal filters have been proposed in the literature; however, none have attempted to make full use of all information available from the monthly solutions. By examining the underlying principles of filter design, a filter has been developed that incorporates the noise and full signal variance-covariance matrix to tailor the filter to the error characteristics of a particular monthly solution. The filter is both anisotropic and non-symmetric, meaning it can accommodate noise of an arbitrary shape, such as the characteristic stripes. The filter minimizes the mean-square error and, in this sense, can be considered as the most optimal filter possible. Through both simulated and real data scenarios, this improved filter will be shown to preserve the highest amount of gravity signal when compared to other standard techniques, while simultaneously minimizing leakage effects and producing smooth solutions in areas of low signal.
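The Gaussian averaging baseline the authors compare against is commonly computed with Jekeli's recursion for the per-degree smoothing weights W_l. A sketch (the 500 km radius and Earth radius are illustrative inputs; note the recursion is known to become numerically unstable at high degree, so a practical code truncates it):

```python
import math

def gaussian_weights(radius_km, lmax, R_km=6371.0):
    """Per-degree Gaussian averaging weights W_l via Jekeli's recursion."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / R_km))
    W = [1.0]
    W.append((1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b)
    for l in range(1, lmax):
        # W_{l+1} = -(2l+1)/b * W_l + W_{l-1}
        W.append(-(2 * l + 1) / b * W[l] + W[l - 1])
    return W

W = gaussian_weights(500.0, 10)
# Weights start at 1 and decay monotonically at low degree:
assert W[0] == 1.0 and 0.0 < W[2] < W[1] < 1.0
```

Each spherical harmonic coefficient is simply scaled by W_l; the paper's point is that this isotropic, solution-independent damping ignores the error covariance, whereas their filter shapes the attenuation to each month's actual noise structure.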
NASA Astrophysics Data System (ADS)
Zhou, Di; Zhang, Yong-An; Duan, Guang-Ren
The two-step filter has been combined with a modified Sage-Husa time-varying measurement noise statistical estimator, which is able to estimate the covariance of measurement noise on line, to generate an adaptive two-step filter. In many practical applications, such as bearings-only guidance, some model parameters and the process noise covariance are also unknown a priori. Based on the adaptive two-step filter, we utilize multiple models in the first-step filtering as well as in the time update of the second-step filtering to handle the uncertainties of model parameters and process noise covariance. In each time step of the multiple model filtering, probabilistic weights for the first-step state estimates from the different models, and their associated covariance matrices, are acquired according to Bayes' rule. The weighted sum of the estimates of the first-step state and that of the associated covariance matrices are extracted as the ultimate estimate and covariance of the first-step state, and are used as measurement information for the measurement update of the second-step state. Thus there is still only one iteration process and no apparent increase in computational burden. A motion tracking sliding-mode guidance law is presented for missiles with non-negligible delays in actual acceleration. This guidance law guarantees guidance accuracy and is able to enhance observability in bearings-only tracking. In bearings-only cases, the multiple model adaptive two-step filter is applied to the motion tracking sliding-mode guidance law, supplying relative range, relative velocity, and target acceleration information. In simulation experiments satisfactory filtering and guidance results are obtained, even when the filter encounters unknown target maneuvers and unknown time-varying measurement noise covariance, and the guidance law has to deal with a large time lag in acceleration.
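The Bayes-rule weight update for the model bank can be sketched for scalar Gaussian innovation likelihoods (the innovation and variance values below are hypothetical; a real filter would take them from each model's measurement update):

```python
import math

def update_model_weights(priors, innovations, variances):
    """Posterior model probabilities from scalar innovations nu_i with
    innovation variances S_i:  w_i  proportional to  prior_i * N(nu_i; 0, S_i)."""
    likes = [math.exp(-0.5 * nu * nu / S) / math.sqrt(2.0 * math.pi * S)
             for nu, S in zip(innovations, variances)]
    post = [p * l for p, l in zip(priors, likes)]
    z = sum(post)
    return [w / z for w in post]

w = update_model_weights([0.5, 0.5], [0.1, 2.0], [1.0, 1.0])
assert w[0] > w[1]                  # the model with the smaller innovation wins
assert abs(sum(w) - 1.0) < 1e-12
```

The weighted sum of the per-model first-step estimates, using these probabilities, then serves as the single pseudo-measurement for the second-step update, which is why the scheme adds essentially no extra iteration cost.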
Discounting in economic evaluations: stepping forward towards optimal decision rules.
Gravelle, Hugh; Brouwer, Werner; Niessen, Louis; Postma, Maarten; Rutten, Frans
2007-03-01
The National Institute for Clinical Excellence has recently changed its guidelines on discounting costs and effects in economic evaluations. In common with most other regulatory bodies it now requires that health effects should be discounted at the same rate as costs. We show that the guideline leads to sub-optimal decisions because it fails to account for the changing value of health. NICE (and other regulatory bodies) should either use differential discounting or stipulate how the changing value of health should otherwise be dealt with. We also show how binding health service budget constraints should be incorporated in evaluations. PMID:17006970
Bayesian-Optimal Image Reconstruction for Translational-Symmetric Filters
NASA Astrophysics Data System (ADS)
Tajima, Satohiro; Inoue, Masato; Okada, Masato
2008-05-01
Translational-symmetric filters provide a foundation for various kinds of image processing. When a filtered image containing noise is observed, the original one can be reconstructed by Bayesian inference. Furthermore, hyperparameters such as the smoothness of the image and the noise level in the communication channel through which the image is observed can be estimated from the observed image by maximizing the marginalized likelihood. In this article we apply a diagonalization technique based on the Fourier transform to this image reconstruction problem. This diagonalization not only reduces computational costs but also facilitates theoretical analyses of the estimation and reconstruction performance. We take as an example the Mexican-hat-shaped neural cell receptive field seen in the early visual systems of animals, and we compare the reconstruction performance obtained under various hyperparameter and filter parameter conditions with each other and with the corresponding performance obtained under no-filter conditions. The results show that using a Mexican-hat filter can reduce the reconstruction error.
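A minimal sketch of the diagonalization idea: for a translation-invariant filter, the 2-D FFT decouples the frequencies, so under a Gaussian prior and white observation noise the Bayes-optimal (MMSE) reconstruction reduces to a per-frequency Wiener gain. The Gaussian/white-noise assumptions and the function name are ours, not the paper's.

```python
import numpy as np

def wiener_reconstruct(observed, filter_kernel, noise_var, signal_var):
    """MMSE reconstruction of an image filtered by a translation-invariant
    kernel and corrupted by white noise, done per frequency after FFT
    diagonalization (assumes flat signal variance for simplicity)."""
    H = np.fft.fft2(filter_kernel, s=observed.shape)   # diagonalized filter
    Y = np.fft.fft2(observed)
    gain = np.conj(H) * signal_var / (np.abs(H) ** 2 * signal_var + noise_var)
    return np.real(np.fft.ifft2(gain * Y))
```

With a delta kernel and zero noise the gain is unity and the observation is returned unchanged, which is a useful sanity check.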
Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.
2011-01-01
An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but the filters designed and fabricated to date have not attained the spectral selectivity (∼32 cm⁻¹) commonly employed for measurements of condensed matter using FT-IR spectroscopy. Incorporating the dispersion and optical absorption of the materials, we present here the optimal design of double-layer surface-relief silicon-nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm⁻¹. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of the materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have a bandwidth narrower than that of a filter designed by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable the use of GMR-filter-based instrumentation for common measurements of condensed matter, including tissue and polymer samples. PMID:22109445
NASA Astrophysics Data System (ADS)
Cuevas, Andres; Diaz-Ramirez, Victor H.; Kober, Vitaly; Trujillo, Leonardo
2014-09-01
Facial recognition is a difficult task due to variations in pose and facial expression, as well as the presence of noise and clutter in captured face images. In this work, we address facial recognition by means of composite correlation filters designed with multi-objective combinatorial optimization. Given a large set of available face images with variations in pose, gesticulation, and global illumination, the proposed algorithm synthesizes composite correlation filters by optimizing several performance criteria. The resultant filters are able to reliably detect and correctly classify face images of different subjects even when they are corrupted with additive noise and nonhomogeneous illumination. Computer simulation results obtained with the proposed approach are presented and discussed in terms of efficiency in face detection and reliability of facial classification. These results are also compared with those obtained with existing composite filters.
On the application of optimal wavelet filter banks for ECG signal classification
NASA Astrophysics Data System (ADS)
Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.
2014-03-01
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect-reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
Optimal morphological hit-or-miss filtering of gray-level images
NASA Astrophysics Data System (ADS)
Dougherty, Edward R.
1993-05-01
The binary hit-or-miss transform is applied to filter digital gray-scale signals. This is accomplished by applying a union of hit-or-miss transforms to an observed signal's umbra and then taking the surface of the filtered umbra as the estimate of the ideal signal. The hit-or-miss union is constructed to provide the optimal mean-absolute-error filter for both the ideal signal and its umbra. The method is developed in detail for thinning hit-or-miss filters and applies at once to the dual thickening filters. It requires the output of the umbra filter to be an umbra, which in general is not true. A key aspect of the paper is the complete characterization of umbra-preserving union-of-hit-or-miss thinning and thickening filters. Taken together, the mean-absolute-error theory and the umbra-preservation characterization provide a full characterization of binary hit-or-miss filtering as applied to digital gray-scale signals. The theory is at once applicable to hit-or-miss filtering of digital gray-scale signals via the three- dimensional binary hit-or-miss transform.
Optimizing the Choice of Filter Sets for Space Based Imaging Instruments
NASA Astrophysics Data System (ADS)
Elliott, Rachel E.; Farrah, Duncan; Petty, Sara M.; Harris, Kathryn Amy
2015-01-01
We investigate the challenge of selecting a limited number of filters for space-based imaging instruments such that they are able to address multiple heterogeneous science goals. The number of available filter slots for a mission is bounded by factors such as instrument size and cost. We explore methods for extracting the optimal group of filters such that they complement each other most effectively. We focus on three approaches: maximizing the separation of objects in two-dimensional color planes; SED fitting to select those filter sets that give the finest resolution in fitted physical parameters; and maximizing the orthogonality of physical parameter vectors in N-dimensional color-color space. These techniques are applied to a test case, a UV/optical imager with space for five filters, with the goal of measuring the properties of local stars through to distant galaxies.
Designing Linear Phase FIR Filters with Particle Swarm Optimization and Harmony Search
NASA Astrophysics Data System (ADS)
Shirvani, Abdolreza; Khezri, Kaveh; Razzazi, Farbod; Lucas, Caro
In recent years, evolutionary methods have shown great success in solving many combinatorial optimization problems, such as FIR (Finite Impulse Response) filter design. A standard method for the FIR filter design problem is the Parks-McClellan algorithm, which is both difficult to implement and computationally expensive. The goal of this paper is to design a near-optimal linear phase FIR filter using two recent evolutionary approaches: Particle Swarm Optimization (PSO) and Harmony Search (HS). These methods are robust and easy to implement, and their stochastic behavior helps them avoid becoming trapped in local optima. In addition, they have distinguishing features such as lower variance error and smaller overshoots in both the stop and pass bands. To demonstrate these benefits, two case studies are presented and the obtained results are compared with previous implementations. In both cases, better and more reliable results are achieved.
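A bare-bones PSO for fitting an FIR magnitude response makes the approach concrete. The swarm parameters (inertia 0.7, acceleration coefficients 1.5), the cost function, and the frequency grid below are illustrative choices of ours, not the paper's configuration.

```python
import random, cmath

def freq_response(h, w):
    """Frequency response of an FIR filter h at angular frequency w."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def fir_cost(h, grid, ideal):
    """Squared deviation of the magnitude response from an ideal template."""
    return sum(abs(abs(freq_response(h, w)) - d) ** 2
               for w, d in zip(grid, ideal))

def pso_fir(n_taps, grid, ideal, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimization of FIR tap weights."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-0.5, 0.5) for _ in range(n_taps)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_taps for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [fir_cost(p, grid, ideal) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(n_taps):
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (pbest[i][k] - pos[i][k])
                             + 1.5 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            c = fir_cost(pos[i], grid, ideal)
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

A real design would use a dense frequency grid with weighted pass/stop bands; the sparse four-point template here only shows the mechanics.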
Design and optimization of high reflectance graded index optical filter with quintic apodization
NASA Astrophysics Data System (ADS)
Praveen Kumar, Vemuri S. R. S.; Sunita, Parinam; Kumar, Mukesh; Rao, Parinam Krishna; Kumari, Neelam; Karar, Vinod; Sharma, Amit L.
2015-06-01
Rugate filters are a special kind of graded-index film that may provide advantages in both the optical performance and the mechanical properties of optical coatings. In this work, the design and optimization of a high-reflection rugate filter having a reflection peak at 540 nm is presented; the design was further optimized for side-lobe suppression. A suitable number of apodization and matching layers, generated through a quintic function, were added to the basic sinusoidal refractive index profile to achieve a high reflectance of around 80% in the rejection window for normal incidence. The smaller index contrast between successive layers in the present design leads to less residual stress in the thin-film stack, which enhances the adhesion and mechanical strength of the filter. The optimized results show excellent side-lobe suppression around the stopband.
Two-stage hybrid optimization of fiber Bragg gratings for design of linear phase filters
NASA Astrophysics Data System (ADS)
Zheng, Rui Tao; Ngo, Nam Quoc; Binh, Le Nguyen; Tjin, Swee Chuan
2004-12-01
We present a new hybrid optimization method for the synthesis of fiber Bragg gratings (FBGs) with complex characteristics. The hybrid optimization method is a two-tier search that employs a global optimization algorithm [i.e., the tabu search (TS) algorithm] and a local optimization method (i.e., the quasi-Newton method). First the TS global optimization algorithm is used to find a "promising" FBG structure that has a spectral response as close as possible to the targeted spectral response. Then the quasi-Newton local optimization method is applied to further optimize the FBG structure obtained from the TS algorithm to arrive at the targeted spectral response. A dynamic mechanism for weighting the different requirements of the spectral response is employed to enhance the optimization efficiency. To demonstrate the effectiveness of the method, the synthesis of three linear-phase optical filters based on FBGs with different grating lengths is described.
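The two-tier structure can be sketched generically: a crude global sampling stage (standing in for tabu search) followed by a simple coordinate-descent polish (standing in for the quasi-Newton step). Everything below is an illustrative skeleton of the global-then-local idea, not the authors' algorithm.

```python
import random

def hybrid_minimize(cost, dim, bounds, rng=None, n_global=200, n_local=100):
    """Two-tier minimization: random global sampling, then a local
    coordinate-descent polish with a shrinking step size."""
    rng = rng or random.Random(0)
    lo, hi = bounds
    # Tier 1: crude global exploration (tabu search stand-in).
    x = min(([rng.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_global)), key=cost)
    fx = cost(x)
    # Tier 2: local refinement (quasi-Newton stand-in).
    step = (hi - lo) / 10.0
    for _ in range(n_local):
        improved = False
        for k in range(dim):
            for d in (step, -step):
                y = x[:]
                y[k] += d
                fy = cost(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2.0          # shrink when a full sweep stalls
    return x, fx
```

In the FBG setting, `cost` would compare the grating's computed spectral response against the target, with the dynamic weighting the abstract mentions.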
Lyngsie, G; Borggaard, O K; Hansen, H C B
2014-03-15
Phosphorus (P) eutrophication of lakes and streams, fed by drainage from farmland, is a serious problem in areas with intensive agriculture. Installation of P-sorbing filters at drain outlets may be a solution. Efficient sorbents for such filters must possess high P bonding affinity to retain ortho-phosphate (Pi) at low concentrations; high P sorption capacity, fast bonding, and low desorption are also necessary. In this study five potential filter materials (Filtralite-P®, limestone, calcinated diatomaceous earth, shell-sand, and iron-oxide-based CFH) in four particle size intervals were investigated under field-relevant P concentrations (0–161 µM) and retention times of 0–24 min. Of the five materials examined, the results from the P sorption and desorption studies clearly demonstrate that the iron-based CFH is superior as a filter material to the calcium-based materials when tested against criteria for sorption affinity, capacity, and stability. The finest CFH and Filtralite-P® fractions (0.05–0.5 mm) performed best, retaining ≥90% of Pi from an initial concentration of 161 µM (corresponding to 14.5 mmol/kg sorbed) within 24 min. They were further capable of retaining ≥90% of Pi from an initially 16 µM solution within 1½ min. However, only the finest CFH fraction was also able to retain ≥90% of the Pi sorbed from the 16 µM solution through four desorption sequences with 6 mM KNO3. Among the materials investigated, the finest CFH fraction is therefore the only suitable filter material when very fast and strong bonding of high Pi concentrations is needed, e.g. in drains under P-rich soils during extreme weather conditions. PMID:24275107
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
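As a toy version of such spectral modelling, a pure AR(p) model can be fitted to a record by least squares; full ARMAX fitting adds exogenous and moving-average terms, which this sketch omits for brevity.

```python
import numpy as np

def fit_ar(x, order=2):
    """Least-squares AR(p) fit: x[t] ~ sum_k a_k * x[t-k].
    A pared-down stand-in for the ARMAX models of the paper
    (no exogenous input, no moving-average part)."""
    x = np.asarray(x, dtype=float)
    # Column k holds x[t-1-k], aligned with the target x[t].
    X = np.column_stack([x[order - 1 - k : len(x) - 1 - k]
                         for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

The fitted coefficients define a rational spectral shape for the record; the same machinery, applied between pairs of records, underlies the site-amplification and source-scaling applications.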
Georg Jäger; Ulrich Hohenester
2013-09-07
We theoretically investigate protocols based on optimal control theory (OCT) for manipulating Bose-Einstein condensates in magnetic microtraps, using the framework of the Gross-Pitaevskii equation. In our approach we explicitly account for filter functions that distort the computed optimal control, a situation inherent to many experimental OCT implementations. We apply our scheme to the shakeup process of a condensate from the ground to the first excited state, following a recent experimental and theoretical study, and demonstrate that the fidelity of OCT protocols is not significantly deteriorated by typical filters.
Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings
Toroker, Zeev; Horowitz, Moshe
2008-03-15
We present an optimized split-step method for solving the nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time without significantly affecting the accuracy of the results. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
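A minimal symmetric split-step Fourier solver illustrates the core idea of alternating linear (frequency-domain) and nonlinear (time-domain) half-steps. We use the scalar nonlinear Schrödinger equation as a stand-in for the coupled-mode grating equations of the paper; the parameters and normalization are illustrative.

```python
import numpy as np

def split_step_nlse(u0, dz, n_steps, beta2=-1.0, gamma=1.0, dt=0.1):
    """Symmetric split-step Fourier propagation of the scalar NLS
    equation: half a linear (dispersive) step in the Fourier domain,
    a full nonlinear phase rotation, then another linear half-step."""
    n = len(u0)
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)            # angular frequencies
    half_linear = np.exp(0.5j * (beta2 / 2) * w ** 2 * dz)
    u = u0.astype(complex)
    for _ in range(n_steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # linear half-step
        u = u * np.exp(1j * gamma * np.abs(u) ** 2 * dz)  # nonlinear step
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # linear half-step
    return u
```

Both sub-steps are pure phase rotations, so the L2 norm (pulse energy) is conserved to machine precision, which is a standard correctness check for split-step solvers.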
Improved design and optimization of subsurface flow constructed wetlands and sand filters
NASA Astrophysics Data System (ADS)
Brovelli, A.; Carranza-Díaz, O.; Rossi, L.; Barry, D. A.
2010-05-01
Subsurface flow constructed wetlands and sand filters are engineered systems capable of eliminating a wide range of pollutants from wastewater. These devices are easy to operate, flexible, and have low maintenance costs. For these reasons, they are particularly suitable for small settlements and isolated farms, and their use has increased substantially in the last 15 years. They are also increasingly used as a tertiary (polishing) step in traditional treatment plants. Research is, however, still necessary to better understand the biogeochemical processes occurring in the porous substrate and their mutual interactions and feedbacks, and ultimately to identify the optimal conditions for degrading or removing from the wastewater both traditional and anthropogenic recalcitrant pollutants, such as hydrocarbons, pharmaceuticals, and personal care products. Optimal pollutant elimination is achieved if the contact time between microbial biomass and the contaminated water is sufficiently long. The contact time depends on the hydraulic residence time distribution (HRTD) and is controlled by the hydrodynamic properties of the system. Previous reports noted that poor hydrodynamic behaviour is frequent, with water flowing mainly through preferential paths, resulting in a broad HRTD. In such systems the flow rate must be decreased to allow a sufficient proportion of the wastewater to experience the minimum residence time. The pollutant removal efficiency can therefore be significantly reduced, potentially leading to the failure of the system. The aim of this work was to analyse the effect of the heterogeneous distribution of the hydraulic properties of the porous substrate on the HRTD and treatment efficiency, and to develop an improved design methodology to reduce the risk of system failure and to optimize existing systems showing poor hydrodynamics.
Numerical modelling was used to evaluate the effect of substrate heterogeneity on the breakthrough curves of both a conservative tracer and a reactive organic compound. Random, spatially correlated hydraulic conductivity fields following a log-normal distribution were generated to represent the heterogeneous distribution of the hydraulic properties. The effect of the variance of the hydraulic conductivity distribution, as well as the aspect ratio of the correlation lengths were analyzed and compared to experimental findings. The proposed design methodology is based on the target hydraulic residence time, that is, the residence time required to achieve the degradation of the contaminants. The effect of the heterogeneity is accounted for using a Monte Carlo approach. From the analysis of the simulation results the probability of failure of the system can be estimated and used to design a new system or optimize existing systems. The methodology was illustrated using a realistic test case with water contaminated with benzene.
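The Monte Carlo element of such a methodology can be sketched as follows. Each realization draws a log-normally distributed conductivity field; here it is caricatured as a bundle of parallel flow paths, and the 50%-fast-flow failure criterion is a simplification of ours, not the authors' model.

```python
import numpy as np

def failure_probability(t_target, n_realizations=500, n_cells=50,
                        sigma_lnk=1.0, mean_t=10.0, seed=0):
    """Monte Carlo sketch of heterogeneity-induced failure risk.

    Each realization is a bundle of parallel flow paths whose
    conductivities K are log-normally distributed.  Flow fraction is
    proportional to K, residence time to 1/K.  The system 'fails'
    when more than half the flow is faster than the target residence
    time (hypothetical criterion for illustration)."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_realizations):
        k = rng.lognormal(mean=0.0, sigma=sigma_lnk, size=n_cells)
        q = k / k.sum()                  # flow fraction per path
        t = mean_t * k.mean() / k        # residence time per path
        if q[t < t_target].sum() > 0.5:  # too much preferential flow
            failures += 1
    return failures / n_realizations
```

Increasing `sigma_lnk` broadens the simulated HRTD, which is the mechanism by which substrate heterogeneity raises the estimated failure probability.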
T. Fortmann; B. Anderson
1973-01-01
The Karhunen-Loève expansion of a random process is used to derive the impulse response of the optimal realizable linear estimator for the process. The expansion is truncated to yield an approximate state-variable model of the process in terms of the first N eigenvalues and eigenfunctions. The Kalman-Bucy filter for this model provides an approximate realizable linear estimator which approaches the optimal one…
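A sketch of Karhunen-Loève truncation on a discretized covariance: keep the first N eigenpairs and measure the variance they capture. The exponential covariance used in the example is an illustrative choice, not from the paper.

```python
import numpy as np

def kl_truncate(cov, n_modes):
    """Karhunen-Loève truncation of a process covariance matrix:
    keep the n_modes largest eigenpairs and report the fraction of
    total variance (trace) they capture."""
    vals, vecs = np.linalg.eigh(cov)
    idx = np.argsort(vals)[::-1][:n_modes]   # largest eigenvalues first
    lam, phi = vals[idx], vecs[:, idx]
    cov_n = (phi * lam) @ phi.T              # rank-n_modes approximation
    captured = lam.sum() / vals.sum()
    return cov_n, captured
```

For strongly correlated processes a handful of modes dominates, which is why the truncated state-variable model (and hence the reduced Kalman-Bucy filter) can approach the optimal estimator with small N.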
Bondugula, Srikant
2010-07-14
Optimal Control of Projects Based on Kalman Filter Approach for Tracking & Forecasting the Project… (Major Subject: Civil Engineering, May 2009) …of complex construction process for yielding the optimal control policies.
Stratified Filtered Sampling in Stochastic Optimization John M. Mulvey
Mitchell, John E.
We develop a methodology for evaluating a decision strategy generated by a stochastic optimization model. Significant problems dictate the development of strategies for handling sequential decision-making under uncertainty … the quality of a decision strategy returned by the MSO process is crucial to increasing the technology's effectiveness.
University of California at Santa Barbara
Digital Filter Stepsize Control of DASPK and its Effect on Control Optimization Performance
Experimental study on optimization of the working conditions of excited state Faraday filter
Liang Zhang; Junxiong Tang
1998-01-01
In this paper the existence of an optimal frequency detuning in the pumping process of the excited-state Faraday anomalous dispersion optical filter (ESFADOF, also referred to as an active FADOF) is reported. We measured this detuning and its variation versus cell temperature. Moreover, the dependence of the ESFADOF transmission on the cell temperature and pumping power was also studied experimentally. On the…
Junghyun Kwon; Kyoung Mu Lee; Frank Chongwoo Park
2009-01-01
We propose a geometric method for visual tracking, in which the 2-D affine motion of a given object template is estimated in a video sequence by means of coordinate-invariant particle filtering on the 2-D affine group Aff(2). Tracking performance is further enhanced through a geometrically defined optimal importance function, obtained explicitly via Taylor expansion of a principal component…
Optimal filters for detecting cosmic bubble collisions
McEwen, Jason
One such example is the signature of cosmic bubble collisions, which arise in models of eternal inflation. The most … of the global parameters defining the theory; however, a direct evaluation is computationally impractical.
DATA ANALYSIS FOR THE RESONANT GRAVITATIONAL WAVE DETECTOR AURIGA: OPTIMAL FILTERING
…the possibilities in signal processing opened by the new fully numerical data analysis system developed … reprocessing. Then we discuss some relevant points of the AURIGA data analysis system, such as the data…
DMT Bit Rate Maximization With Optimal Time Domain Equalizer Filter Bank Architecture
Evans, Brian L.
Discrete multitone (DMT) is a multicarrier modulation method in which the available bandwidth of a communication channel is divided to create nearly orthogonal subchannels. DMT has been standardized in [1, 2, 3, 4]. A similar multicarrier…
ANALYTICAL CALCULATION OF GRADIENTS FOR THE OPTIMIZATION OF H-PLANE FILTERS WITH THE FEM
Bornemann, Jens
This paper introduces a method for the analytical calculation of gradients of a cost function … circumstances, the gradient of a cost function can be calculated analytically without using finite differences.
Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin
Claridge, Ela
…minimize the error associated with histological parameters characterizing normal skin tissue. These parameters can be recovered from digital images of the skin using a physics-based model of skin coloration. The relationship…
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm that allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification is that the location of the pilot's vestibular system must be taken into account, not just the offset of the centroid of the cockpit relative to the center of rotation. The results provided in this report suggest improved performance of the motion cueing algorithm.
A T-Step Ahead Constrained Optimal Target Detection Algorithm for a Multi Sensor Surveillance System
Krishna, K. Madhava
We present a T-step ahead constrained optimal target detection algorithm for a multi-sensor surveillance system. The system consists of mobile sensors that guard a rectangular surveillance zone crisscrossed by moving targets. Targets penetrate the surveillance zone with Poisson rates at uniform…
A Comparison of Staggered-Grid, Conventional One-step, and Optimally Accurate Finite-Difference
Geller, Robert
…quantified by the CPU time required to attain a given level of accuracy) of various finite-difference (FD) schemes … order in space, O(2,4), is equivalent to a non-optimally accurate one-step FD scheme with a seven-point … than to compute synthetics that will be compared quantitatively to observed data to invert for Earth…
Image quality and dose optimization using novel x-ray source filters tailored to patient size
NASA Astrophysics Data System (ADS)
Toth, Thomas L.; Cesmeli, Erdogan; Ikhlef, Aziz; Horiuchi, Tetsuya
2005-04-01
The expanding set of CT clinical applications demands increased attention to obtaining the maximum image quality at the lowest possible dose. Pre-patient beam-shaping filters provide an effective means to improve dose utilization. In this paper we develop and apply characterization methods that lead to a set of filters appropriately matched to the patient. We developed computer models to estimate image noise and a patient-size-adjusted CTDI dose. The noise model is based on polychromatic X-ray calculations. The dose model is empirically derived by fitting CTDI-style dose measurements for a demographically representative set of phantom sizes and shapes with various beam-shaping filters. The models were validated and used to determine the optimum image quality vs. dose trade-off for a range of patient sizes. The models clearly show that an optimum beam-shaping filter exists as a function of object diameter. Based on noise and dose alone, overall dose-efficiency advantages of 50% were obtained by matching the filter shape to the size of the object. A set of patient-matching filters is used in the GE LightSpeed VCT and Pro32 to provide a practical solution for optimum image quality at the lowest possible dose over the range of patient sizes and clinical applications. Moreover, these filters mark the beginning of personalized medicine, in which CT scanner image quality and radiation dose utilization are truly individualized and optimized for the patient being scanned.
Mirzaalian, Hengameh; Lee, Tim K; Hamarneh, Ghassan
2014-12-01
Hair occlusion is one of the main challenges facing automatic lesion segmentation and feature extraction for skin cancer applications. We propose a novel method for simultaneously enhancing both light and dark hairs with variable widths, from dermoscopic images, without the prior knowledge of the hair color. We measure hair tubularness using a quaternion color curvature filter. We extract optimal hair features (tubularness, scale, and orientation) using Markov random field theory and multilabel optimization. We also develop a novel dual-channel matched filter to enhance hair pixels in the dermoscopic images while suppressing irrelevant skin pixels. We evaluate the hair enhancement capabilities of our method on hair-occluded images generated via our new hair simulation algorithm. Since hair enhancement is an intermediate step in a computer-aided diagnosis system for analyzing dermoscopic images, we validate our method and compare it to other methods by studying its effect on: 1) hair segmentation accuracy; 2) image inpainting quality; and 3) image classification accuracy. The validation results on 40 real clinical dermoscopic images and 94 synthetic data demonstrate that our approach outperforms competing hair enhancement methods. PMID:25312927
Constraining clumpy dusty torus models using optimized filter sets
Asensio Ramos, A.; Ramos Almeida, C.
2012-01-01
Recent success in explaining several properties of the dusty torus around the central engine of active galactic nuclei has been achieved with the assumption of clumpiness. The properties of such clumpy dusty tori can be inferred by analyzing spectral energy distributions (SEDs), sometimes with scarce sampling, given that large-aperture telescopes and long integration times are needed to obtain good spatial resolution and signal. We aim to use the information already present in the data and the assumption of a clumpy dusty torus, in particular the CLUMPY models of Nenkova et al., to evaluate the optimum next observation such that we maximize the constraining power of the new observed photometric point. To this end, we use the existing but seldom applied idea of Bayesian adaptive exploration, a mixture of Bayesian inference, prediction, and decision theories. The result is that the new photometric filter to use is the one that maximizes the expected utility, which we approximate with the entropy of the predictive distribution.
Two-step optimization of pressure and recovery of reverse osmosis desalination process.
Liang, Shuang; Liu, Cui; Song, Lianfa
2009-05-01
Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed. PMID:19534146
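The two-variable optimization can be illustrated with a toy specific-cost model. Every coefficient below is made up for illustration, and the functional forms only loosely mirror the trade-offs the abstract describes (pump energy vs. membrane area vs. pretreatment and retentate handling); they are not the paper's equations.

```python
def ro_specific_cost(ndp, recovery, pi0=2.8, elec=0.08, m_index=0.5,
                     pre=0.05, ret=0.02):
    """Toy specific cost of permeate (arbitrary units).

    ndp      -- net driving pressure (illustrative units)
    recovery -- permeate recovery fraction, 0 < recovery < 1
    pi0      -- feed osmotic pressure; rises with recovery
    All coefficients are hypothetical."""
    if recovery <= 0.0 or recovery >= 1.0:
        return float("inf")
    pressure = ndp + pi0 / (1.0 - recovery)   # applied pressure
    energy = elec * pressure / recovery       # pump energy per unit permeate
    membrane = m_index / ndp                  # more pressure -> less area
    return (energy + membrane
            + pre / recovery                  # pretreatment per unit permeate
            + ret * (1.0 - recovery) / recovery)  # retentate handling

def optimize_ro(ndp_grid, rec_grid):
    """Brute-force search over the two design variables."""
    best = min(((n, r) for n in ndp_grid for r in rec_grid),
               key=lambda p: ro_specific_cost(*p))
    return best, ro_specific_cost(*best)
```

Even this caricature reproduces the paper's qualitative finding: the optimal driving pressure is set by the energy-vs-membrane trade-off, while the optimal recovery balances osmotic back-pressure against pretreatment and retentate costs.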
Optimized one-step preparation of a bioactive natural product, guaiazulene-2,9-dione
NASA Astrophysics Data System (ADS)
Cheng, Canling; Li, Pinglin; Wang, Wei; Shi, Xuefeng; Zhang, Gang; Zhu, Hongyan; Wu, Rongcui; Tang, Xuli; Li, Guoqiang
2014-12-01
We previously isolated a natural product, guaiazulene-2,9-dione, showing strong antibacterial activity against Vibrio anguillarum, from the gorgonian Muriceides collaris collected in the South China Sea. In this work, guaiazulene-2,9-dione was quantitatively synthesized with an optimized one-step bromine oxidation method using guaiazulene as the raw material. The key reaction conditions, including reaction time and temperature, drop rate of bromine, concentration of the aqueous THF solution, the respective molar ratios of guaiazulene to bromine and to acetic acid, and the concentration of guaiazulene in the aqueous THF solution, were investigated individually at five levels each for optimization. Combined with verification tests to show the absolute yield of each optimization step, the final optimal conditions were determined as follows: when a solution of 0.025 mmol mL⁻¹ guaiazulene in 80% aqueous THF was treated with four volumes of bromine at a drop rate of 0.1 mL min⁻¹ and four volumes of acetic acid at −5°C for three hours, the yield of guaiazulene-2,9-dione was 23.72%. This is the first report of an optimized one-step synthesis providing a convenient method for the large-scale preparation of guaiazulene-2,9-dione.
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
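A scalar random-walk example makes the filter-vs-smoother comparison concrete: the Rauch-Tung-Striebel (RTS) smoother reuses the stored forward Kalman pass in a backward sweep, and for this model roughly halves the steady-state error variance, matching the abstract's observation. The scalar model is our simplification of the attitude-estimation setting.

```python
def kalman_rts(zs, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter followed by an RTS smoother.

    zs -- measurements; q, r -- process and measurement noise variances.
    Returns filtered states/variances and smoothed states/variances."""
    xs, ps, xps, pps = [], [], [], []
    x, p = x0, p0
    for z in zs:                      # forward Kalman pass
        xp, pp = x, p + q             # time update (random walk)
        k = pp / (pp + r)             # Kalman gain
        x, p = xp + k * (z - xp), (1.0 - k) * pp
        xps.append(xp); pps.append(pp); xs.append(x); ps.append(p)
    xs_s, ps_s = xs[:], ps[:]         # backward RTS pass
    for t in range(len(zs) - 2, -1, -1):
        c = ps[t] / pps[t + 1]        # smoother gain
        xs_s[t] = xs[t] + c * (xs_s[t + 1] - xps[t + 1])
        ps_s[t] = ps[t] + c * c * (ps_s[t + 1] - pps[t + 1])
    return xs, ps, xs_s, ps_s
```

The smoothed variance equals the filtered variance at the final step (no future data there) and is strictly smaller everywhere else, illustrating why the second pass pays off for offline attitude determination.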
Design and optimization of stepped austempered ductile iron using characterization techniques
Hernández-Rivera, J.L., E-mail: jose.hernandez@cimav.edu.mx [Centro de Investigación en Materiales Avanzados-Laboratorio Nacional de Nanotecnología, Miguel de Cervantes 120, Z.C. 31109, Chihuahua (Mexico); Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J. [Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, Sierra Leona 550, Lomas 2a. sección, Z.C. 78210, San Luis Potosí (Mexico)
2013-09-15
Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. Austenitization and conventional austempering times were selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and the high-carbon stabilized austenite that had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering, in contrast to conventional austempering, in which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during the stepped treatment.
Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design
NASA Astrophysics Data System (ADS)
Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa
2013-09-01
This article presents a control strategy for a three-phase grid-interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics injected into the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, achieving desirable performance with independent control of the active and reactive power injected into the grid. The proposed strategy also limits the startup transients; in addition, the optimized LCL filter exhibits lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the total harmonic distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment and the results are analyzed.
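A first step in any such LCL design is placing the filter's resonance frequency safely between the grid and switching frequencies. A minimal sketch with hypothetical component values (not those of the paper):

```python
import math

# Hypothetical LCL parameters (H, H, F) for illustration only.
L_inv, L_grid, C_f = 2.3e-3, 0.9e-3, 10e-6

# Resonance frequency of the LCL network:
# f_res = (1 / 2pi) * sqrt((L_inv + L_grid) / (L_inv * L_grid * C_f))
f_res = math.sqrt((L_inv + L_grid) / (L_inv * L_grid * C_f)) / (2 * math.pi)

# A common rule of thumb keeps f_res above ~10x the grid frequency and
# below half the switching frequency so harmonics are not amplified.
f_grid, f_sw = 50.0, 10e3
print(f"f_res = {f_res:.0f} Hz", 10 * f_grid < f_res < f_sw / 2)
```

With these example values the resonance lands near 2 kHz, inside the recommended band; the actual trade-off against losses and THD is what the paper optimizes.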
Design Optimization of Vena Cava Filters: An application to dual filtration devices
Singer, M A; Wang, S L; Diachin, D P
2009-12-03
Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.
NASA Astrophysics Data System (ADS)
Dougherty, Edward R.; Loce, Robert P.
1993-04-01
The hit-or-miss operator is used as the building block of optimal binary restoration filters. Filter design methodologies are given for general-, maximum-, and minimum-noise environments (the latter two producing optimal thinning and thickening filters, respectively), and for iterative filters. The approach is based on the expression of translation-invariant filters as unions of hit-or-miss transforms. Unions of hit-or-miss transforms are expressed as canonical logical sums of products, and the final hit-or-miss templates are obtained by logic reduction. The net effect is a morphological representation and estimation of the conditional expectation, which is the overall optimal mean-absolute-error filter.
Fast automatic estimation of the optimization step size for nonrigid image registration
NASA Astrophysics Data System (ADS)
Qiao, Y.; Lelieveldt, B. P. F.; Staring, M.
2014-03-01
Image registration is often used in the clinic, for example during radiotherapy and image-guided surgery, but also for general image analysis. Currently, this process is often very slow, yet for intra-operative procedures speed is crucial. For intensity-based image registration, a nonlinear optimization problem has to be solved, usually by (stochastic) gradient descent. This procedure relies on a proper setting of a parameter that controls the optimization step size. This parameter is difficult to choose manually, however, since it depends on the input data, the optimization metric and the transformation model. Previously, the Adaptive Stochastic Gradient Descent (ASGD) method was proposed to choose the step size automatically, but it comes at high computational cost. In this paper, we propose a new computationally efficient method to automatically determine the step size, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is then derived. Experiments have been performed on 3D lung CT data (19 patients) using a nonrigid B-spline transformation model. For all tested dissimilarity metrics (mean squared distance, normalized correlation, mutual information, normalized mutual information), we obtained accuracy similar to ASGD. Whereas the estimation time of ASGD increases progressively with the number of parameters, the estimation time of the proposed method is substantially reduced to an almost constant time, from 40 seconds to no more than 1 second when the number of parameters is 10^5.
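The idea of tying the step size to observed displacements can be illustrated on a toy problem. The sketch below caps the largest "voxel displacement" each gradient step may cause, with a slowly decaying budget; the quadratic metric and the displacement Jacobian are invented stand-ins for a real registration cost and B-spline transform, not the method of the paper:

```python
import numpy as np

# Displacement-capped step size selection (toy): scale each gradient
# step so the largest induced voxel displacement stays below a target
# delta, with a slow decay as in stochastic gradient schemes.
rng = np.random.default_rng(1)
J = rng.normal(size=(50, 8))            # d(voxel displacement)/d(parameters), invented
A = J.T @ J / 50 + 0.1 * np.eye(8)      # convex quadratic "metric" Hessian, invented
b = rng.normal(size=8)
theta = np.zeros(8)
delta0 = 0.5                            # max displacement per iteration (voxels)

for k in range(200):
    g = A @ theta - b                   # metric gradient
    disp = J @ g                        # displacement a unit step would cause
    step = (delta0 / (1 + k / 20)) / max(np.abs(disp).max(), 1e-12)
    theta -= step * g

print(np.linalg.norm(A @ theta - b))    # residual shrinks toward 0
```

The normalization makes the per-iteration motion interpretable in voxels regardless of the metric's scaling, which is the practical appeal of displacement-based step size rules.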
A thin-film bulk acoustic resonator and filter with optimal edge shapes for mass production.
Hara, Motoaki; Ueda, Masanori; Satoh, Yoshio
2013-01-01
The manufacturing conditions of a thin-film bulk acoustic resonator (FBAR) filter were investigated to obtain a high Q factor that is stable under mass production. The FBAR consists of patterned electrodes and piezoelectric films. In this study, the influence of the edge shape of the films on the anti-resonance characteristics was investigated using a numerical method. The optimized shape was applied to a 2.5-GHz band resonator and filter. As a result, significant improvement of the Q factor and the insertion loss was confirmed. PMID:22609327
Preparation and optimization of the laser thin film filter
NASA Astrophysics Data System (ADS)
Su, Jun-hong; Wang, Wei; Xu, Jun-qi; Cheng, Yao-jin; Wang, Tao
2014-08-01
A dual-band thin-film device for a laser-induced damage threshold test system is presented in this paper, enabling the tester to operate in both the 532 nm and 1064 nm bands. Using TFC simulation software, a coating with high reflectance, high transmittance, and resistance to laser damage is designed and optimized. The film is deposited by thermal evaporation; the optical properties of the coating and its laser-induced damage performance are then tested, and the reflectance, transmittance, and damage threshold are measured. The results show that the measured parameters (reflectance R >= 98% @ 532 nm, transmittance T >= 98% @ 1064 nm, laser-induced damage threshold LIDT >= 4.5 J/cm2) meet the design requirements, laying the foundation for a multifunctional laser-induced damage threshold tester.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
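For contrast with the thread decomposition, the conventional polyphase decomposition mentioned above can be sketched in a few lines: it computes only the needed outputs at the low rate, and matches the naive filter-then-downsample reference exactly. Taps and signal here are arbitrary examples, not the SMAP design:

```python
import numpy as np

# Minimal polyphase decimation-by-M FIR, checked against the naive
# "filter then keep every M-th sample" reference.
def fir(x, h):
    return np.convolve(x, h)[:len(x)]             # causal FIR, truncated

def polyphase_decimate(x, h, M):
    # Split taps into M phases h_p[k] = h[kM + p]; each phase filters
    # the correspondingly delayed, downsampled input.
    L = int(np.ceil(len(h) / M)) * M
    hp = np.pad(h, (0, L - len(h))).reshape(-1, M).T   # shape (M, L // M)
    y = np.zeros(len(x) // M)
    for p in range(M):
        xp = np.pad(x, (p, 0))[::M][:len(y)]      # x delayed by p, kept every M
        y += fir(xp, hp[p])
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=240)
h = np.hanning(24); h /= h.sum()                  # example lowpass taps
M = 4
ref = fir(x, h)[::M][:len(x) // M]                # naive reference
out = polyphase_decimate(x, h, M)
print(np.allclose(ref, out))
```

Thread decomposition instead partitions the work by output sample (one convolution "thread" per output), which the paper argues maps more naturally onto FPGA resources.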
Chuluunbaatar, Z.; Wang, C.; Kim, N. Y.
2014-01-01
This paper reports a compact bandpass filter with improved skirt selectivity using integrated passive device fabrication technology on a GaAs substrate. The structure of the filter consists of electromagnetically coupled meandered-line symmetric stepped-impedance resonators. The strength of the coupling between the resonators is enhanced by using a meandered-line stub-load inside the resonators to improve the selectivity and miniaturize the size of the filter. In addition, the center frequency of the filter can be flexibly controlled by varying the degree of capacitive coupling between resonator and stub-load. To verify the proposed concept, a prototype bandpass filter with a center frequency of 6.53 GHz was designed, fabricated, and measured, with a return loss and insertion loss of 39.1 dB and 1.63 dB, respectively. PMID:25386617
Geffrey K. Ottman; Heath F. Hofmann; George A. Lesieutre
2003-01-01
An optimized method of harvesting vibrational energy with a piezoelectric element using a step-down DC-DC converter is presented. In this configuration, the converter regulates the power flow from the piezoelectric element to the desired electronic load. Analysis of the converter in discontinuous current conduction mode results in an expression for the duty cycle-power relationship. Using parameters of the mechanical system,
Veneris, Andreas
to previous ATPG/simulation-based optimization methods. Andreas Veneris (Toronto, ON), Magdy S. Abadir (Austin, TX), Ibrahim N. Hajj (Urbana, IL)
Carlsson, Fredrik
2008-09-15
A method for generating a sequence of intensity-modulated radiation therapy step-and-shoot plans with increasing number of segments is presented. The objectives are to generate high-quality plans with few, large and regular segments, and to make the planning process more intuitive. The proposed method combines segment generation with direct step-and-shoot optimization, where leaf positions and segment weights are optimized simultaneously. The segment generation is based on a column generation approach. The method is evaluated on a test suite consisting of five head-and-neck cases and five prostate cases, planned for delivery with an Elekta SLi accelerator. The adjustment of segment shapes by direct step-and-shoot optimization improves the plan quality compared to using fixed segment shapes. The improvement in plan quality when adding segments is larger for plans with few segments. Eventually, adding more segments contributes very little to the plan quality, but increases the plan complexity. Thus, the method provides a tool for controlling the number of segments and, indirectly, the delivery time. This can support the planner in finding a sound trade-off between plan quality and treatment complexity.
Implicit application of polynomial filters in a k-step Arnoldi method
NASA Technical Reports Server (NTRS)
Sorensen, D. C.
1990-01-01
The Arnoldi process is a well known technique for approximating a few eigenvalues and corresponding eigenvectors of a general square matrix. Numerical difficulties such as loss of orthogonality and assessment of the numerical quality of the approximations as well as a potential for unbounded growth in storage have limited the applicability of the method. These issues are addressed by fixing the number of steps in the Arnoldi process at a prescribed value k and then treating the residual vector as a function of the initial Arnoldi vector. This starting vector is then updated through an iterative scheme that is designed to force convergence of the residual to zero. The iterative scheme is shown to be a truncation of the standard implicitly shifted QR-iteration for dense problems and it avoids the need to explicitly restart the Arnoldi sequence. The main emphasis of this paper is on the derivation and analysis of this scheme. However, there are obvious ways to exploit parallelism through the matrix-vector operations that comprise the majority of the work in the algorithm. Preliminary computational results are given for a few problems on some parallel and vector computers.
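The k-step factorization that the implicit restarting scheme repeatedly compresses is the plain Arnoldi process, A V_k = V_k H_k + f_k e_k^T. A minimal sketch (dense matrices, no restarting):

```python
import numpy as np

# A plain k-step Arnoldi factorization: V has orthonormal columns,
# H is upper Hessenberg, and A V[:, :k] == V @ H holds exactly.
def arnoldi(A, v0, k):
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, k = 50, 10
A = rng.normal(size=(n, n))
V, H = arnoldi(A, rng.normal(size=n), k)

# Ritz values (eigenvalues of the leading k x k block of H) approximate
# extremal eigenvalues of A; the factorization residual is ~ roundoff.
resid = np.linalg.norm(A @ V[:, :k] - V @ H)
print(resid)
```

Implicit restarting then applies shifted QR steps to H to filter the starting vector toward the wanted invariant subspace, avoiding explicit restarts of this recurrence.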
Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario
NASA Astrophysics Data System (ADS)
Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.
2009-12-01
Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.
Junghyun Kwon; Kyoung Mu Lee; Frank C. Park
2009-01-01
We propose a geometric method for visual tracking, in which the 2-D affine motion of a given object template is estimated in a video sequence by means of coordinate-invariant particle filtering on the 2-D affine group Aff(2). Tracking performance is further enhanced through a geometrically defined optimal importance function, obtained explicitly via Taylor expansion of a principal component analysis based
Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination
NASA Technical Reports Server (NTRS)
Downie, John D.
1992-01-01
Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
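The starting point for such designs, the BPOF itself, is easy to sketch: binarize the phase of the reference's conjugate spectrum and correlate. The scene and reference below are invented; the BPAF generalization described in the abstract would additionally apply a binary amplitude mask set by the analyzer angle:

```python
import numpy as np

# Binary phase-only filter (BPOF) correlation: keep only the sign of
# the real part of the reference's conjugate spectrum.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))                         # toy reference image
scene = np.zeros((64, 64))
scene[10:42, 5:37] = ref                           # reference embedded at (10, 5)

F = np.fft.fft2(ref, s=scene.shape)                # zero-padded reference spectrum
bpof = np.sign(np.real(np.conj(F)))                # binary {-1, +1} phase filter
corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * bpof))

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)                                        # correlation peak at the offset
```

Despite discarding all magnitude information, the binarized filter still produces a sharp correlation peak at the embedding location, which is why BPOFs map well onto binary spatial light modulators.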
Experimental study on optimization of the working conditions of excited state Faraday filter
NASA Astrophysics Data System (ADS)
Zhang, Liang; Tang, Junxiong
1998-07-01
In this paper the existence of an optimal frequency detuning in the pumping process of the excited state Faraday anomalous dispersion optical filter (ESFADOF, also referred to as active FADOF) is reported. We measured this detuning and its variation versus cell temperature. Moreover, the dependence of the ESFADOF transmission on the cell temperature and pumping power was also studied experimentally. On the basis of these results, the transmission of the rubidium 775.9 nm ESFADOF was raised by more than two orders of magnitude under optimized working conditions.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1998-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1999-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.
Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy.
Wang, Ke; Qiu, Ping
2015-05-01
Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration. PMID:25950644
Optimization of a blanching step to maximize sulforaphane synthesis in broccoli florets.
Pérez, Carmen; Barrientos, Herna; Román, Juan; Mahn, Andrea
2014-02-15
A blanching step was designed to favor sulforaphane synthesis in broccoli. Blanching was optimised through a central composite design, and the effects of temperature (50-70 °C) and immersion time in water (5-15 min) on the content of total glucosinolates, glucoraphanin, sulforaphane, and myrosinase activity were determined. Results were analysed by ANOVA and the optimal condition was determined through response surface methodology. Temperatures between 50 and 60 °C significantly increased sulforaphane content (p<0.05), whilst blanching at 70 and 74 °C significantly diminished this content compared to fresh broccoli. The optimal blanching condition given by the statistical model was immersion in water at 57 °C for 13 min, coinciding with the minimum glucosinolate and glucoraphanin content and with the maximum myrosinase activity. Under the optimal conditions, the predicted response of 4.0 μmol sulforaphane/g dry matter was confirmed experimentally. This value represents a 237% increase with respect to the fresh vegetable. PMID:24128476
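The response-surface step can be sketched generically: fit a second-order model by least squares and solve for its stationary point. The synthetic data below are invented to peak near the reported optimum (57 °C, 13 min); they are not the paper's measurements:

```python
import numpy as np

# Toy response-surface fit: y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t,
# then solve grad y = 0 for the stationary point.
rng = np.random.default_rng(0)
T = rng.uniform(50, 74, 40)                        # blanching temperature, C
t = rng.uniform(5, 15, 40)                         # immersion time, min
# Invented response peaking at (57, 13) with small noise:
y = 4.0 - 0.01 * (T - 57) ** 2 - 0.05 * (t - 13) ** 2 + rng.normal(0, 0.05, 40)

X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: [[2*b3, b5], [b5, 2*b4]] @ [T*, t*] = [-b1, -b2]
M = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
Topt, topt = np.linalg.solve(M, [-b[1], -b[2]])
print(round(Topt, 1), round(topt, 1))              # near (57, 13)
```

Checking that the fitted quadratic is concave (negative-definite M) confirms the stationary point is a maximum rather than a saddle, a step the ANOVA-based analysis also implies.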
Liu, Yang
2012-07-16
amounts of skews. The implementation of bandpass filtering on the forwarded-clock path is able to control the JTB through the control of Q. This work introduces a method using bandpass filtering to optimize the JTB in high-speed forwarded...
Condat, Laurent
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. X, NO. XX, 2010. A New Color Filter Array with Optimal ... of the features provided in digital cameras. The heart of a digital still or video camera is its sensor, a 2-D
NASA Astrophysics Data System (ADS)
Zhang, De-Jia
2009-07-01
With the fast development of the Internet, many systems have emerged to support product recommendation in e-commerce applications. Collaborative filtering is one of the most promising techniques in recommender systems, providing personalized recommendations to users based on their previously expressed preferences, in the form of ratings, and those of other similar users. In practice, as the numbers of users and items grow, the user-item rating matrix becomes extremely sparse, and recommender systems using traditional collaborative filtering face serious challenges. To address this issue, this paper presents an approach that computes item genre similarity by mapping each item to a corresponding descriptive genre and computing similarity between genres, then makes basic predictions according to those similarities to lower the sparsity of the user-item ratings. After that, item-based collaborative filtering steps are taken to generate predictions. Compared with previous methods, the presented approach, which employs item genre similarity, can alleviate the sparsity issue in recommender systems and improve recommendation accuracy.
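A minimal sketch of the two-stage idea, with invented ratings and genre labels: genre similarity fills the missing cells first, then ordinary item-based collaborative filtering runs on the densified matrix. This is an illustrative reading of the approach, not the paper's exact formulas:

```python
import numpy as np

R = np.array([                  # users x items, 0 = unrated (invented)
    [5, 0, 4, 0],
    [4, 5, 0, 1],
    [0, 4, 5, 0],
], dtype=float)
genres = np.array([             # items x genre indicators (invented)
    [1, 0], [1, 0], [1, 1], [0, 1],
], dtype=float)

def cosine(M):
    norm = np.linalg.norm(M, axis=1, keepdims=True)
    norm[norm == 0] = 1.0
    return (M / norm) @ (M / norm).T

G = cosine(genres)              # item-item genre similarity

# Stage 1: basic predictions from genre similarity for unrated cells.
dense = R.copy()
for u, i in zip(*np.where(R == 0)):
    w = G[i] * (R[u] > 0)       # weight the user's rated items by genre similarity
    if w.sum() > 0:
        dense[u, i] = (w @ R[u]) / w.sum()

# Stage 2: item-based collaborative filtering on the densified matrix.
S = cosine(dense.T)             # item-item rating similarity
pred = (dense @ S) / np.abs(S).sum(axis=0)
print(np.round(pred, 2))
```

Because stage 1 removes the zeros, the item-item similarities of stage 2 are computed over fully populated rating vectors, which is exactly how the genre step is meant to mitigate sparsity.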
Miller, Travis Reed
2010-01-01
This work aimed to inform the design of ceramic pot filters to be manufactured by the organization Pure Home Water (PHW) in Northern Ghana, and to model the flow through an innovative paraboloid-shaped ceramic pot filter. ...
Yang, Hongshun; Wang, Yifen; Jiang, Mingkang; Oh, Jun-Hyun; Herring, Josh; Zhou, Peng
2007-05-01
To optimize the extraction of gelatin from channel catfish (Ictalurus punctatus) skin, a 2-step response surface methodology involving a central composite design was adopted for the extraction process. After screening experiments, concentration of NaOH, alkaline pretreatment time, concentration of acetic acid, and extraction temperature were selected as the independent variables. In the 1st step of the optimization the dependent variables were protein yield (YP), gel strength (GS), and viscosity (V). Seven sets of optimized conditions were selected from the 1st step for the 2nd-step screen. Texture profile analysis and the 3 dependent variables from the 1st step were used as responses in the 2nd-step optimization. After the 2nd-step optimization, the most suitable conditions were 0.20 M NaOH pretreatment for 84 min, followed by a 0.115 M acetic acid extraction at 55 degrees C. The optimal values obtained from these conditions were YP = 19.2%, GS = 252 g, and V = 3.23 cP. The gelatin obtained also showed relatively good hardness, cohesiveness, springiness, and chewiness. The yield of protein and viscosity can be predicted by a quadratic and a linear model, respectively. PMID:17995759
Cao Daliang; Earl, Matthew A.; Luan, Shuang; Shepard, David M.
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
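The CIMO-style inner loop can be sketched as simulated annealing over both segment shapes and weights against a target intensity map. Everything below (1-D geometry, target, schedule) is invented for illustration; a real sequencer adds multileaf collimator constraints:

```python
import numpy as np

# Toy analogue of the CIMO idea: perturb aperture shapes (1-D leaf
# intervals) and weights simultaneously with simulated annealing to
# minimize the squared difference to a target intensity profile.
rng = np.random.default_rng(0)
n_bins, n_seg = 20, 3
target = np.exp(-((np.arange(n_bins) - 10) ** 2) / 18.0)   # desired map (invented)

def render(segs):
    m = np.zeros(n_bins)
    for lo, hi, w in segs:            # segment = [left leaf, right leaf, weight]
        m[int(lo):int(hi)] += w
    return m

def sq_err(segs):
    return float(np.sum((render(segs) - target) ** 2))

segs = [[2.0, 18.0, 0.3], [5.0, 15.0, 0.3], [8.0, 12.0, 0.3]]
err = best_err = sq_err(segs)
for step in range(5000):
    temp = 1.0 * 0.999 ** step                             # cooling schedule
    cand = [s[:] for s in segs]
    i, j = rng.integers(n_seg), rng.integers(3)
    cand[i][j] += rng.normal(0, 1.0 if j < 2 else 0.05)    # move an edge or weight
    cand[i][0] = float(np.clip(cand[i][0], 0, cand[i][1]))
    cand[i][1] = float(np.clip(cand[i][1], cand[i][0], n_bins))
    cand[i][2] = max(cand[i][2], 0.0)
    e = sq_err(cand)
    if e < err or rng.random() < np.exp((err - e) / max(temp, 1e-9)):
        segs, err = cand, e
        best_err = min(best_err, err)
print(round(best_err, 3))
```

Because shapes and weights are perturbed in the same move set, the annealer is not rule-based and needs no clustering of the intensity map into discrete levels, mirroring the two distinguishing features listed in the abstract.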
Suleimanov, Yury V
2015-01-01
We present a simple protocol that allows fully automated discovery of elementary chemical reaction steps by using single- and double-ended transition-state optimization algorithms in tandem - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm detected, without any human intervention, not only "known" reaction pathways manually identified in previous studies, but also new, previously "unknown" reaction pathways that involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.
Grondin, V.; Roskey, M.; Klinger, K.; Shuber, T.
1994-09-01
The PCR technique is one of the most powerful tools in modern molecular genetics and has achieved widespread use in the analysis of genetic diseases. Typically, a region of interest is amplified from genomic DNA or cDNA and examined by various methods of analysis for mutations or polymorphisms. For small genes and transcripts, amplification of a single, small region of DNA is sufficient for analysis. However, when analyzing large genes and transcripts, multiple PCRs may be required to identify the specific mutation or polymorphism of interest. Since it was shown that PCR can simultaneously amplify multiple loci in the human dystrophin gene, multiplex PCR has been established as a general technique. The properties of multiplex PCR make it a useful tool and preferable to simultaneous uniplex PCR in many instances. However, the steps for developing a multiplex PCR can be laborious, with significant difficulty in achieving equimolar amounts of several different amplicons. We have developed a simple method of primer design that has enabled us to eliminate a number of the standard optimization steps required in developing a multiplex PCR. Sequence-specific oligonucleotide pairs were synthesized for the simultaneous amplification of multiple exons within the CFTR gene. A common non-complementary 20-nucleotide sequence was attached to each primer, thus creating a mixture of primer pairs all containing a universal primer sequence. Multiplex PCR reactions were carried out containing target DNA, a mixture of several chimeric primer pairs, and primers complementary to only the universal portion of the chimeric primers. Following optimization of conditions for the universal primer, limited optimization was needed for successful multiplex PCR. In contrast, significant optimization of the PCR conditions was needed when pairs of sequence-specific primers were used together without the universal sequence.
Hybrid Control Method for Optimal Transient Response and Output Filter Minimization for Buck Converters
Direct energy transfer converters (buck or forward) admit a time-optimal response; for indirect energy transfer systems, such as boost, buck-boost, or flyback converters, this is not the case.
NASA Technical Reports Server (NTRS)
U-Yen, Kongpop; Wollack, Ed; Papapolymerou, John; Laskar, Joy
2005-01-01
We propose an ultra-compact single-layer spurious-suppression band-pass filter design with the following benefits: 1) the effective coupling area can be increased with no fabrication limitation and no effect on the spurious response; 2) two fundamental poles are introduced to suppress spurs; 3) the filter can be designed with up to 30% bandwidth; 4) the filter length is reduced by at least 100% when compared to the conventional filter; 5) spurious modes are suppressed up to seven times the fundamental frequency; and 6) it uses only one layer of metallization, which minimizes the fabrication cost.
Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf
2015-06-01
Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processing unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite and infinite impulse response, moving mean, and moving median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed on autoregressively modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low SBR situations. Using a signal segment of 64 mega samples that is filtered as an entire unit, wavelet filtering of a seven-day high-resolution ECG is computed within less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, greatly reducing the computational burden. PMID:25675449
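The moving median filter compared above can be sketched in a few lines: the baseline is estimated as a running median (robust to sharp QRS-like deflections) and subtracted. The synthetic drift, spike train, and window length below are arbitrary illustrative choices, not the paper's parameters.

```python
import numpy as np

def remove_baseline_moving_median(ecg, window):
    """Estimate baseline wander as a running median and subtract it.

    window is the filter length in samples (odd); edge samples are
    handled by edge-replication padding.
    """
    half = window // 2
    padded = np.pad(ecg, half, mode="edge")
    baseline = np.array([np.median(padded[i:i + window])
                         for i in range(len(ecg))])
    return ecg - baseline, baseline

# Synthetic example: slow sinusoidal drift plus a sparse spike train
t = np.linspace(0, 10, 2000)
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)      # baseline wander
spikes = np.zeros_like(t)
spikes[::200] = 1.0                             # sharp "QRS-like" peaks
signal = drift + spikes
cleaned, est = remove_baseline_moving_median(signal, window=201)
```

Because the median ignores the sparse spikes, the drift estimate tracks the true baseline in the interior of the record while the spikes survive subtraction almost unchanged.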
Kim, Nam-Young
2013-01-01
This paper presents a symmetric-type microstrip triple-band bandstop filter incorporating a tri-section meandered-line stepped impedance resonator (SIR). The length of each section of the meandered line is 0.16, 0.15, and 0.83 times the guided wavelength (λg), so that the filter features three stop bands at 2.59 GHz, 6.88 GHz, and 10.67 GHz, respectively. Two symmetric SIRs are employed with a microstrip transmission line to obtain wide bandwidths of 1.12, 1.34, and 0.89 GHz at the corresponding stop bands. Furthermore, an equivalent circuit model of the proposed filter is developed, and the model matches the electromagnetic simulations well. The return losses of the fabricated filter are measured to be -29.90 dB, -28.29 dB, and -26.66 dB while the insertion losses are 0.40 dB, 0.90 dB, and 1.10 dB at the respective stop bands. A drastic reduction in the size of the filter was achieved by using a simplified architecture based on a meandered-line SIR. PMID:24319367
One step memory of group reputation is optimal to promote cooperation in public goods games
NASA Astrophysics Data System (ADS)
Li, Aming; Wu, Te; Cong, Rui; Wang, Long
2013-08-01
Individuals' changes of social ties have been observed to promote cooperation under specific mechanisms, such as success-driven or expectation-driven migration. However, there is no clear criterion, drawn from players' instinctive memory or experience, for them to consult when they would like to change their social ties. For the first time, we define the reputation of a group based on individuals' memory. A model is proposed in which all players are endowed with the capacity to adjust the interaction ambience they are involved in if the reputation of their environment fails to satisfy their expectations. Simulation results show that cooperation decays as players' memory depth increases, and that one-step memory is optimal to promote cooperation, which provides a potential interpretation for the fact that most species memorize their reciprocators over very short time scales. Intriguingly, cooperation can be improved greatly at an optimal interval of moderate expectation. Moreover, cooperation can be established and stabilized within a wide range of model parameters, even when players choose their new partners randomly, under the combination of reputation and group-switching mechanisms. Our work validates that individuals' short memory or experience within a multi-player group acts as an effective ingredient to boost cooperation.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life-usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computational burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi
2013-09-01
A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis. PMID:23834855
The optimal filters for the construction of the ensemble pulsar time
Alexander E. Rodin
2008-07-08
The algorithm of the ensemble pulsar time based on the optimal Wiener filtration method has been constructed. This algorithm allows the separation of the contributions of the atomic clock and the pulsar itself to the post-fit pulsar timing residuals. Filters were designed with the use of the cross- and autocovariance functions of the timing residuals. The method has been applied to the timing data of millisecond pulsars PSR B1855+09 and PSR B1937+21 and allowed the filtering out of the atomic-scale component from the pulsar data. Direct comparison of the terrestrial time TT(BIPM06) and the ensemble pulsar time PT_ens revealed that the fractional instability of TT(BIPM06)-PT_ens is equal to σ_z = (0.8 ± 1.9)×10^-15. Based on the σ_z statistics of TT(BIPM06)-PT_ens, a new limit on the energy density of the gravitational wave background was calculated to be Ω_g h² ~ 3×10^-9.
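The Wiener filtration principle used here can be sketched generically: each frequency bin is weighted by the ratio of signal power to total (signal plus noise) power, passing the slowly varying component and suppressing the white one. The power spectral densities below are invented stand-ins, not the cross- and autocovariance estimates the authors derive from timing residuals.

```python
import numpy as np

def wiener_weights(signal_psd, noise_psd):
    """Optimal Wiener gain per frequency bin: S / (S + N).

    signal_psd and noise_psd are power spectral densities sampled on
    the same frequency grid.
    """
    return signal_psd / (signal_psd + noise_psd)

n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)
# "Clock-like" red, low-frequency component vs white "pulsar" noise
signal_psd = 1.0 / (1.0 + (freqs / 0.01) ** 2)
noise_psd = np.full_like(signal_psd, 0.1)
w = wiener_weights(signal_psd, noise_psd)
```

The resulting gain stays near 1 at low frequencies where the red component dominates and falls toward 0 at high frequencies, which is the separation behavior exploited to extract the atomic-scale component.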
Application of digital tomosynthesis (DTS) using optimal deblurring filters for dental X-ray imaging
NASA Astrophysics Data System (ADS)
Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.
2012-04-01
Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employed a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation studies and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained using the algorithm; these results are promising for our ongoing applications to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.
An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter
NASA Astrophysics Data System (ADS)
Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning
2015-08-01
An optimized strain demodulation method based on a piezoelectric transducer (PZT)-driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the test values after compensation and the theoretical values is less than 1 pm/με.
NASA Astrophysics Data System (ADS)
Takeda, Yasuhiko; Iizuka, Hideo; Ito, Tadashi; Mizuno, Shintaro; Hasegawa, Kazuo; Ichikawa, Tadashi; Ito, Hiroshi; Kajino, Tsutomu; Higuchi, Kazuo; Ichiki, Akihisa; Motohiro, Tomoyoshi
2015-08-01
We have theoretically investigated photovoltaic cells used under the illumination condition of monochromatic light incident from a particular direction, which is very different from that for solar cells under natural sunlight, using detailed balance modeling. A multilayer bandpass filter formed on the surface of the cell has been found to trap the light generated by radiative recombination inside the cell, reduce emission from the cell, and consequently improve conversion efficiency. The light trapping mechanism is interpreted in terms of a one-dimensional photonic crystal, and the design guide to optimize the multilayer structure has been clarified. For obliquely incident illumination, as well as normal incidence, a significant light trapping effect has been achieved, although the emission patterns are extremely different from each other depending on the incident directions.
Keresnyei, Róbert; Megyeri, Péter; Zidarics, Zoltán; Hejjel, László
2015-01-01
The availability of microcomputer-based portable devices facilitates high-volume multichannel biosignal acquisition and the analysis of their instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers or artificially synthesised series was passed through 100-80-60-40-20 Hz 2nd-4th-6th-8th order Bessel and Butterworth filters digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical model of the impulse and step responses of the filters. The investigated filters have as diverse a signal delay as 2-46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. Its magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. In conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low-pass filtering are recommended for biosignal acquisition intended for inter-signal phase-shift analysis. PMID:25514627
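The kind of step-response delay measured in this study can be reproduced in outline with SciPy's bilinear-transform filter design. The 1 kHz sampling rate and single 40 Hz corner below are illustrative assumptions, and the 50%-of-final-value crossing is just one simple convention for defining the delay.

```python
import numpy as np
from scipy.signal import bessel, butter, lfilter

fs = 1000.0  # assumed sampling rate, Hz

def step_delay_ms(b, a, fs, n=2000):
    """Time for the filter's step response to reach 50% of its final value."""
    step = np.ones(n)
    y = lfilter(b, a, step)
    idx = np.argmax(y >= 0.5 * y[-1])   # first sample crossing 50%
    return 1000.0 * idx / fs

# 4th-order low-pass filters with a 40 Hz corner, bilinear-transform design
b_bes, a_bes = bessel(4, 40.0, fs=fs, norm="mag")
b_but, a_but = butter(4, 40.0, fs=fs)
d_bes = step_delay_ms(b_bes, a_bes, fs)
d_but = step_delay_ms(b_but, a_but, fs)
```

Sweeping the order and corner frequency of both designs in this fashion reproduces the study's observation that the delay varies over tens of milliseconds with the filter parameters.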
Three-dimensional optical analyses and optimizations of a vertical alignment color-filters-embedded liquid crystal display
Moon, Un-Ku
Continuous-Time Filter Design Optimized for Reduced Die Area
IEEE Transactions on Circuits and Systems II: Express Briefs, Vol. 51, No. 3, March 2004
A method for distributing capacitor and resistor area to optimally reduce die area in a given continuous-time filter design.
Hongbing Zhou; Bing Zeng; Y. Neuvo
1991-01-01
A recently developed approach to synthesizing optimal stack filters under the mean absolute error (MAE) criterion (see B. Zeng et al., 1991) is extended to FIR (finite impulse response) stack hybrid (FSH) filters, where FIR filters are assumed to be the average filter. This approach is demonstrated through synthesizing a group of optimal FSH filters in the minimum MAE sense
NASA Astrophysics Data System (ADS)
Hendrix, Charles D.; Vijaya Kumar, B. V. K.
1994-06-01
Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.
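The three-level quantization itself (not the CAPM optimization described above) can be sketched as follows: values inside the region of support keep only their sign when their magnitude clears a threshold, and everything else transmits nothing. The threshold, random filter values, and ROS geometry are invented for illustration.

```python
import numpy as np

def three_level(filter_coeffs, support_mask, threshold):
    """Quantize real correlation-filter values to the levels {-1, 0, +1}.

    Coefficients inside the region of support (ROS) whose magnitude
    exceeds `threshold` keep their sign; all others are set to 0.
    """
    h = np.zeros_like(filter_coeffs)
    keep = support_mask & (np.abs(filter_coeffs) > threshold)
    h[keep] = np.sign(filter_coeffs[keep])
    return h

rng = np.random.default_rng(1)
f = rng.normal(size=(8, 8))            # stand-in real-valued filter
ros = np.zeros((8, 8), dtype=bool)
ros[2:6, 2:6] = True                   # only the central region may transmit
h = three_level(f, ros, threshold=0.5)
```

The zero level outside the ROS is what gives the designer the extra control over noise tolerance and peak sharpness mentioned in the abstract.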
Maj, Jean-Baptiste; Royackers, Liesbeth; Moonen, Marc; Wouters, Jan
2005-09-01
In this paper, the first real-time implementation and perceptual evaluation of a singular value decomposition (SVD)-based optimal filtering technique for noise reduction in a dual-microphone behind-the-ear (BTE) hearing aid is presented. This evaluation was carried out for speech-weighted noise and multitalker babble, for single and multiple jammer sound-source scenarios. Two basic microphone configurations in the hearing aid were used. The SVD-based optimal filtering technique was compared against an adaptive beamformer, which is known to give significant improvements in speech intelligibility in noisy environments. The optimal filtering technique works without assumptions about the speaker position, unlike the two-stage adaptive beamformer; however, this strategy needs a robust voice activity detector (VAD). A method to improve the performance of the VAD was presented and evaluated physically. By connecting the VAD to the output of the noise reduction algorithms, a good discrimination between the speech-and-noise periods and the noise-only periods of the signals was obtained. The perceptual experiments demonstrated that the SVD-based optimal filtering technique could perform as well as the adaptive beamformer in a single noise source scenario, i.e., the ideal scenario for the latter technique, and could outperform the adaptive beamformer in multiple noise source scenarios. PMID:16189969
Trumpf, Jochen
Second-Order-Optimal Filters on Lie Groups
Alessandro Saccon; Jochen Trumpf; Robert Mahony
A filter design approach for state estimation of systems on general Lie groups with disturbed measurements of inputs and outputs, specialized to the orthogonal group SO(3) with the symmetric Cartan-Schouten (0)-connection.
Rathinam, Muruhan
Synthetic Approach to Optimal Filtering
James Ting-Ho Lo
IEEE Transactions on Neural Networks, Vol. 5, No. 5, September 1994
As opposed to the analytic approach used…
Morsch, G; Maywald, F; Wanner, C
1995-02-01
Heparin-induced extracorporeal LDL-precipitation has successfully been used in lowering high serum cholesterol in patients with coronary heart disease due to familial hypercholesterolemia. In the continuous search to improve treatment facilities we attempted to increase the filtration capacity of low density lipoprotein by the precipitate filter of the H.E.L.P.-system. Therefore, the nominal filtration area was varied by decreasing the pleat count as well as the configuration of the precipitation filter membrane. The device was tested in vitro using fresh frozen plasma from healthy donors as well as in vivo treating 6 patients suffering from familial hypercholesterolemia and 18 patients with the nephrotic syndrome. The data show that the drop in pressure on the filtration side depended on the filtration volume and the filter configuration. Reducing the pleat number and in parallel the nominal filtration area by replacing the standard support on the outside of the membrane by a double or triple thick support results in an increase in plasma volume which can be treated. The optimal results were obtained using filters with a nominal filtration area of 0.8 +/- 0.1 m2 and a triple thick support on the retention side compared to the standard configuration. Therefore, even patients with extreme plasma viscosity such as patients with the nephrotic syndrome can effectively be treated by heparin-induced LDL-precipitation. PMID:7766149
NASA Technical Reports Server (NTRS)
Stewart, Elwood C.
1961-01-01
The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.
Filtering of Defects in Semipolar (11-22) GaN Using 2-Steps Lateral Epitaxial Overgrowth.
Kriouche, N; Leroux, M; Vennéguès, P; Nemoz, M; Nataf, G; de Mierry, P
2010-01-01
Good-quality (11-22) semipolar GaN sample was obtained using epitaxial lateral overgrowth. The growth conditions were chosen to enhance the growth rate along the [0001] inclined direction. Thus, the coalescence boundaries stop the propagation of basal stacking faults. The faults filtering and the improvement of the crystalline quality were attested by transmission electron microscopy and low temperature photoluminescence. The temperature dependence of the luminescence polarization under normal incidence was also studied. PMID:21170140
Pauli Kuosmanen; Jaakko Astola
1996-01-01
Three new concepts — breakdown points, breakdown probabilities, and midpoint sensitivity curves — for stack filter analysis are introduced and analyzed in this paper. Breakdown points and probabilities can be used as measures of the robustness of stack filters. Midpoint sensitivity curves in turn give information on how sensitive the output of a stack filter is to the changes of
Particle Swarm Optimization aided unscented kalman filter for ballistic target tracking
Ravi Kumar Jatoth; D. N. Rao; K. S. Kumar
2010-01-01
Tracking a ballistic target in its reentry phase from radar measurements is a highly complex problem in nonlinear filtering. The Kalman filter (KF) is used to estimate the positions of the target when the measurements are corrupted with noise. If the measurement model (range and bearing) is nonlinear, the unscented Kalman filter (UKF) can be used. For obtaining reliable
Optimization by decomposition: A step from hierarchic to non-hierarchic systems
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing
2014-12-01
Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
Optimal ensemble size of ensemble Kalman filter in sequential soil moisture data assimilation
NASA Astrophysics Data System (ADS)
Yin, Jifu; Zhan, Xiwu; Zheng, Youfei; Hain, Christopher R.; Liu, Jicheng; Fang, Li
2015-08-01
The ensemble Kalman filter (EnKF) has been extensively applied in sequential soil moisture data assimilation to improve land surface model performance and, in turn, weather forecast capability. Usually, the ensemble size of the EnKF is determined with limited sensitivity experiments, so the optimal ensemble size may never have been reached. In this work, based on a series of mathematical derivations, we demonstrate that the maximum efficiency of the EnKF for assimilating observations into the models can be reached when the ensemble size is set to 12. Simulation experiments are designed in this study under ensemble sizes 2, 5, 12, 30, 50, 100, and 300 to support the mathematical derivations. All the simulations are conducted from 1 June to 30 September 2012 over the southeast USA (90°W, 30°N to 80°W, 40°N) at 25 km resolution. We found that the simulations are perfectly consistent with the mathematical derivation. This optimal ensemble size may have theoretical implications for the implementation of the EnKF in other sequential data assimilation problems.
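A single perturbed-observations EnKF analysis step, here with the 12-member ensemble size the derivation identifies as optimal, can be sketched as follows. The state dimension, observation operator, and error variances are illustrative, not the study's soil-moisture configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """One scalar-observation EnKF analysis step (perturbed-observations form).

    ensemble: (n_members, n_state) prior ensemble.
    H: (n_state,) linear observation operator.
    """
    rng = np.random.default_rng(42)
    n, _ = ensemble.shape
    hx = ensemble @ H                     # predicted observations
    a = ensemble - ensemble.mean(axis=0)  # state anomalies
    hx_a = hx - hx.mean()                 # observation-space anomalies
    pxy = a.T @ hx_a / (n - 1)            # state-observation covariance
    pyy = hx_a @ hx_a / (n - 1) + obs_err_var
    gain = pxy / pyy                      # Kalman gain (vector)
    perturbed = obs + rng.normal(0, np.sqrt(obs_err_var), size=n)
    return ensemble + np.outer(perturbed - hx, gain)

# 12-member prior ensemble over a 4-component toy state (e.g. layer moisture)
ens = np.random.default_rng(0).normal(0.3, 0.05, size=(12, 4))
H = np.array([1.0, 0.0, 0.0, 0.0])        # observe the first component only
analysis = enkf_update(ens, obs=0.35, obs_err_var=0.01 ** 2, H=H)
```

Because the observation is far more precise than the prior spread, the analysis mean of the observed component moves close to the observation, which is the basic mechanism whose efficiency the ensemble-size derivation analyzes.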
Matched filter optimization of kSZ measurements with a reconstructed cosmological flow field
Li, Ming; White, Simon D M; Jasche, Jens
2014-01-01
We develop and test a new statistical method to measure the kinematic Sunyaev-Zel'dovich (kSZ) effect. A sample of independently detected clusters is combined with the cosmic flow field predicted from a galaxy redshift survey in order to derive a matched filter that optimally weights the kSZ signal for the sample as a whole given the noise involved in the problem. We apply this formalism to realistic mock microwave skies based on cosmological N-body simulations, and demonstrate its robustness and performance. In particular, we carefully assess the various sources of uncertainty: CMB primary fluctuations, instrumental noise, uncertainties in the determination of the velocity field, and effects introduced by miscentering of clusters and by scatter in the mass-observable relations. We show that available data (Planck maps and the MaxBCG catalogue) should deliver a 7.7σ detection of the kSZ. A similar cluster catalogue with broader sky coverage should increase the detection significance to ~13σ…
Matched filter optimization of kSZ measurements with a reconstructed cosmological flow field
NASA Astrophysics Data System (ADS)
Li, Ming; Angulo, R. E.; White, S. D. M.; Jasche, J.
2014-09-01
We develop and test a new statistical method to measure the kinematic Sunyaev-Zel'dovich (kSZ) effect. A sample of independently detected clusters is combined with the cosmic flow field predicted from a galaxy redshift survey in order to derive a matched filter that optimally weights the kSZ signal for the sample as a whole given the noise involved in the problem. We apply this formalism to realistic mock microwave skies based on cosmological N-body simulations, and demonstrate its robustness and performance. In particular, we carefully assess the various sources of uncertainty, cosmic microwave background primary fluctuations, instrumental noise, uncertainties in the determination of the velocity field, and effects introduced by miscentring of clusters and by uncertainties of the mass-observable relation (normalization and scatter). We show that available data (Planck maps and the MaxBCG catalogue) should deliver a 7.7σ detection of the kSZ. A similar cluster catalogue with broader sky coverage should increase the detection significance to ∼13σ. We point out that such measurements could be binned in order to study the properties of the cosmic gas and velocity fields, or combined into a single measurement to constrain cosmological parameters or deviations of the law of gravity from General Relativity.
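The optimal weighting described in the abstract follows the standard minimum-variance matched-filter form; the sketch below is a generic estimator of this kind with a toy template and noise covariance, not the authors' full kSZ pipeline:

```python
import numpy as np

def matched_filter_weights(template, noise_cov):
    """Minimum-variance unbiased weights for estimating an amplitude a
    from data d = a * template + noise:
    w = C^-1 s / (s^T C^-1 s), so that E[w @ d] = a."""
    ci_s = np.linalg.solve(noise_cov, template)
    return ci_s / (template @ ci_s)

# Toy example: constant template in white noise of variance 2.
s = np.ones(4)
C = 2.0 * np.eye(4)
w = matched_filter_weights(s, C)
a_hat = w @ (3.0 * s)        # noise-free data with true amplitude 3
```

When `C` is non-diagonal the same formula automatically downweights noisy, correlated directions, which is the sense in which the filter is "optimal given the noise involved in the problem."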
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
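The dimensionality reduction in the abstract can be sketched with a plain truncated SVD: given a hypothetical influence matrix mapping many health parameters to a few outputs, keep only the dominant right-singular directions as the tuning-vector basis. This is a generic stand-in for, not a reproduction of, the paper's optimal orthogonal decomposition:

```python
import numpy as np

def tuning_directions(G, k):
    """Select k tuning directions capturing the dominant combined
    effect of many health parameters via truncated SVD.
    G has shape (n_outputs, n_health_params)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:k].T          # (n_health_params, k) basis

# Two outputs, three health parameters; the first two parameters act
# identically on output 0, the third barely affects output 1.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1e-3]])
V = tuning_directions(G, 1)
# The single retained direction lies along [1, 1, 0]/sqrt(2).
```

Estimating the low-dimensional coefficients of `V` with a Kalman filter then approximates the net effect of the full health-parameter set in a least-squares sense.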
An optimal search filter for retrieving systematic reviews and meta-analyses
2012-01-01
Background Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. However, search filters have been developed to assist in searching the extensive amount of published literature indexed. Search filters can be designed to find literature related to a certain subject (i.e. content-specific filter) or particular study designs (i.e. methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance to other available systematic review filters. Methods This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate performance on sensitivity, specificity, precision and the number needed to read for each filter. Results Nineteen of 31 search filters were effective in retrieving a high level of relevant articles (sensitivity scores greater than 85%). The majority achieved a high degree of sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter in comparison to the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. The filter that narrows the focus saves considerable time and resources during updates of this online resource, without sacrificing sensitivity. PMID:22512835
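The four performance measures reported in the abstract are simple functions of the retrieval confusion matrix; the counts below are invented for illustration only:

```python
def filter_metrics(tp, fp, fn, tn):
    """Standard search-filter performance measures: sensitivity,
    specificity, precision, and number needed to read (NNR)."""
    sensitivity = tp / (tp + fn)        # fraction of relevant articles retrieved
    specificity = tn / (tn + fp)        # fraction of irrelevant articles excluded
    precision = tp / (tp + fp)          # fraction of retrieved articles relevant
    nnr = 1.0 / precision               # articles screened per relevant find
    return sensitivity, specificity, precision, nnr

# e.g. a hypothetical filter retrieving 190 of 219 relevant articles
# while also pulling in 810 irrelevant ones
sens, spec, prec, nnr = filter_metrics(tp=190, fp=810, fn=29, tn=9000)
```

The abstract's trade-off is visible here: a high-sensitivity filter can still have low precision, and NNR quantifies the screening burden that precision saves.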
Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi
2011-01-01
We have developed a hydrodynamic levitation centrifugal blood pump with a semi-open impeller for mechanical circulatory assist. The impeller levitates on original hydrodynamic bearings without any complicated control or sensors. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of this study is to investigate the geometric configuration of the hydrodynamic step bearing that minimizes hemolysis by widening the bearing gap. First, we performed a numerical analysis of the step bearing based on the Reynolds equation and measured the actual hydrodynamic force of the step bearing. Second, bearing gap measurement tests and hemolysis tests were performed on blood pumps whose step lengths were 0%, 33%, and 67% of the vane length, respectively. In the numerical analysis, the hydrodynamic force was largest when the step length was around 70% of the vane length. In the evaluation tests, the blood pump with the 67% step obtained the maximum bearing gap and reduced hemolysis compared with the 0% and 33% pumps. We confirmed that the numerical analysis of the step bearing worked effectively and that the 67% step was the most suitable configuration for minimizing hemolysis, because it realized the largest bearing gap. PMID:22254562
Sallum, Loriz Francisco; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Carneiro, Renato Lajarim
2014-01-01
Supported silver nanoparticles on filter paper were synthesized using Tollens' reagent. Experimental designs were performed to obtain the highest SERS enhancement factor by studying the influence of the following parameters: filter paper pretreatment, type of filter paper, reactant concentrations, reaction time, and temperature. To this end, fractional factorial and central composite designs were used to optimize the synthesis for quantification of nicotinamide in the presence of excipients in a commercial cosmetic sample. The values for the optimal condition were 150 mM ammonium hydroxide, 50 mM silver nitrate, 500 mM glucose, 8 min reaction time, 45 °C, pretreatment with ammonium hydroxide, and quantitative filter paper (1-2 µm). Despite the variation in SERS intensity, it was possible to use an adapted internal standard method to obtain a calibration curve with good precision. The coefficient of determination of the linear fit was 0.97. The method proposed in this work was capable of quantifying nicotinamide in a commercial cosmetic gel at low concentration levels, with a relative error of 1.06% compared to HPLC. SERS spectroscopy offers faster analyses than HPLC and does not require complex sample preparation or large amounts of reactants. PMID:24274308
Optimization by decomposition: A step from hierarchic to non-hierarchic systems
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization, modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.
Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes
NASA Astrophysics Data System (ADS)
Sanal, M.; Kuloor, R.; Sagayaraj, M. J.
In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, using either a single-stage mismatched filter or a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGA, which often becomes a design challenge for system-on-chip (SoC) requirements. This multiplier requirement can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using cluster centroids as tap weights greatly reduces the FPGA logic used by FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the weight centroids differ between runs, producing different clusterings; sometimes a smaller number of multipliers and a shorter filter can even provide a better PSR.
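The tap-weight clustering idea can be sketched with a small 1-D k-means; the tap values and cluster count below are arbitrary, and a real design would re-verify the PSR after quantization:

```python
import numpy as np

def cluster_taps(weights, k, iters=50, seed=0):
    """Quantize FIR tap weights to k shared values with a simple 1-D
    k-means, reducing distinct multipliers from len(weights) to at most k."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(weights, size=k, replace=False)
    for _ in range(iters):
        # Assign each tap to its nearest centroid.
        labels = np.argmin(np.abs(weights[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned taps.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = weights[labels == j].mean()
    return centroids[labels]            # each tap replaced by its centroid

taps = np.array([0.11, 0.09, 0.10, 0.50, 0.52, -0.30])
quantized = cluster_taps(taps, k=3)
# At most 3 distinct multiplier values remain in the quantized filter.
```

In hardware, taps sharing a centroid can share one multiplier, which is the resource saving the abstract describes.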
Wang, S L; Singer, M A
2009-07-13
The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.
NASA Astrophysics Data System (ADS)
Namin, Frank Farhad A.
Quasicrystalline solids were first observed in nature in the 1980s. Their lattice geometry is devoid of translational symmetry; however, it possesses long-range order as well as certain orders of rotational symmetry forbidden by translational symmetry. Mathematically, such lattices are related to aperiodic tilings. Since their discovery there has been great interest in utilizing aperiodic geometries for a wide variety of electromagnetic (EM) and optical applications. The first thrust of this dissertation addresses applications of quasicrystalline geometries for wideband antenna arrays and plasmonic nano-spherical arrays. The first application considered is the design of suitable antenna arrays for micro-UAV (unmanned aerial vehicle) swarms, based on perturbation of certain types of aperiodic tilings. For safety reasons, and to avoid possible collisions between micro-UAVs, it is desirable to keep the minimum separation distance between the elements at several wavelengths. As a result, typical periodic planar arrays are not suitable, since for periodic arrays increasing the minimum element spacing beyond one wavelength will lead to the appearance of grating lobes in the radiation pattern. It will be shown that using this method antenna arrays with very wide bandwidths and low sidelobe levels can be designed. It will also be shown that, in conjunction with a phase compensation method, these arrays exhibit a large degree of robustness to positional noise. Next, aperiodic aggregates of gold nano-spheres are studied. Since traditional unit cell approaches cannot be used for aperiodic geometries, we start by developing new analytical tools for aperiodic arrays. A modified version of generalized Mie theory (GMT) is developed which defines scattering coefficients for aperiodic spherical arrays. Next, two specific properties of quasicrystalline gold nano-spherical arrays are considered.
The optical response of these arrays can be explained in terms of the grating response of the array (photonic resonance) and the plasmonic response of the spheres (plasmonic resonance). In particular, the couplings between the photonic and plasmonic modes are studied. In periodic arrays this coupling leads to the formation of a so-called photonic-plasmonic hybrid mode. The formation of hybrid modes is studied in quasicrystalline arrays. Quasicrystalline structures in essence possess several periodicities, which in some cases can lead to the formation of multiple hybrid modes with wider bandwidths. It is also demonstrated that the performance of these arrays can be further enhanced by employing a perturbation method. The second property considered is local field enhancement in quasicrystalline arrays of gold nano-spheres. It will be shown that, despite a considerably smaller filling factor, quasicrystalline arrays generate larger local field enhancements, which can be enhanced even further by optimally placing perturbing spheres within the prototiles that comprise the aperiodic arrays. The second thrust of research in this dissertation focuses on designing all-dielectric filters and metamaterial coatings for the optical range. At higher frequencies metals tend to have high loss and are thus not suitable for many applications; hence dielectrics are used for applications at optical frequencies. In particular we focus on designing two types of structures. First, a near-perfect optical mirror is designed. The design is based on optimizing a subwavelength periodic dielectric grating to obtain appropriate effective parameters that will satisfy the desired perfect mirror condition. Second, a broadband anti-reflective all-dielectric grating with a wide field of view is designed.
The second design is based on a new computationally efficient genetic algorithm (GA) optimization method which shapes the sidewalls of the grating based on optimizing the roots of polynomial functions.
Perez Roman, Eduardo
2011-08-08
with a short half-life for diagnosis and treatment of patients. Nuclear medicine procedures are multi-step and have to be performed under restrictive time constraints. Consequently, managing patients in nuclear medicine clinics is a challenging problem...
Optimal filter design approaches to statistical process control for autocorrelated processes
Chin, Chang-Ho
2005-11-01
control charting methods can be viewed as the charting of the output of a linear filter applied to the process data. In this dissertation, we generalize the concept of linear filters for control charts and propose new control charting schemes, the general...
Yoon, Hye-Ran
2007-03-01
A rapid dried-filter-paper plasma-spot analytical method was developed to quantify organic acids, amino acids, and glycines simultaneously in a two-step derivatization procedure with good sensitivity and specificity. The new method involves a two-step trimethylsilyl (TMS) - trifluoroacyl (TFA) derivatization procedure using GC-MS with selective ion monitoring (GC-MS/SIM). The dried-filter-paper plasma was fortified with an internal standard (tropate) as well as a standard mixture of distilled water and methanol. Methyl orange was added to the residue as an indicator. N-methyl-N-(trimethylsilyl)-trifluoroacetamide and N-methyl-bis-trifluoroacetamide were then added and heated to 60 °C for 10 and 15 min to produce the TMS and TFA derivatives, respectively. Using this method, silylation of the carboxylic functional groups was carried out, followed by trifluoroacyl derivatization of the amino functional group. The derivatives were analyzed by GC-MS/SIM. A calibration curve showed a linear relationship for the target compounds at concentrations of 10-500 ng/mL. The limits of detection and quantification on a plasma spot were 10-90 ng/mL (S/N=9) and 80-500 ng/mL, respectively. The correlation coefficient ranged from 0.938 to 0.999. When applied to samples from positive patients, the method clearly differentiated normal subjects from patients with various metabolic disorders such as PKU, MSUD, OTC, and propionic aciduria. The newly developed method may be useful for rapid, sensitive, and simultaneous diagnosis of inherited organic acid and amino acid disorders. In addition, this method is expected to be an alternative for newborn screening for metabolic disorders in laboratories where expensive MS/MS is unavailable. PMID:17424948
Gill, K; Aldoohan, S; Collier, J
2014-06-01
Purpose: To study image optimization and radiation dose reduction in a pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images with the ACR-approved CT phantom and the radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study the effects on image quality and radiation dose of adding different filtering materials (tungsten, tantalum, titanium, nickel, and copper) to the existing filter. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The added beam-hardening filter shapes the energy spectrum, which reduces the dose by 27%, with no noticeable change in low-contrast detectability. Conclusion: Effective dose depends strongly on CTDIvol, which in turn depends strongly on the beam-hardening filter. Substantial reduction in effective dose is realized using beam-hardening filters compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of image quality.
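The CNR figure of merit used in the abstract is computed from ROI statistics; the HU values below are synthetic, generated only to illustrate the definition:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between a target ROI and a uniform
    background region, as commonly defined in CT phantom analysis:
    |mean difference| divided by the background noise."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

rng = np.random.default_rng(1)
roi = rng.normal(120.0, 5.0, 1000)       # synthetic HU values inside an insert
bg  = rng.normal(100.0, 5.0, 1000)       # synthetic HU values in a uniform region
# CNR should come out roughly (120 - 100) / 5 = 4
```

Comparing CNR at matched CTDIvol across filter materials is what lets a study like this claim dose reduction "without compromising" image quality.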
NASA Astrophysics Data System (ADS)
Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.
2005-12-01
The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filters' performances on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebyshev type II filters with zero-phase and stable impulse response; the most commonly used band-pass filters were implemented by combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals were evaluated. The study involved 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter setting might be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz.
Bounds on the performance of particle filters
NASA Astrophysics Data System (ADS)
Snyder, C.; Bengtsson, T.
2014-12-01
Particle filters rely on sequential importance sampling, and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time t_k depend only on the observations at t_k and the particles and weights at t_{k-1}), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at t_k, but also over draws from the joint proposal for t_{k-1} and t_k. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction will require very large numbers of particles.
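Weight degeneracy is commonly diagnosed with the effective sample size; a minimal sketch of that diagnostic (standard practice, not specific to this abstract's bounds) is:

```python
import numpy as np

def effective_sample_size(log_weights):
    """Effective ensemble size N_eff = 1 / sum(w_i^2) for normalized
    importance weights; N_eff << N signals weight degeneracy.
    Works in log space for numerical stability."""
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()
    return 1.0 / np.sum(w**2)

# Uniform weights: no degeneracy, N_eff equals the particle count.
n_eff_uniform = effective_sample_size(np.zeros(100))     # ≈ 100
# One dominant particle: severe degeneracy, N_eff collapses toward 1.
n_eff_collapsed = effective_sample_size(np.array([0.0, -50.0, -50.0]))
```

The exponential growth of required ensemble size described in the abstract shows up directly here: as the variance of the incremental log-weights grows, N_eff collapses unless N grows exponentially.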
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans for scheduling power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to find feasible preliminary solutions for constructing the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
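The second step of the abstract's algorithm is simulated annealing; a generic toy 1-D version of that search (the cost function and parameters here are illustrative, not the dispatch model) looks like:

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Generic simulated annealing: accept improving moves always,
    worsening moves with probability exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Toy objective with minimum at x = 3.
xb, fb = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0, step=0.5)
```

In the paper's setting the annealer would start from the preliminary interval-LP solutions rather than an arbitrary point, which is the point of the two-step structure.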
Optimizing Data Measurements at Test Beds Using Multi-Step Genetic Algorithms
Zell, Andreas
the engine in an optimal way. Statistical Design of Experiments (DOE) reduces the set of measuring points. Changes of the relative air mass flow ramf result in oscillations of the total engine system, and measurements can only be taken after parameters are changed. Our goal is therefore to minimize the oscillations.
Single-channel noise reduction using unified joint diagonalization and optimal filtering
NASA Astrophysics Data System (ADS)
Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-12-01
In this paper, the important problem of single-channel noise reduction is treated from a new perspective. The problem is posed as a filtering problem based on joint diagonalization of the covariance matrices of the desired and noise signals. More specifically, the eigenvectors from the joint diagonalization corresponding to the least significant eigenvalues are used to form a filter, which effectively estimates the noise when applied to the observed signal. This estimate is then subtracted from the observed signal to form an estimate of the desired signal, i.e., the speech signal. In doing this, we consider two cases, where, respectively, no distortion and distortion are incurred on the desired signal. The former can be achieved when the covariance matrix of the desired signal is rank deficient, which is the case, for example, for voiced speech. In the latter case, the covariance matrix of the desired signal is full rank, as is the case, for example, in unvoiced speech. Here, the amount of distortion incurred is controlled via a simple, integer parameter, and the more distortion allowed, the higher the output signal-to-noise ratio (SNR). Simulations demonstrate the properties of the two solutions. In the distortionless case, the proposed filter achieves only a slightly worse output SNR, compared to the Wiener filter, along with no signal distortion. Moreover, when distortion is allowed, it is possible to achieve higher output SNRs compared to the Wiener filter. Alternatively, when a lower output SNR is accepted, a filter with less signal distortion than the Wiener filter can be constructed.
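The core construction, forming a noise-estimation filter from the least significant generalized eigenvectors, can be sketched as follows; the covariance matrices below are toy diagonal matrices, not speech statistics:

```python
import numpy as np
from scipy.linalg import eigh

def noise_estimation_filter(R_desired, R_noise, p):
    """Joint diagonalization of the desired-signal and noise covariance
    matrices via the generalized eigenproblem R_desired v = lam R_noise v.
    The eigenvectors of the p least significant eigenvalues span a
    subspace containing (almost) no desired-signal power."""
    lam, V = eigh(R_desired, R_noise)   # eigenvalues in ascending order
    return V[:, :p]                     # filter columns for noise estimation

# Toy case: the desired signal lives only in the first coordinate
# (rank-deficient R_desired, as for voiced speech in the abstract).
Rs = np.diag([1.0, 0.0, 0.0]) + 1e-9 * np.eye(3)
Rn = np.eye(3)
B = noise_estimation_filter(Rs, Rn, 2)
# B spans the two coordinates carrying no desired-signal power.
```

Applying `B` to the observation isolates the noise, whose estimate is then subtracted from the observed signal, mirroring the distortionless case described above.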
NASA Astrophysics Data System (ADS)
Korovyanko, Oleg J.; Rey-de-Castro, Roberto; Elles, Christopher G.; Crowell, Robert A.; Li, Yuelin
2006-02-01
The temporal output of a Ti:Sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude, and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that a severe mismatch between the compressor and stretcher can be compensated for in a short period of time.
Optimization of 3D laser scanning speed by use of combined variable step
NASA Astrophysics Data System (ADS)
Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.
2014-03-01
The presented research addresses the slow operation of a 3D technical vision system (TVS) caused by the use of a constant small scanning step; a combined scanning step is applied for the fast search of n obstacles in unknown surroundings. Such a problem is of keynote importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. So the related technical task of scanning with variable speed and precise digital mapping only for selected spatial sectors is under consideration. A wide range of simulations in MATLAB 7.12.0 of several variants of hypothetical scenes with a variable number n of obstacles in each scene (including variation of shapes and sizes) and scanning with incremented angle values (0.6° up to 15°) is provided. The aim of the simulations was to detect which angular interval values still permit obtaining maximal information about obstacles without undesired time losses. Three such local maxima were obtained in the simulations and then refined by application of a neural network formalism (Levenberg-Marquardt algorithm). The obtained results were in turn applied to the MET (micro-electro-mechanical transmission) design for practical realization of variable combined step scanning on an experimental prototype of our previously known laser scanner.
Metrics for comparing plasma mass filters
Fetterman, Abraham J.; Fisch, Nathaniel J.
2011-10-15
High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.
Tavakoli, Behnoosh; Zhu, Quing
2013-01-01
Abstract. Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than the benign cases, respectively. PMID:23296038
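The two-step idea (global search to seed a local gradient method) can be sketched on a toy problem. The forward model, parameter values, and units below are illustrative stand-ins for the DOT problem, and SciPy's differential evolution is used as a stand-in for the paper's genetic algorithm (both are population-based global optimizers).

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy forward model: reflectance at source-detector distances r depends on
# absorption mu_a and reduced scattering mu_s' (functional form is illustrative).
r = np.linspace(1.0, 5.0, 20)
true_params = np.array([0.05, 1.2])                  # [mu_a, mu_s']

def forward(p):
    mu_a, mu_s = p
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s))     # effective attenuation
    return mu_s * np.exp(-mu_eff * r) / r ** 2

data = forward(true_params)

def misfit(p):
    return float(np.sum((forward(p) - data) ** 2))

# Step 1: global search over physically plausible bounds.
coarse = differential_evolution(misfit, bounds=[(0.01, 0.2), (0.5, 2.0)], seed=1)

# Step 2: the global estimate seeds a local conjugate-gradient refinement.
fine = minimize(misfit, coarse.x, method="CG")
```

The design choice matches the abstract's observation: CG alone depends heavily on its starting point, while the global stage supplies a starting point close enough for CG to refine reliably.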
Bethel, E. Wes
2012-01-06
This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
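For reference, the underlying operation can be written as a brute-force CPU sketch. This is a plain NumPy rendering of a 3D bilateral filter (the spatial Gaussian weight times a range weight on intensity differences), not the GPU-optimized kernel studied in the report; the default parameter values are illustrative.

```python
import numpy as np

def bilateral_3d(vol, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Brute-force 3D bilateral filter (CPU reference sketch)."""
    vol = np.asarray(vol, dtype=float)
    pad = np.pad(vol, radius, mode="edge")
    out = np.empty_like(vol)
    # Spatial (domain) Gaussian over the (2r+1)^3 window, precomputed once.
    ax = np.arange(-radius, radius + 1)
    dz, dy, dx = np.meshgrid(ax, ax, ax, indexing="ij")
    spatial = np.exp(-(dz**2 + dy**2 + dx**2) / (2.0 * sigma_s**2))
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                win = pad[z:z + 2*radius + 1,
                          y:y + 2*radius + 1,
                          x:x + 2*radius + 1]
                # Range weight penalizes intensity differences, so strong
                # edges contribute little and are preserved.
                w = spatial * np.exp(-(win - vol[z, y, x])**2 / (2.0 * sigma_r**2))
                out[z, y, x] = np.sum(w * win) / np.sum(w)
    return out
```

The triple loop makes the data-parallel structure obvious: every output voxel is independent, which is exactly what makes the filter a good fit for a GPU thread-block decomposition.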
Dondo, Rodolfo; Marqués, Dardo
2003-04-01
The computation of optimal control profiles for batch bioreactors is based on the use of simple and empirical dynamic models. Since these models present some level of uncertainty, the difference between the model dynamics and the reactor dynamics can have significant effects on the reliability of the calculated profile. To develop near-optimal control trajectories despite this drawback, we propose to calculate successive control profiles on a moving time horizon using a mathematical model in which the kinetic parameters are estimated by an observer. The desired objective is to generate a near-optimal control trajectory adapted to the "running" fermentation. This idea results in a nonlinear estimator plus optimizer arrangement that so far has not been applied to batch fermentors. Numerical simulations are performed on xanthan-gum batch fermentations and reasonably good results are obtained. PMID:12708547
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.
2011-01-01
Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments, where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate-change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter performed by polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the output is viewed as the sum of M sub-filters of length N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads.
Each thread is activated when the first input of its convolution becomes available and completes when the convolution result (a filter output value) has been computed. Thus, new threads are spawned at exactly the rate of N/M, where N is the total number of taps and M is the decimation factor; existing threads retire at the same rate. The implementation of an MRFIR is thus transformed into the problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the thread decomposition diagram, a table-like diagram whose rows represent computation threads and whose columns represent time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved since the input rate is constant.
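The contrast drawn above between a naive decimator and its polyphase form can be checked numerically. The NumPy sketch below (illustrative only; it shows the polyphase rearrangement, not the TD-MRFIR scheduling itself) confirms that splitting h into M sub-filters and downsampling first reproduces the naive output exactly.

```python
import numpy as np

def decimate_naive(x, h, M):
    """Full FIR convolution followed by downsampling: M-1 of every M
    computed outputs are thrown away."""
    return np.convolve(x, h)[: len(x)][::M]

def decimate_polyphase(x, h, M):
    """Polyphase decimator: downsample first, then run M short sub-filters
    p_k[m] = h[k + mM] on the shifted input streams x_k[n] = x[nM - k]."""
    N = len(h)
    h_pad = np.concatenate([h, np.zeros((-N) % M)])    # pad taps to a multiple of M
    y = np.zeros(int(np.ceil(len(x) / M)))
    for k in range(M):
        # k-th input stream; for k > 0 it starts with the (zero) sample x[-k].
        xk = x[::M] if k == 0 else np.concatenate([[0.0], x[M - k::M]])
        branch = np.convolve(xk, h_pad[k::M])[: len(y)]
        y[: len(branch)] += branch                     # accumulate branch outputs
    return y
```

Each branch convolution runs at 1/M of the input rate with N/M taps, so the total multiply count drops by a factor of M while the result is bit-identical.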
NASA Astrophysics Data System (ADS)
Macelloni, Leonardo; Battista, Bradley Matthew; Knapp, Camelia Cristina
2011-12-01
Gas-hydrate related processes in deep-water marine settings exist on spatial scales that challenge conventional seismic reflection profiling to successfully image them. The conventional approach to acoustic identification of buried hydrates is to use advanced, cost-prohibitive survey techniques and highly customized software to define subsurface structure wherein compositional changes may be modeled and/or interpreted. This study adopts a different approach derived from recent theoretical advancements in signal processing. The method consists of optimally filtering high-resolution, single-channel seismic reflection data using the Empirical Mode Decomposition (EMD). The time series is decomposed into sub-components, and the noisy portions are suppressed using a technique we refer to as Weighted Mode(s) EMD. The optimal filtering greatly improves the resolution and fidelity of the seismic data set. High-resolution single-channel seismic profiles, acquired over a carbonate-hydrates site in the northern Gulf of Mexico and processed in this way, show a complex, shallow subsurface and suggest potential evidence for buried gas hydrates.
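Once a sifting stage has produced the modes, the suppression-and-recombination step is simple. The sketch below skips EMD sifting entirely and feeds in two synthetic "modes" (an assumed simplification; real IMFs come from the data), to show only the weighted recombination that zeroes out noisy components.

```python
import numpy as np

def weighted_mode_reconstruct(imfs, weights):
    """Recombine empirical-mode components with per-mode weights; a weight
    of 0 suppresses a noisy mode entirely (a hypothetical simplification
    of the paper's Weighted Mode(s) EMD)."""
    return np.tensordot(np.asarray(weights, dtype=float),
                        np.asarray(imfs, dtype=float), axes=1)

# Synthetic trace: a low-frequency "reflector" plus high-frequency noise,
# standing in for modes an EMD sifting stage would return.
t = np.linspace(0.0, 1.0, 500)
signal_mode = np.sin(2 * np.pi * 5 * t)
noise_mode = 0.3 * np.sin(2 * np.pi * 80 * t)
trace = signal_mode + noise_mode

# Suppress the noisy (high-frequency) mode, keep the reflector.
filtered = weighted_mode_reconstruct([noise_mode, signal_mode], [0.0, 1.0])
```

Because EMD is data-adaptive, the same weighting scheme follows nonstationary frequency content that a fixed band-pass filter would handle poorly.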
Optimal stack filtering and the estimation and structural approaches to image processing
E. J. Coyle; J.-H. Lin; M. Gabbouj
1989-01-01
Two approaches have been used in the past to design rank-order based nonlinear filters for enhancing or restoring images: the structural approach and the estimation approach. The first approach requires structural descriptions of the image and the process which has altered it, whereas the second requires statistical descriptions of the image and the process which has altered it. The many
A New Optimal Hatch Filter to Minimize the Effects of Ionosphere Gradients for GBAS
Huang Zhenggang; Huang Zhigang; Zhu Yanbo
2008-01-01
At present, the main problem faced by the ground-based augmentation system (GBAS) is that, although the carrier smoothing filter and the local differential global positioning system (LDGPS) improve pseudorange accuracy by reducing noise and eliminating almost all common errors between the user and the reference station, they also introduce extra errors on account of the effects
Improved design and optimization of subsurface flow constructed wetlands and sand filters
A. Brovelli; O. Carranza-Díaz; L. Rossi; D. A. Barry
2010-01-01
Subsurface flow constructed wetlands and sand filters are engineered systems capable of eliminating a wide range of pollutants from wastewater. These devices are easy to operate, flexible and have low maintenance costs. For these reasons, they are particularly suitable for small settlements and isolated farms and their use has substantially increased in the last 15 years. Furthermore, they are also
Bushway, Paul J.; Azimi, Behrad; Heynen-Genel, Susanne
2014-01-01
The standard (STD) 5 × 5 hybrid median filter (HMF) was previously described as a nonparametric local backestimator of spatially arrayed microtiter plate (MTP) data. As such, the HMF is a useful tool for mitigating global and sporadic systematic error in MTP data arrays. Presented here is the first known HMF correction of a primary screen suffering from systematic error best described as gradient vectors. Application of the STD 5 × 5 HMF to the primary screen raw data reduced background signal deviation, thereby improving the assay dynamic range and hit confirmation rate. While this HMF can correct gradient vectors, it does not properly correct periodic patterns that may present in other screening campaigns. To address this issue, 1 × 7 median and row/column 5 × 5 hybrid median filter kernels (1 × 7 MF and RC 5 × 5 HMF) were designed ad hoc to better fit periodic error patterns. The correction data show that periodic error in simulated MTP data arrays is reduced by these alternative filter designs and that multiple corrective filters can be combined in serial operations for progressive reduction of complex error patterns in an MTP data array. PMID:21900202
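A common construction of the 5 × 5 hybrid median filter takes, per pixel, the median of three values: the median of the "+"-shaped (row/column) neighbors, the median of the "×"-shaped (diagonal) neighbors, and the center pixel. The sketch below implements that textbook construction (a plausible reading of the STD 5 × 5 HMF, not necessarily the authors' exact kernel); border handling is simplified to leave edge pixels unchanged.

```python
import numpy as np

def hybrid_median_5x5(a):
    """5x5 hybrid median filter: median of (cross median, diagonal median,
    center pixel) at each interior pixel; borders are left unchanged."""
    a = np.asarray(a, dtype=float)
    out = a.copy()
    r = 2
    for i in range(r, a.shape[0] - r):
        for j in range(r, a.shape[1] - r):
            win = a[i - r:i + r + 1, j - r:j + r + 1]
            cross = np.concatenate([win[r, :], win[:, r]])           # row + column
            diag = np.concatenate([np.diag(win),
                                   np.diag(np.fliplr(win))])          # both diagonals
            out[i, j] = np.median([np.median(cross),
                                   np.median(diag),
                                   win[r, r]])
    return out
```

The hybrid structure is what lets the filter remove isolated spikes while preserving lines and edges better than a plain 5 × 5 median, which is why it works as a background estimator on plate arrays.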
Cao, Hui; Shu, Xuewen; Atai, Javid; Gbadebo, Adenowo; Xiong, Bangyun; Fan, Ting; Tang, HaiShu; Yang, Weili; Yu, Yu
2014-12-15
We investigate return-to-zero (RZ) to non-return-to-zero (NRZ) format conversion by means of linear time-invariant system theory. It is shown that the problem of converting a random RZ stream to an NRZ stream can be reduced to constructing an appropriate transfer function for the linear filter. This approach is then used to propose a novel, optimally designed single fiber Bragg grating (FBG) filter scheme for RZ-OOK/DPSK/DQPSK to NRZ-OOK/DPSK/DQPSK format conversion. The spectral response of the FBG is designed according to the optical spectra of the algebraic difference between isolated NRZ and RZ pulses, and the filter order is optimized for the maximum Q-factor of the output NRZ signals. Experimental results as well as simulations show that such an optimally designed FBG can successfully perform RZ-OOK/DPSK/DQPSK to NRZ-OOK/DPSK/DQPSK format conversion. PMID:25606990
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2015-04-01
Nowadays, most hydrological catchment models are designed to allow streamflow simulation at different time scales. While this permits models to be applied for broader purposes, it can also be a source of error in the simulation of hydrological processes at catchment scale. Those errors seem not to affect simple conceptual models significantly, but this flexibility may lead to large behavioral errors in physically based models. Equations used in processes such as soil moisture time variation are usually representative at certain time scales but may not properly characterize water transfer in soil layers at larger scales. This effect is especially relevant as we move from the detailed hourly scale to the daily time step, both common time scales for catchment streamflow simulation in research and management practice. This study aims to provide an objective methodology to identify the degree of similarity of optimal parameter values when hydrological catchment model calibration is carried out at different time scales, thus providing information for an informed discussion of the physical significance of parameters in hydrological models. In this research, we analyze the influence of the simulation time scale on: 1) the optimal values of six highly sensitive parameters of the TOPLATS model and 2) streamflow simulation efficiency, while optimization is carried out at different time scales. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) has been applied in its lumped version to three catchments of varying size located in northern Spain. The model is based on shallow groundwater gradients (related to local topography) that set up spatial patterns of soil moisture and are assumed to control infiltration and runoff during storm events and evaporation and drainage between storm events. The model calculates the saturated portion of the catchment at each time step based on Topographic Index (TI) intervals.
Surface runoff is then calculated at rainfall events proportionally to the saturation degree of the catchment. Separately, baseflow is calculated based on the distance between the catchment average water table depth and the specific depth at each TI interval. This study focuses on the comparison of hourly and daily simulations for the 2000-2007 time period. An optimization algorithm has been applied to identify the optimal values of the following four soil properties: 1) Brooks-Corey pore size distribution index (λ), 2) bubbling pressure (ψc), 3) saturated soil moisture (θs), 4) surface saturated hydraulic conductivity (Ks), and two subsurface flow controlling parameters: 1) subsurface flow at complete saturation (Q0), and 2) exponential coefficient of the TOPMODEL baseflow equation (f). The algorithm was set up to maximize the Nash-Sutcliffe Efficiency (NSE) at the catchment outlet. Results presented include the optimal values of each parameter at both hourly and daily time scales. These values provided valuable information to discuss the relative importance of each soil-related model parameter for enhanced streamflow simulation and adequate model response to both surface runoff and baseflow simulation. Catchment baseflow magnitude (Q0) and decay behavior (f) were also shown to require detailed analysis depending on the selected hydrological modeling purpose and the corresponding time step. The obtained results showed that different time-scale simulations may require different parameter values for soil properties and catchment behavior characterization in order to properly simulate streamflow at catchment scale. Although the calibrated parameters were soil properties and water-flow quantities with physical meaning and defined units, the optimum values differed with time scale and were not always similar to field observations.
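The objective function named above is standard and compact enough to state directly. The sketch below implements the Nash-Sutcliffe Efficiency as usually defined (this is the textbook formula, not code from the study).

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations.
    NSE = 1 is a perfect fit; NSE <= 0 means the model predicts no better
    than the observed mean."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the denominator is the variance of the observed series at the chosen time step, the same model can score differently at hourly and daily resolution, which is one reason calibrated parameter values shift with the simulation time scale.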
Comments on “A one-step optimal homotopy analysis method for nonlinear differential equations”
NASA Astrophysics Data System (ADS)
Marinca, V.; Herişanu, N.
2010-11-01
The above-mentioned paper contains some fundamental mistakes and misinterpretations along with a false conclusion. Applying the optimal homotopy asymptotic method (OHAM) in an incorrect manner, Niu and Wang have drawn the false conclusion that this approach is not efficient in practice because it is time-consuming at high orders of approximation. We emphasize the presence of some evident mistakes and misinterpretations in their paper, and we prove that OHAM is very efficient in practice by solving all three examples analyzed by Niu and Wang using only the first-order approximation, which yields accurate results. We demonstrate that OHAM does not need high orders of approximation, as Niu and Wang suggest, and we show that the main strength of OHAM is its rapid convergence, contradicting Niu and Wang's assumption.
He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B
2015-08-15
In this study, the methods for extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate buffer saline, Tris-HCl and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH and imidazole concentration in the elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5 M NaCl as compared to Tris-HCl and phosphate buffer saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in the IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7 and the use of 300 mM imidazole as elution buffer, which gave an overall yield of 80.3% and a purity of 97.5%. IMAC with nickel-NTA was successfully used as a single-step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715
IMC Design Based Optimal Tuning of a PID-Filter Governor Controller for Hydro Power Plant
Anil Naik Kanasottu; Srikanth Pullabhatla; Venkata Reddy Mettu
In the present paper a PID-filter governor controller with an Internal Model Control (IMC) tuning method for the hydroelectric power plant is presented. The IMC has a single tuning parameter to adjust the performance and robustness of the controller. The proposed tuning method is very efficient in controlling the overshoot, stability and the dynamics of the speed-governing system of the
Optimal spatial filtering of single trial EEG during imagined hand movement
H. Ramoser; J. Müller-gerking; G. Pfurtscheller
1998-01-01
The development of an EEG-based brain-computer interface (BCI) requires rapid and reliable discrimination of EEG patterns, e.g., those associated with motor imagery. One-sided hand movement imagination results in EEG changes located at contra- and ipsilateral central areas. We demonstrate that spatial filters for multi-channel EEG effectively extract discriminatory information from two populations of single-trial EEG, recorded during left and
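Spatial filters of this kind are commonly computed via the common spatial patterns (CSP) generalized eigenproblem. The sketch below shows that standard formulation (it follows the general idea of such spatial filtering, not necessarily the cited paper's exact procedure); the two-channel data are synthetic.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """Common-spatial-patterns filters from two sets of trials, each of
    shape (n_trials, n_channels, n_samples)."""
    def mean_norm_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials.
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_norm_cov(trials_a), mean_norm_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvectors give filters whose
    # output variance is maximal for one class and minimal for the other.
    _, vecs = eigh(Ca, Ca + Cb)
    return vecs[:, ::-1].T          # rows ordered by descending eigenvalue

# Synthetic example: class A has high variance on channel 0, class B on channel 1.
rng = np.random.default_rng(2)
trials_a = rng.standard_normal((20, 2, 200)) * np.array([5.0, 1.0])[:, None]
trials_b = rng.standard_normal((20, 2, 200)) * np.array([1.0, 5.0])[:, None]
W = csp_filters(trials_a, trials_b)
```

Projecting single trials through the first and last filters and comparing output variances yields the discriminative features a BCI classifier would use.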
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy
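The steady-state estimation error covariance that such a tuner selection seeks to minimize can be computed on any linear model by iterating the filter Riccati recursion. The matrices below are an illustrative two-state tracking model, not the aircraft engine model of the paper; the sketch cross-checks the iteration against SciPy's algebraic Riccati solver via the standard filter/control duality.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # state transition (illustrative)
C = np.array([[1.0, 0.0]])          # one sensor measures the first state
Q = 0.01 * np.eye(2)                # process noise covariance
R = np.array([[0.1]])               # measurement noise covariance

def steady_state_cov(A, C, Q, R, iters=2000):
    """Iterate the discrete-time filter Riccati recursion to convergence."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        S = C @ P @ C.T + R                         # innovation covariance
        K = A @ P @ C.T @ np.linalg.inv(S)          # predictor-form Kalman gain
        P = A @ P @ A.T - K @ C @ P @ A.T + Q       # Riccati update
    return P

P_iter = steady_state_cov(A, C, Q, R)
# Duality: the filter DARE is the control DARE with a = A.T and b = C.T.
P_dare = solve_discrete_are(A.T, C.T, Q, R)
```

With more unknown parameters than sensors, the choice of which parameter combinations to estimate changes A, C, and Q, and hence this steady-state P, which is the quantity the iterative tuner search evaluates.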
Beaulieu, Frédéric; Beaulieu, Luc; Tremblay, Daniel; Roy, René
2004-06-01
As an alternative between manual planning and beamlet-based IMRT, we have developed an optimization system for inverse planning with anatomy-based MLC fields. In this system, named Ballista, the orientation (table and gantry), the wedge filter and the field weights are simultaneously optimized for every beam. An interesting feature is that the system is coupled to Pinnacle3 by means of the PinnComm interface, and uses its convolution dose calculation engine. A fully automatic MLC segmentation algorithm is also included. The plan evaluation is based on quasi-random sampling and on a quadratic objective function with penalty-like constraints. For efficiency, optimal wedge angles and wedge orientations are determined using the concept of the super-omni wedge. A bound-constrained quasi-Newton algorithm performs field weight optimization, while a fast simulated annealing algorithm selects the optimal beam orientations. Moreover, in order to generate directly deliverable plans, the following practical considerations have been incorporated in the system: collision between the gantry and the table, as well as avoidance of the radio-opaque elements of a table top. We illustrate the performance of the new system on two patients. In a rhabdomyosarcoma case, the system generated plans improving both the target coverage and the sparing of the parotid, as compared to a manually designed plan. In the second case presented, the system successfully produced an adequate plan for the treatment of the prostate while avoiding both hip prostheses. For the many cases where full IMRT may not be necessary, the system efficiently generates satisfactory plans meeting the clinical objectives, while keeping treatment verification much simpler. PMID:15259659
Optimizing flow rate and bacterial removal performance of ceramic pot filters in Tamale, Ghana
Zhang, Yiyue, S.M. Massachusetts Institute of Technology
2015-01-01
Pure Home Water (PHW) is an organization that seeks to improve the drinking water quality for those who do not have access to clean water in Northern Ghana. This study focuses on the further optimization of ceramic pot ...
Development of a Design Tool for Flow Rate Optimization in the Tata Swach Water Filter
Ricks, Sean T.
When developing a first-generation product, an iterative approach often yields the shortest time-to-market. In order to optimize its performance, however, a fundamental understanding of the theory governing its operation ...
Knabel, Stephen J
2002-01-01
A one-step, recovery-enrichment broth, optimized Penn State University (oPSU) broth, was developed to consistently detect low levels of injured and uninjured Listeria monocytogenes cells in ready-to-eat foods. The oPSU broth contains special selective agents that inhibit growth of background flora without inhibiting recovery of injured Listeria cells. After recovery in the anaerobic section of oPSU broth, Listeria cells migrated to the surface, forming a black zone. This migration separated viable from nonviable cells and the food matrix, thereby reducing inhibitors that prevent detection by molecular methods. The high Listeria-to-background ratio in the black zone resulted in consistent detection of low levels of L. monocytogenes in pasteurized foods by both cultural and molecular methods, and greatly reduced both false-negative and false-positive results. oPSU broth does not require transfer to a secondary enrichment broth, making it less laborious and less subject to external contamination than 2-step enrichment protocols. Addition of 150 mM D-serine prevented germination of Bacillus spores, but not the growth of vegetative cells. Replacement of D-serine with 12 mg/L acriflavin inhibited growth of vegetative cells of Bacillus spp. without inhibiting recovery of injured Listeria cells. oPSU broth may allow consistent detection of low levels of injured and uninjured cells of L. monocytogenes in pasteurized foods containing various background microflora. PMID:11990038
NASA Astrophysics Data System (ADS)
Castillo, C.; Pérez, R.; Gómez, J. A.
2014-05-01
There is little information in the scientific literature regarding the modifications induced by check dam systems in flow regimes within restored gully reaches, despite it being a crucial issue for the design of gully restoration measures. Here, we develop a conceptual model to classify flow regimes in straight rectangular channels for initial and dam-filling conditions, as well as a method of estimating efficiency, in order to provide design guidelines. The model integrates several previous mathematical approaches for assessing the main processes involved (hydraulic jump, impact flow, gradually varied flows). Ten main classifications of flow regimes were identified, producing similar results when compared with the IBER model. An interval for optimal energy dissipation (ODI) was observed when the steepness factor c was plotted against the design number (DN, the ratio between the height and the product of slope and critical depth). The ODI was characterized by maximum energy dissipation and total influence conditions. Our findings support the hypothesis of a maximum flow resistance principle valid for a range of spacings rather than for a unique configuration. A value of c = 1 and DN ~ 100 was found to economically meet the ODI conditions throughout the different sedimentation stages of the structure. When our model was applied with the same parameters to the range typical of step-pool systems, the predicted results fell within a region similar to that observed in field experiments. The conceptual model helps to explain the spacing frequency distribution as well as the often-cited trend to lower c for increasing slopes in step-pool systems. This reinforces the hypothesis of a close link between stable configurations of step-pool units and man-made interventions through check dams.
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.
1991-01-01
This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.
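Matched-filter theory underlies both schemes above. As a minimal illustration (a generic correlation detector under assumed waveforms, not the gust-loads search procedures themselves), the sketch below locates a known excitation buried in noise by correlating against the template.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known excitation waveform: the "matched" template.
template = np.sin(np.linspace(0.0, np.pi, 32)) ** 2
signal = np.zeros(512)
signal[200:232] += template                     # event buried at sample 200
noisy = signal + 0.3 * rng.standard_normal(512)

# Matched filtering = correlating the data with the template; the peak of
# the output both locates the event and maximizes output SNR.
out = np.correlate(noisy, template, mode="valid")
peak = int(np.argmax(out))
```

In the gust-loads setting, the analogous search is over excitation waveforms rather than time shifts, which is why the nonlinear case requires the one-dimensional or multidimensional searches the paper describes.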
NASA Astrophysics Data System (ADS)
Ding, Zhenyang; Du, Yang; Liu, Tiegen; Yao, X. Steve; Feng, Bowen; Liu, Kun; Jiang, Junfeng
2014-11-01
We present a long-range, high-spatial-resolution optical frequency-domain reflectometry (OFDR) based on an optimized deskew-filter method. In the proposed method, the frequency-tuning nonlinear phase obtained from an auxiliary interferometer is used to compensate the nonlinear phase of the beating signals generated from the main OFDR interferometer using a deskew filter. The method can be applied to the entire spatial domain of the OFDR signals at once with high computational efficiency. In addition, we apply higher orders of Taylor expansion and cepstrum analysis to improve the estimation accuracy of the nonlinear phase. We experimentally achieve a measurement range of 80 km and spatial resolutions of 20 cm and 80 cm at distances of 10 km and 80 km, respectively, an approximately 187-fold enhancement compared with the same OFDR trace without nonlinearity compensation. The improved performance of the OFDR, with high spatial resolution, long measurement range and short processing time, will lead to practical applications in real-time monitoring and measurement of optical fiber communication and sensing systems.
NASA Astrophysics Data System (ADS)
Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu
2015-08-01
Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an algorithm based on optimal nonlinear filtering is proposed for infrared dim-target track-before-detect applications. It uses nonlinear theory to construct the state and observation models and resolves the stochastic differential equation of the constructed models with the spectral-separation-scheme-based Wiener chaos expansion method. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are processed in a precomputation stage before observations arrive; the remaining, observation-dependent computations are rapid and are implemented subsequently. Simulation results show that the algorithm possesses excellent detection performance and is well suited for real-time processing.
Berset, Torfinn; Geng, Di; Romero, Iñaki
2012-01-01
Noise from motion artifacts is currently one of the main challenges in the field of ambulatory ECG recording. To address this problem, we propose the use of two different approaches. First, an adaptive filter with the electrode-skin impedance as a reference signal is described. Second, a multi-channel ECG algorithm based on Independent Component Analysis is introduced. Both algorithms have been designed and further optimized for real-time operation, embedded in a dedicated digital signal processor. We show that both algorithms improve the performance of a beat detection algorithm when applied in high-noise conditions. In addition, an efficient way of choosing between these methods is suggested with the aim of reducing overall system power consumption. PMID:23367417
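The first approach, adaptive cancellation against a reference channel, is commonly implemented as a normalized LMS filter. The sketch below is a generic NLMS canceller (the tap count, step size, and the 3-tap artifact channel are illustrative assumptions, not the paper's design); the reference signal stands in for the electrode-skin impedance channel.

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=8, mu=0.5, eps=1e-6):
    """Adaptive noise canceller: a normalized-LMS filter learns to predict
    the artifact in the primary channel from the reference signal and
    subtracts the prediction."""
    w = np.zeros(n_taps)
    out = np.array(primary, dtype=float)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # ref[n], ref[n-1], ...
        e = primary[n] - w @ x                      # artifact-cancelled sample
        w += mu * e * x / (x @ x + eps)             # normalized LMS update
        out[n] = e
    return out

rng = np.random.default_rng(4)
ref = rng.standard_normal(4000)
# Synthetic motion artifact: the reference passed through an unknown
# (hypothetical) 3-tap channel; the clean ECG is omitted for clarity.
artifact = np.convolve(ref, [0.5, -0.3, 0.2])[:4000]
cleaned = nlms_cancel(artifact, ref)
```

The normalization by the input power makes the step size robust to amplitude changes in the reference, a useful property when the impedance signal varies with motion intensity.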
Microstrip bandpass filters for Ultra-Wideband (UWB) wireless communications
Ching-Luh Hsu; Fu-Chieh Hsu; Jen-Tsai Kuo
2005-01-01
A new technique is developed for designing a composite microstrip bandpass filter (BPF) with a 3 dB fractional bandwidth of more than 100%. The BPF is suitable for ultra-wideband (UWB) wireless communications. The design embeds individually designed highpass structures and lowpass filters (LPFs) into each other, followed by an optimization step to tune in-band performance. The stepped-impedance LPF is employed
Dynamic rule-ordering optimization for high-speed firewall filtering
Hazem Hamed; Ehab Al-shaer
2006-01-01
Packet filtering plays a critical role in many of the current high-speed network technologies such as firewalls and IPSec devices. The optimization of firewall policies is critically important to provide high-performance packet filtering, particularly for high-speed network security. Current packet filtering techniques exploit the characteristics of the filtering policies, but they do not consider the
NASA Technical Reports Server (NTRS)
Freedman, A. P.; Steppe, J. A.
1995-01-01
The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the Earth rotation data sets available to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares use of various data sets within KEOF.
Optimal filtering of dynamics in short-time features for music organization
Jerónimo Arenas
... of navigating these seemingly endless streams of music apparently seems dubious with current technologies ... interest in customizable methods for organizing music collections. Relevant music characterization can
Optimized selective lactate excitation with a refocused multiple-quantum filter
NASA Astrophysics Data System (ADS)
Holbach, Mirjam; Lambert, Jörg; Johst, Sören; Ladd, Mark E.; Suter, Dieter
2015-06-01
Selective detection of lactate signals in in vivo MR spectroscopy with spectral editing techniques is necessary in situations where strong lipid signals, or signals from other molecules, overlap the desired lactate resonance in the spectrum. Several pulse sequences have been proposed for this task. The double-quantum filter SSel-MQC provides very good lipid and water signal suppression in a single scan. As a major drawback, it suffers from significant signal loss due to incomplete refocusing in situations where long evolution periods are required. Here we present a refocused version of the SSel-MQC technique that uses only one additional refocusing pulse and regains the full refocused lactate signal at the end of the sequence.
Evaluation and optimization of magnetic filters on simulated boiler water. Final report
Iannicelli, J.; Webster, T.
1983-11-01
This project was conducted in order to investigate and quantify the role of the many variables involved in magnetic filtration of simulated boiler water containing particulate corrosion products. The work consisted of magnetic filtration of dilute synthetic suspensions of magnetite, hematite, goethite, and copper oxides under a range of operating parameters. Variables studied included filtration medium (matrix), flow rate, magnetic field strength, field orientation, and temperature. Ten different matrix materials were studied, including a variety of sizes of expanded metal and stainless steel balls, steel wool, cylinders, rings, and column packing materials. The filtration efficiency of these matrix types was compared and ranked. Expanded metal was selected as a preferred matrix. Two simple analytical methods were developed for the determination of very low oxide concentrations (low ppb). A unified understanding of magnetic filtration was achieved and a quantitative relationship was established between operating variables and filter performance. 15 references.
Nere, Nandkishor K; Allen, Kimberley C; Marek, James C; Bordawekar, Shailendra V
2012-10-01
Drying an early stage active pharmaceutical ingredient candidate required excessively long cycle times in a pilot plant agitated filter dryer. The key to faster drying is to ensure sufficient heat transfer and minimize mass transfer limitations. Designing the right mixing protocol is of utmost importance to achieve efficient heat transfer. To this end, a composite model was developed for the removal of bound solvent that incorporates models for heat transfer and desolvation kinetics. The proposed heat transfer model differs from previously reported models in two respects: it accounts for the effect of a gas gap between the vessel wall and the solids on the overall heat transfer coefficient, and for the effect of headspace pressure on the mean free path of the inert gas and thereby on the heat transfer between the vessel wall and the first layer of solids. A computational methodology was developed incorporating the effects of mixing and headspace pressure to simulate the drying profile using a modified model framework within the Dynochem software. A dryer operational protocol was designed based on the desolvation kinetics, thermal stability studies of the wet and dry cake, and the understanding gained through model simulations, resulting in a multifold reduction in drying time. PMID:22753308
[A new impulse noise filter based on pulse coupled neural network].
Ma, Yide; Shi, Fei; Li, Lian; An, Lizhe
2004-12-01
This paper presents a new impulse noise filter based on pulse coupled neural networks (PCNN), exploiting the marked difference in gray value between noisy pixels and their neighbors. Compared with the state-of-the-art PCNN denoising filter, a step-by-step modifying algorithm also based on PCNN, the new filter requires less computation and less execution time. The new filter has also been compared with other nonlinear filters, such as the median filter, the stack filter based on omnidirectional structural element constraints, the omnidirectional morphology open-closing maximum (OOCmax) filter and the omnidirectional morphology close-opening minimum (OCOmin) filter. Simulation results show that this algorithm is superior to the standard median filter, the state-of-the-art PCNN filter, the maximal and minimal morphological filters with omnidirectional structuring elements, and the optimal stack filter based on omnidirectional structural element constraints for impulse noise removal. More importantly, this algorithm preserves image details more effectively. PMID:15646356
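As a baseline for the comparisons above, the standard median filter replaces every pixel with the median of its neighborhood, which removes impulses but also smooths detail. A minimal sketch (3x3 window and edge padding are illustrative choices):

```python
import numpy as np

def median_filter2d(img, size=3):
    """Standard median filter: each pixel becomes the median of its
    size-by-size neighborhood. Removes impulse noise but blurs detail,
    which is the weakness the PCNN-based filter targets."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

A detail-preserving variant, as in the PCNN approach, would apply this replacement only to pixels flagged as impulsive.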
Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan
2015-01-01
To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
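The two-step trade-off evaluated above (sensitivity versus cost per case identified) can be sketched as follows; the unit costs and cutoff in the example are illustrative placeholders, not the study's values:

```python
import numpy as np

def screening_stats(fpg, has_disease, cutoff, cost_fpg=1.0, cost_ogtt=10.0):
    """Two-step screening: everyone receives the cheap FPG test; only those
    at or above the cutoff receive the confirmatory OGTT. Returns the
    sensitivity and the cost per case identified."""
    referred = fpg >= cutoff
    found = referred & has_disease
    sensitivity = found.sum() / has_disease.sum()
    total_cost = cost_fpg * len(fpg) + cost_ogtt * referred.sum()
    return sensitivity, total_cost / max(found.sum(), 1)
```

Sweeping the cutoff over a grid and comparing sensitivity against cost per case reproduces the kind of optimum the study reports.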
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
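The sigma-point filters surveyed above propagate a deterministic set of points through the nonlinearity instead of linearizing it. A minimal unscented-transform sketch (symmetric sigma points with Julier-Uhlmann weights; the spread parameter kappa is an illustrative choice):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear function f using
    2n+1 symmetric sigma points and the standard weights."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # columns are sigma directions
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])
    y_mean = w @ ys
    d = ys - y_mean
    return y_mean, (w[:, None] * d).T @ d
```

For a linear map the transform is exact, which is a convenient sanity check; its value over the extended Kalman filter shows up only for genuinely nonlinear f.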
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Stack filter design: a structural approach
Lin Yin
1995-01-01
A new approach is developed for finding the optimal stack filter that minimizes noise subject to constraints on its structural behavior. Based on the output moments of stack filters, it is proven that the optimal stack filter is a combination of the median filter, which has the same window width as the stack filter, and a set of maximum and
NASA Astrophysics Data System (ADS)
Umeda, Toru; Tsuzuki, Shuichi; Boucher, Mikal; Dinh, Hung; Ma, L. C.; Boten, Russell
2006-03-01
Microbubbles generated while filtering tetramethylammonium hydroxide (TMAH) were counted to find the filter that generates the fewest microbubbles in the resist development process. A hydrophilic Highly Asymmetric Poly Aryl Sulfone (HAPAS) filter was developed and tested. The results showed that its microbubble generation was as low as that of the Nylon 6,6 filter, which had the best performance to date. Microbubbles in TARC were counted using the same method as in the developer testing described above, except for the mainstream flow rate and the counter model. The results show that counts in the small channel could be reduced by a smaller-pore-size filter, such as a conventional 0.02 um rated filter, whereas counts in the larger channel could be reduced by a larger-pore-size filter, such as a 0.1 um rated filter. Based on the above results, a 0.02 um rated asymmetric Nylon 6,6 filter was developed; it achieved relatively lower counts in every channel compared to the standard 0.04 um rated Nylon 6,6 filter. Nylon 6,6 filters were installed in resist lines as an improvement for preventive maintenance (PM) at Wafertech, L.L.C., replacing the previously used filter, which has a more hydrophobic membrane material. Using the Nylon 6,6 membrane, the number of defects immediately after a filter change decreased greatly, from 493 pcs/wafer with the more hydrophobic filter to 6 pcs/wafer; after purging with about 250 ml, the number of defects fell within the process specification, whereas the more hydrophobic filter had required 2 L of purging and 12-36 hours of PM time.
Filter and method of fabricating
Janney, Mark A.
2006-02-14
A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.
Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984
Sipkin, S.A.
1987-01-01
The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P-, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.
NASA Astrophysics Data System (ADS)
Tseng, Chien-Hsun
2015-02-01
The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling-wave formulation of lumped electrical elements, is successfully applied to the study of the dynamic responses of symmetrically laminated composite plates based on first-order shear deformation theory. The approach, applied for the first time to laminate mechanics, integrates principles of modeling and simulation, circuit theory, and MD digital signal processing, and offers a variety of attractive features. In particular, preserving passivity gives rise to a nonlinear programming problem (NLP) governing the numerical stability of the MD discrete system. Adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly exploring the solution spaces of NLP models, the numerical stability of the MDWDF network is maintained at all times by satisfying the Courant-Friedrichs-Lewy stability criterion with the least restriction. In particular, the optimum of the NLP leads to the optimality of the network in effectively and accurately predicting the desired fundamental frequency, and gives insight into the robustness of the network through the distribution of system energies. To further explore the application of the optimum network, additional numerical examples are presented to develop a qualitative understanding of the behavior of the laminar system. These investigate the effects of different stacking sequences, stiffness and span-to-thickness ratios, mode shapes and boundary conditions. Results are scrupulously validated by cross-referencing with earlier published works, which shows that the present method is in excellent agreement with other numerical and analytical methods.
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob
2013-01-01
This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.
Igor G. Vladimirov
2015-06-25
This paper is concerned with the coherent quantum filtering (CQF) problem, where a quantum observer is cascaded in a measurement-free fashion with a linear quantum plant so as to minimize a mean square error of estimating the plant variables of interest. Both systems are governed by Markovian Hudson-Parthasarathy quantum stochastic differential equations driven by bosonic fields in vacuum state. These quantum dynamics are specified by the Hamiltonians and system-field coupling operators. We apply a recently proposed transverse Hamiltonian variational method to the development of first-order necessary conditions of optimality for the CQF problem in a larger class of observers. The latter is obtained by perturbing the Hamiltonian and system-field coupling operators of a linear coherent quantum observer along linear combinations of unitary Weyl operators, whose role here resembles that of the needle variations in the Pontryagin minimum principle. We show that if the observer is a stationary point of the performance functional in the class of linear observers, then it is also a stationary point with respect to the Weyl variations in the larger class of nonlinear observers.
Generalized particle flow for nonlinear filters
Fred Daum; Jim Huang
2010-01-01
We generalize the theory of particle flow to stabilize the nonlinear filter. We have invented a new nonlinear filter that is vastly superior to the classic particle filter and the extended Kalman filter (EKF). In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems
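For contrast with the flow-based scheme, the classic bootstrap particle filter it is benchmarked against can be sketched in a few lines; the scalar random-walk model and noise levels here are illustrative assumptions:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.3, rng=None):
    """Sequential importance resampling for a scalar random-walk state
    observed in Gaussian noise: propagate, weight by the likelihood,
    then resample every step (the resampling degeneracy is what
    particle-flow filters aim to avoid)."""
    rng = rng or np.random.default_rng(0)
    particles = rng.standard_normal(n_particles)
    estimates = []
    for z in observations:
        particles = particles + q * rng.standard_normal(n_particles)  # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)                  # likelihood
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        estimates.append(particles.mean())
    return np.array(estimates)
```

Even in this benign scalar setting the filter needs hundreds of particles; in high dimensions the weights collapse, which motivates the flow-based alternative.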
Buyel, Johannes F; Fischer, Rainer
2014-03-01
The extraction of biopharmaceutical proteins from intact leaves involves the release of abundant particulate contaminants that must be removed economically from the process stream before chromatography, for example, using disposable filters that comply with good manufacturing practice. We therefore scaled down an existing 200-kg process for the purification of two target proteins from tobacco leaves (the monoclonal antibody 2G12 and the fluorescent protein DsRed, as monitored by surface plasmon resonance spectroscopy and fluorescence imaging, respectively) and screened different materials on the 2-kg scale to reduce the number of depth filtration steps from three to one. We assessed filter cost and capacity, filtrate turbidity, and protein recovery when the filter materials were challenged with extracts from different tobacco varieties and related species grown in soil or rockwool. PDF4 was consistently the most suitable depth filter because it was the least expensive, it did not interact significantly with the target proteins, and it had the greatest overall capacity. The filter capacity was generally reduced when plants were grown in rockwool, but this substrate has a low bioburden, thus improving process safety. Our data concerning the clarification of plant extracts will help in the design of more cost-effective downstream processes and accelerate their development. PMID:24323869
NASA Astrophysics Data System (ADS)
Shu, Yao-Gen; Zhang, Xiao-Hu; Ou-Yang, Zhong-Can; Li, Ming
2012-01-01
The neck linker is widely believed to play a critical role in the hand-over-hand walking of conventional kinesin 1. Experiments have shown that change of the neck linker length will significantly change the stepping velocity of the motor. In this paper, we studied this length effect based on a highly simplified chemically powered ratchet model. In this model, we assume that the chemical steps (ATP hydrolysis, ADP and Pi release, ATP binding, neck linker docking) are fast enough under conditions far from equilibrium and the mechanical steps (detachment, diffusional search and re-attachment of the free head) are rate-limiting in kinesin walking. According to this model, and regarding the neck linker as a worm-like-chain polypeptide, we can calculate the steady state stepping velocity of the motor for different neck linker lengths. Our results show, under the actual values of binding energy between kinesin head and microtubule (~15 kBT) and the persistence length of the neck linker (~0.5 nm), that there is an optimal neck linker length (~14-16 a.a.) corresponding to the maximal velocity, which implies that the length of the wild-type neck linker (~15 a.a.) might be optimally designed for kinesin 1 to approach the largest stepping velocity.
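The worm-like-chain treatment of the neck linker referred to above is commonly summarized by the Marko-Siggia interpolation for the force-extension relation, with persistence length P and contour length L (quoted here as standard background, not taken from the paper itself):

```latex
F(x) = \frac{k_B T}{P}\left[\frac{1}{4\left(1 - x/L\right)^{2}} - \frac{1}{4} + \frac{x}{L}\right]
```

With P ~ 0.5 nm and L set by the number of amino acids (roughly 0.36 nm per residue), this fixes the entropic cost of stretching the linker during the diffusional search of the free head.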
Multilayer filter design with high K materials
NASA Astrophysics Data System (ADS)
Curtis, Nathaniel, II
A novel approach to filter design is presented. A high-K multilayer coupled line filter is designed for optimal performance within a dielectric resonator of rectangular cross section. The multilayer filter is shown to have a performance comparable to its planar counterpart as well as the Lange coupler, while maintaining the design advantages that come with the multilayer approach to filter design, such as increased flexibility in managing parameter constraints. The performance of the rectangular-cross-section resonator in terms of modal response and resonant frequency has been evaluated through mathematical derivation and simulation. The reader will find a step-by-step process for designing the resonant structure, as well as a MATLAB script that graphically displays the effect that changing various parameters may have on resonator size, to assist in the design analysis. The resonator has been designed to provide a finite package in terms of space and performance so that it may house the multilayer filter on a printed circuit board for ease of system implementation. The proposed design with analysis will prove useful for all multilayer coupled line filter types that may take advantage of the uniform environment provided by the finite packaging of the dielectric resonator. As with any microwave system, considerable effort must be put forth to maintain signal integrity throughout the delivery process from the signal input to reception at the output. As a result, a large amount of effort and research has gone into answering the question of how to efficiently feed both a dielectric resonator filter of rectangular cross section and a coupled line filter embedded within the resonator's confines. Several methods for feeding have been explored and reported on. Of these, the most feasible design includes a unique microstrip delivery to the embedded multilayer filter as pictured here.* *Please refer to dissertation for diagram.
Huang Yi; Wang De-hu; Wang Ji-tang; You Da-de; Wang Jian-ming
2010-01-01
The close-in anti-missile naval gun weapon system adopts closed-loop spotting to improve firing accuracy. The study analyzes the process of closed-loop spotting, points out the importance of calculating the optimal miss-parameter filtering length to maximize the number of hits over the whole engagement, proposes the miss-parameter model, establishes a model for calculating calibration corrections, and simulates the
Surface micromachined optical low-cost all-air-gap filters based on stress-optimized Si3N4 layers
NASA Astrophysics Data System (ADS)
Irmer, S.; Alex, K.; Daleiden, J.; Kommallein, I.; Oliveira, M.; Römer, F.; Tarraf, A.; Hillmer, H.
2005-04-01
A new surface micromachining approach based on a multiple Si3N4- and silicon-layer stack is presented. The fabrication process is implemented by plasma-enhanced chemical vapour deposition of stress-optimized films, reactive ion etching using SF6/CHF3/Ar, wet chemical etching of the sacrificial silicon layers by KOH and critical point drying. Using this approach, the fabrication of an optical all-air-gap vertical-cavity Fabry-Pérot filter is demonstrated. The surface micromachined filter consists of two DBR mirrors, each having five 590 nm thick Si3N4 membranes separated by 390 nm wide air gaps. The distance between the mirrors (cavity) is 710 nm. The optical characterization and a white light interferometer measurement document the accuracy of the layer positioning and the performance of this low-cost approach. The filter shows the designed filter dip at 1490 nm, the full width at half maximum (FWHM) of the filter is 1.5 nm and the insertion loss is just 1.3 dB. The process is compatible with a variety of materials, e.g. III-V compounds, silicon, as well as organic materials, facilitating a huge application spectrum for sensors.
Enrolment Step by Step Guide
Mayer, Wolfgang
Enrolment Step by Step Guide. Last updated: Tuesday, 25 March 2014. Step 1: Receive your offer email or letter. Step 2: Receive your UniSA welcome emails or letter.
NASA Astrophysics Data System (ADS)
Khizar, Muhammad; Ehsan, Md.; Govani, Jayesh; Mei, Dongming
2013-03-01
In this paper, the performance optimization of c-Si1-xGex/Si heterostructure thin film solar cells, along with the effect of a step-graded absorber layer, is discussed by modeling and simulation. Different cells with 1, 3, 5 and 7 μm thick step-graded layers of p-type c-Si1-xGex on top of a 20 μm p-Si buffer layer are simulated. A comparative study of the thin film solar cell structures with and without a step-graded absorption layer is also performed. Key characteristics such as short-circuit current density (Jsc), open-circuit voltage (Voc), and fill factor (FF) are calculated for varying concentrations of germanium (Ge) in the c-Si1-xGex graded layer. With the optimized Ge concentration in the step-graded layer, significant enhancement in the overall efficiency of the solar cells has been calculated. The effect of thickness variation of the alloyed layer for varying Ge composition of ~0.1-10% has also been examined. Finally, the cell performance is evaluated on the basis of current density-voltage characteristic curves and external quantum efficiency. We found that the optimized graded cell structure with larger Ge fractions was responsible for a higher magnitude and smaller thickness dependence of the short-circuit current density. This is attributed to the larger absorption coefficient, which increases optical carrier generation in the near-surface region for larger Ge contents. Further studies of the band-gap engineering of this step-graded absorber layer are still being performed.
Friedmann, Roland
2009-03-05
with peak powers of 0.5 W/cm² were obtained with cathode catalyst layers with Nafion®:Teflon®:C compositions of 1.375:0.375:1 and 0.875:0.875:1, respectively. A comparison study of a two-step and a one-step prepared catalyst was also done to characterize...
NASA Astrophysics Data System (ADS)
Eichhorn, T. R.; Niketic, N.; van den Brandt, B.; Filges, U.; Panzner, T.; Rantsiou, E.; Wenckebach, W. Th.; Hautle, P.
2014-08-01
The use of polarized protons as a neutron spin filter is an attractive alternative to the well established neutron polarization techniques, as the large, spin-dependent neutron scattering cross-section for protons is useful up to the sub-MeV region. Employing optically excited triplet states for the dynamic nuclear polarization (DNP) of the protons relieves the stringent requirements of classical DNP schemes, i.e. low temperatures and strong magnetic fields, making technically simpler systems with open geometries possible. Using triplet DNP, a record polarization of 71% has been achieved in a pentacene-doped naphthalene single crystal at a field of 0.36 T using a simple helium flow cryostat for cooling. Furthermore, by placing the polarized crystal in a neutron optics focus and de-focus scheme, the actual sample cross-section could be increased by a factor of 35, corresponding to an effective spin filter cross-section of 18 × 18 mm².
Kalb, J.
1992-05-01
This paper describes the design of an inverse adaptive filter, using the least-mean-square (LMS) algorithm, to correct data taken with an analog filter. The gradient estimate used in the LMS algorithm is based upon the instantaneous error, e²(n). Minimizing the mean-squared error does not provide an optimal solution in this specific case. Therefore, another performance criterion, error power, was developed to calculate the optimal inverse model. Despite using a different performance criterion, the inverse filter converges rapidly and gives a small mean-squared error. Computer simulations of this filter are also shown in this paper.
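The inverse-modeling setup can be sketched as follows, assuming the standard instantaneous-error LMS update (the paper's alternative error-power criterion is not reproduced here); the filter order and step size are illustrative:

```python
import numpy as np

def lms_inverse(distorted, desired, order=16, mu=0.005):
    """Adapt an FIR filter w so that w applied to the distorted signal
    approximates the desired (pre-filter) signal, i.e. an inverse model
    of the analog filter. Returns the weights and squared-error history."""
    w = np.zeros(order)
    errors = []
    for n in range(order - 1, len(distorted)):
        x = distorted[n - order + 1:n + 1][::-1]  # newest sample first
        e = desired[n] - w @ x                    # instantaneous error
        w += 2 * mu * e * x                       # LMS gradient step
        errors.append(e * e)
    return w, np.array(errors)
```

For a minimum-phase distortion (e.g. the FIR 1 + 0.5 z⁻¹), a modest-length FIR inverse converges quickly and the residual error becomes small.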
Clauss, Marcus; Schulz, Jochen; Stratmann-Selke, Janin; Decius, Maja; Hartung, Jörg
2013-01-01
"Livestock-associated" methicillin-resistant Staphylococcus aureus (LA-MRSA) are frequently found in the air of piggeries, are emitted into the ambient air of the piggeries and may also drift into residential areas or surrounding animal husbandries. In order to reduce emissions from animal houses, such as odour, gases and dust, different biological air cleaning systems are commercially available. In this study the retention efficiencies for culturable LA-MRSA of a bio-trickling filter and a combined three-step system, both installed at two different piggeries, were investigated. Raw gas concentrations for LA-MRSA of 2.1 x 10(2) cfu/m3 (bio-trickling filter) and 3.9 x 10(2) cfu/m3 (three-step system) were found. The clean gas concentrations were in each case approximately one power of ten lower. Both systems were able to reduce the number of investigated bacteria in the air of piggeries by about 90% on average. The investigated systems can contribute to protecting nearby residents. However, considerable fluctuations of the emissions can occur. PMID:23540196
NASA Astrophysics Data System (ADS)
Cheeran, Alice N.; Pandey, Prem C.; Jangamashetti, Dakshayani S.
2002-05-01
In a previous investigation [P. C. Pandey et al., J. Acoust. Soc. Am. 110, 2705 (2001)], a scheme using binaural dichotic presentation was devised for simultaneously reducing the effect of increased temporal and spectral masking in bilateral sensorineural hearing impairment. Speech was processed by a pair of time-varying comb filters with passbands corresponding to cyclically swept auditory critical bands, with the objective that spectral components in neighboring critical bands do not mask each other and that sweeping of the filter passbands provides relaxation time to the sensory cells on the basilar membrane. The present investigation was carried out to find the optimal value of the sweep cycle. The comb filters used were 256-coefficient linear phase filters, with transition crossovers adjusted for low perceived spectral distortion, 1 dB passband ripple, 30 dB stopband attenuation, and 78-117 Hz transition width. Acoustic stimuli consisted of a swept sine wave and running speech from a male and a female speaker. Bilateral loss was simulated by adding broadband noise with constant short-time SNR. Listening tests with stimuli processed using sweep cycles of 10, 20, 40, 50, 60, 80, 100 ms indicated the highest perceptual quality ranking for sweep cycles in the 40-60 ms range, with a peak at 50 ms.
NASA Astrophysics Data System (ADS)
Li, Zhuo; Chen, Geng-Hua; Zhang, Li-Hua; Yang, Qian-Sheng; Feng, Ji
2006-02-01
We present a new least-mean-square algorithm of adaptive filtering to improve the signal-to-noise ratio for magnetocardiography data collected with high-temperature SQUID-based magnetometers. By frequently adjusting the adaptive parameter μ to systematically optimal values in the course of the programmed procedure, convergence is accelerated and the minimum steady-state error is obtained simultaneously. This algorithm may be applied to eliminate other non-stationary noises as well.
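An LMS filter whose step size is itself adapted can be sketched as follows (a generic gradient-adaptive step-size illustration with made-up parameter names, not the authors' exact update rule):

```python
import numpy as np

def adaptive_mu_lms(x, d, order=4, mu0=0.01, rho=1e-4,
                    mu_min=1e-5, mu_max=0.1):
    """LMS filter whose step size mu is adapted by a gradient
    rule (correlating successive instantaneous gradients);
    mu is clipped to keep the update stable."""
    w = np.zeros(order)
    mu = mu0
    prev_grad = np.zeros(order)
    errors = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, newest first
        e = d[n] - w @ u                   # a priori error
        grad = e * u                       # instantaneous gradient
        # grow mu while successive gradients stay correlated
        mu = np.clip(mu + rho * (grad @ prev_grad), mu_min, mu_max)
        w += mu * grad
        prev_grad = grad
        errors.append(e)
    return w, np.array(errors)
```

With a white input and a known FIR system generating `d`, the weights converge to the system's impulse response while `mu` self-tunes.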
NASA Astrophysics Data System (ADS)
Omelyan, Igor; Kovalenko, Andriy
2013-12-01
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. 
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for "flip-flop" conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition.
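The least-squares core of the force extrapolation can be sketched as follows (only the bare LS step with illustrative names; the actual ASFE additionally applies non-Eckart rotational transformations and selects the best force-coordinate pairs from an extended set):

```python
import numpy as np

def extrapolate_forces(R_new, R_hist, F_hist, reg=1e-8):
    """Express the new coordinates as a least-squares combination
    of stored coordinate sets, then apply the same weights to the
    stored solvation forces (regularized normal equations)."""
    A = np.stack([np.asarray(r, float).ravel() for r in R_hist],
                 axis=1)                                  # (3N, K)
    b = np.asarray(R_new, float).ravel()
    c = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    return np.tensordot(c, np.asarray(F_hist, float), axes=1)
```

If the forces depend linearly on the coordinates and the new conformation lies in the span of the history, the extrapolation is exact; in practice it serves as a cheap surrogate between expensive 3D-RISM-KH evaluations.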
A Kalman filter for a two-dimensional shallow-water model
NASA Technical Reports Server (NTRS)
Parrish, D. F.; Cohn, S. E.
1985-01-01
A two-dimensional Kalman filter is described for data assimilation in weather forecasting. The filter is regarded as superior to the optimal interpolation method because it determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
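The generalized time step described above (forecast of state and covariance, gain computation, analysis update) has the standard linear Kalman filter form, sketched here with generic matrices rather than the shallow-water model itself:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One generalized time step of a linear Kalman filter:
    forecast the state x and error covariance P, then update
    with measurement z via the gain matrix K."""
    # forecast step
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # gain and analysis update
    S = H @ P_f @ H.T + R               # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a
```

Iterating this step propagates the error covariance exactly (for linear dynamics), which is the advantage over optimal interpolation noted in the abstract.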
E. DaPra; K. Schneider; R. Bachofen
1989-01-01
Summary An anaerobic filter system with a volume of 11 l fed with wastewater from the Swiss sugar refinery in Frauenfeld was established on a laboratory scale. It provided a filter performance of over 8 kg COD·m⁻³·d⁻¹ with an efficiency of at least 70%. A 600-l pilot plant system in the factory gave a degradation efficiency of 70% when fed
Sharma, M; Todor, D; Fields, E
2014-06-01
Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan to the existent physician delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
Chih-Yuan Hsu; Zhen-Ming Pan; Rei-Hsing Hu; Chih-Chun Chang; Hsiao-Chun Cheng; Che Lin; Bor-Sen Chen
2015-01-01
In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on the well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method of robust biological filters is summarized into three steps. Firstly, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via nonlinear parameter estimation method. Then, the topology of synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of a biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response. PMID:26357282
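The threshold-detector I/O behavior described here can be illustrated with a Hill-type input/output response (a generic sigmoidal model often used for promoter activity; the parameters are illustrative, not fitted promoter-RBS values):

```python
def hill_filter(u, K=1.0, n=4, ymax=1.0):
    """High-pass ('detector') biological filter: the output is
    near ymax only when the input signal u exceeds the threshold
    K; the Hill coefficient n sets the sharpness of the switch."""
    return ymax * u ** n / (K ** n + u ** n)
```

Swapping the response for `ymax * K**n / (K**n + u**n)` gives the complementary low-pass behavior, i.e., output only below the threshold.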
Kao, Jim [Los Alamos National Laboratory, Applied Physics Division, P.O. Box 1663, MS T086, Los Alamos, NM 87545 (United States)]. E-mail: kao@lanl.gov; Flicker, Dawn [Los Alamos National Laboratory, Applied Physics Division, P.O. Box 1663, MS T086, Los Alamos, NM 87545 (United States); Ide, Kayo [University of California at Los Angeles (United States); Ghil, Michael [University of California at Los Angeles (United States)
2006-05-20
This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723.]. The purpose is to test the capability of EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
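The augmented-state idea can be sketched on a toy scalar model x_{k+1} = a·x_k, where the unknown parameter a is appended to the state vector and estimated from measurements of x alone (an illustrative model with illustrative noise levels, not the shock-wave code):

```python
import numpy as np

def ekf_augmented(zs, x0, a0, q_x=1e-4, q_a=1e-4, r=0.01):
    """EKF with the model parameter a folded into the state
    s = [x, a]. The parameter evolves only through its process
    noise, as in the augmented-state formalism."""
    s = np.array([x0, a0], float)
    P = np.eye(2)
    Q = np.diag([q_x, q_a])
    H = np.array([[1.0, 0.0]])       # we measure x only
    for z in zs:
        x, a = s
        F = np.array([[a, x], [0.0, 1.0]])  # Jacobian of f(s)=[a*x, a]
        s = np.array([a * x, a])            # forecast
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T).item() + r        # innovation variance
        K = (P @ H.T) / S                   # gain, shape (2, 1)
        s = s + (K * (z - s[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return s, P
```

Because the parameter is correlated with the state through the Jacobian, the single measurement stream updates both simultaneously, mirroring the unified assimilation operation described above.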
NASA Astrophysics Data System (ADS)
Sarca, Octavian V.; Dougherty, Edward R.; Astola, Jaakko T.
1999-07-01
Filter design involves a trade-off between the size of the filter class over which optimization is to be performed and the size of the training sample. As the number of parameters determining the filter class grows, so too does the size of the training sample required to obtain a given degree of precision when estimating the optimal filter from the sample data. A common way to moderate the estimation problem is to use a constrained filter requiring fewer parameters, but then a trade-off between the theoretical filter performance and the estimation precision arises. The overall result strongly depends on the constraint type. The approaches presented in this paper divide the filter operation into two stages and apply constraints only to the first stage. Such filters are advantageous since they are fully optimal with respect to certain subsets of the filter window. Error expression, representation and design methodology are discussed. A generic optimization algorithm for such two-stage filters is proposed. Special attention is paid to three particular cases, for which properties, design algorithms and experimental results are provided: two-stage filters with linearly separable preprocessing, two-stage filters with restricted window preprocessing, and two-stage iterative filters.
ERIC Educational Resources Information Center
Sikora, Stephanie
2006-01-01
The Optimal Aging Program (OAP) at the University of Arizona, College of Medicine is a longitudinal mentoring program that pairs students with older adults who are considered to be aging "successfully." This credit-bearing elective was initially established in 2001 through a grant from the John A. Hartford Foundation, and aims to expand the…
Stack filters and neural networks
E. J. Coyle
1989-01-01
The stack filter approach, which provides a unique interpretation of the function of each neuron in the network when the goal is to minimize the mean absolute error, is described. Stack filters also provide information on when soft decisions, or sigmoid functions, are necessary for the neural network to attain optimality. The associative memory behavior exhibited by some stack filters
Bak, Claus Leth
is the LCL filter, which became well accepted and widely used as an interface between renewable energy
Optimum edge detection filter.
Dickey, F M; Shanmugam, K S
1977-01-01
Edge detection and enhancement are required in a number of important image processing applications. In this paper we consider the problem of optimizing spatial frequency domain filters for detecting a class of edges in images. The filter is optimum in that it produces maximum energy in the vicinity of the location of the edge for a given spatial resolution I and bandwidth Ω. We show that the filter transfer function can be specified in terms of the prolate spheroidal wavefunctions for a given space-bandwidth product IΩ. Further we show that for values of IΩ less than 2, the optimal filter represents the Laplacian operator in image space followed by a low pass filter with a cutoff frequency Ω. PMID:20168442
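The small space-bandwidth-product limit, a Laplacian followed by a low-pass filter, can be sketched in the frequency domain (a 1-D illustration with an ideal low-pass, not the prolate-spheroidal optimal filter itself; the cutoff value is illustrative):

```python
import numpy as np

def laplacian_lowpass_edge(signal, cutoff):
    """Apply a Laplacian (-w**2 gain) followed by an ideal
    low-pass with the given cutoff (radians/sample) via the FFT;
    on a step edge, the response energy concentrates near the
    edge location."""
    n = len(signal)
    w = 2 * np.pi * np.fft.fftfreq(n)       # radian frequencies
    H = -(w ** 2) * (np.abs(w) <= cutoff)   # Laplacian then LPF
    return np.real(np.fft.ifft(np.fft.fft(signal) * H))

edge = np.r_[np.zeros(64), np.ones(64)]     # ideal step edge
out = laplacian_lowpass_edge(edge, cutoff=1.0)
```

The output magnitude peaks next to the edge and decays away from it, which is the energy-concentration property the optimal filter maximizes.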
Shyy, Wei
Proposal Management Reviewer Suspend Approval Step by Step. Last updated: 11/11/08.
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed achieves not only n-step quadratic convergence but also uses both function value information and gradient value information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
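A generic PRP scheme with the usual nonnegative-beta restart and an Armijo backtracking line search can be sketched as follows (a textbook PRP+ variant, not the authors' modified formula):

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-8, max_iter=1000):
    """Polak-Ribiere-Polyak conjugate gradient with restart:
    whenever the PRP beta turns negative it is reset to 0,
    which restarts the search along steepest descent."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:          # safeguard: ensure a descent direction
            d = -g
        t = 1.0                 # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ restart
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a convex quadratic this converges to the exact minimizer; the restart (beta clipped at zero) is what distinguishes PRP+ from plain PRP.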
Chen, Wu; Jiang, Kunqiang; Mack, Anne; Sachok, Bo; Zhu, Xin; Barber, William E; Wang, Xiaoli
2015-10-01
Superficially porous particles (SPPs) with pore sizes ranging from 90 Å to 120 Å have been a great success for the fast separation of small molecules over totally porous particles in recent years. However, for the separation of large biomolecules such as proteins, particles with large pore sizes (e.g., ≥300 Å) are needed to allow unrestricted diffusion inside the pores. One early example is the commercial wide-pore (300 Å) SPPs in 5 μm size introduced in 2001. More recently, wide-pore SPPs (200 Å and 400 Å) in smaller particle sizes (3.5-3.6 μm) have been developed to meet the increasing interest of biopharmaceutical companies in faster analysis of larger therapeutic molecules. The SPPs on the market are mostly synthesized by the laborious layer-by-layer (LBL) method. A one-step coating approach would be highly advantageous, offering potential benefits in process time, quality control, materials cost, and process simplicity for facile scale-up. A unique one-step coating process for the synthesis of SPPs called the "coacervation method" was developed by Chen and Wei as an improved and optimized process, and has been successfully applied to the synthesis of a commercial product, Poroshell 120 particles, for small-molecule separation. In this report, we describe the most recent development of the one-step coating coacervation method for the synthesis of a series of wide-pore SPPs of different particle sizes, pore sizes, and shell thicknesses. The one-step coating coacervation method was proven to be a universal method to synthesize SPPs of any particle size and pore size. The effects of pore size (300 Å vs. 450 Å), shell thickness (0.25 μm vs. 0.50 μm), and particle size (2.7 μm and 3.5 μm) on the separation of large proteins and of intact and fragmented monoclonal antibodies (mAbs) were studied. Van Deemter studies using proteins were also conducted to compare the mass transfer properties of these particles.
It was found that pore size actually had more impact on the performance of mAbs than particle size and shell thickness. The SPPs with the larger 3.5 μm particle size and larger 450 Å pore size showed the best resolution of mAbs and the lowest back pressure. To the best of our knowledge, this is the largest pore size made on SPPs. These results led to an optimal particle design with a particle size of 3.5 μm, a thin shell of 0.25 μm, and a large pore size of 450 Å. PMID:26342871
Design of Weighted Order Statistic Filters Using the Perceptron Algorithm
Lee, Yong Hoon
of WOS filters encompasses WM, median and rank order filters [4], it is a subclass of stack filters [5]. In [6] and [7], it is shown that an optimal stack filter can be designed under the mean absolute error (MAE) criterion by using linear programming (LP). Although the WOS filter is a special case of stack filters
dos Santos, Raquel; Rosa, Sara A S L; Aires-Barros, M Raquel; Tover, Andres; Azevedo, Ana M
2014-08-15
In this work, phenylboronic acid (PBA) was thoroughly investigated as a synthetic ligand for the purification of an immunoglobulin G (IgG) from a clarified cell supernatant from Chinese Hamster Ovary (CHO) cell cultures. In particular, the study was focused on the development of a washing step and in the optimization of the elution step using a serum containing supernatant. From the different conditions tested, best recoveries - 99% - and purifications - protein purity of 81% and a purification factor of 16 out of a maximum of 20 - were achieved using 100mM d-sorbitol in 10mM Tris-HCl as washing buffer and 0.5M d-sorbitol with 150mM NaCl in 10mM Tris-HCl as elution buffer. The purification outcome was also compared with protein A chromatography that revealed a recovery of 99%, 87% protein purity and 29 out of a maximum of 33 purification factor. Following the main purification, purified IgG was characterized in terms of isoelectric point, size and activity. In the end, a proof of concept was performed using two different mAbs from serum-free CHO cell cultures. PMID:24947887
Sironi, Amos; Tekin, Bugra; Rigamonti, Roberto; Lepetit, Vincent; Fua, Pascal
2015-01-01
Learning filters to produce sparse image representations in terms of over-complete dictionaries has emerged as a powerful way to create image features for many different purposes. Unfortunately, these filters are usually both numerous and non-separable, making their use computationally expensive. In this paper, we show that such filters can be computed as linear combinations of a smaller number of separable ones, thus greatly reducing the computational complexity at no cost in terms of performance. This makes filter learning approaches practical even for large images or 3D volumes, and we show that we significantly outperform state-of-the-art methods on the curvilinear structure extraction task, in terms of both accuracy and speed. Moreover, our approach is general and can be used on generic convolutional filter banks to reduce the complexity of the feature extraction step. PMID:26353211
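The linear-combination idea can be illustrated with the classic SVD route to separability (the paper learns the separable bank jointly; this sketch only shows why a 2-D kernel is expressible as a sum of rank-1, i.e., separable, filters):

```python
import numpy as np

def separable_approx(kernel, rank):
    """Approximate a 2-D convolution kernel as a sum of `rank`
    separable (outer-product) terms via the SVD. Convolving with
    each rank-1 term costs O(k) per pixel instead of O(k^2) for
    a full k x k kernel."""
    U, s, Vt = np.linalg.svd(kernel)
    cols = U[:, :rank] * s[:rank]    # column filters, scaled
    rows = Vt[:rank]                 # row filters
    return cols, rows                # kernel is approx. cols @ rows
```

Keeping only the leading terms gives the complexity reduction: each retained term is applied as a 1-D column pass followed by a 1-D row pass.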
Wood, Claire; Bremner, Brenda
2013-08-09
The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.
NASA Astrophysics Data System (ADS)
Vergöhl, Michael; Pflug, Andreas; Rademacher, Daniel
2012-09-01
The optimization of the uniformity of high-precision optical filters is often a critical and time-consuming procedure. The goal of the present paper is to evaluate critical factors that influence the thickness distribution on substrates during a magnetron sputter process. A newly developed sputter coater, “EOSS”, was used to deposit SiO2 and Nb2O5 single films and optical filters. It is based on dynamic deposition using a rotating turntable. Two sets of cylindrical double magnetrons are used for the low- and high-index layers, respectively. In contrast to common planar magnetrons, the use of cylindrical magnetrons should yield a more stable distribution over the lifetime of the target. The thickness distribution on the substrates was measured by optical methods. Homogenization is carried out by shaping apertures. The distribution of the particle flow from the cylindrical magnetron was simulated using particle-in-cell Monte Carlo plasma simulation developed at Fraunhofer IST. Thickness profiles of the low-index and high-index layers are calculated by numerical simulation and will be compared with the experimental data. Experimental factors such as wobbling of the magnetron during rotation, geometrical changes of critical components of the coater such as uniformity shapers, as well as gas flow variations will be evaluated and discussed.
A generalized adaptive mathematical morphological filter for LIDAR data
NASA Astrophysics Data System (ADS)
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset, using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
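The progressive morphological filtering step can be sketched in 1-D (grayscale opening with a growing window plus a slope-dependent elevation threshold; parameter names and defaults are illustrative, and the real filter operates on 2-D point grids):

```python
import numpy as np

def pm_filter_1d(z, max_window=9, slope=0.5, cell=1.0, dh0=0.2):
    """Progressive morphological ground filter, 1-D sketch:
    grayscale opening (erosion then dilation) with a growing
    window; points lifted above a slope-dependent threshold
    relative to the opened surface are flagged non-ground."""
    ground = np.ones(len(z), bool)
    surface = np.asarray(z, float).copy()
    w = 3
    while w <= max_window:
        half = w // 2
        pad = np.pad(surface, half, mode='edge')
        eroded = np.array([pad[i:i + w].min() for i in range(len(z))])
        pad2 = np.pad(eroded, half, mode='edge')
        opened = np.array([pad2[i:i + w].max() for i in range(len(z))])
        # elevation-difference threshold grows with the window
        dh = max(dh0, slope * half * cell)
        ground &= (surface - opened) <= dh
        surface = opened
        w = 2 * w - 1
    return ground
```

Small objects survive small windows but are removed once the window exceeds their footprint; the constant `slope` here is exactly the fixed threshold slope that the GAPM filter replaces with adaptively estimated values.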
Theoretical framework for filtered back projection in tomosynthesis
NASA Astrophysics Data System (ADS)
Lauritsch, Guenter; Haerer, Wolfgang H.
1998-06-01
Tomosynthesis provides only incomplete 3D-data of the imaged object. Therefore it is important for reconstruction tasks to take all available information carefully into account. We are focusing on geometrical aspects of the scan process which can be incorporated into reconstruction algorithms by filtered backprojection methods. Our goal is a systematic approach to filter design. A unified theory of tomosynthesis is derived in the context of linear system theory, and a general four-step filter design concept is presented. Since the effects of filtering are understandable in this context, a methodical formulation of filter functions is possible in order to optimize image quality regarding the specific requirements of any application. By variation of filter parameters the slice thickness and the spatial resolution can easily be adjusted. The proposed general concept of filter design is exemplarily discussed for circular scanning but is valid for any specific scan geometry. The inherent limitations of tomosynthesis are pointed out and strategies for reducing the effects of incomplete sampling are developed. Results of a dental application show a striking improvement in image quality.
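The "filter" in filtered backprojection is a 1-D kernel applied to each projection before backprojecting; the design concept above reshapes this kernel for the incomplete tomosynthesis geometry, but the plain ramp version can be sketched as:

```python
import numpy as np

def ramp_filter(projection):
    """Frequency-domain |w| (ramp) filtering of one projection,
    the core filtering step of filtered backprojection."""
    spectrum = np.fft.fft(projection)
    ramp = np.abs(np.fft.fftfreq(len(projection)))
    return np.real(np.fft.ifft(spectrum * ramp))
```

A filter-design method like the one described here then apodizes or reshapes `ramp` to adjust slice thickness and spatial resolution instead of using the raw |w| kernel.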
Rinnan, Asmund; Christensen, Niels Johan; Engelsen, Søren Balling
2010-01-01
The quantitative influence of the choice of energy evaluation method used in the geometry optimization step prior to the calculation of molecular descriptors in QSAR and QSPR models was investigated. A total of 11 energy evaluation methods on three molecular datasets (toxicological compounds, aromatic compounds and PPARgamma agonists) were studied. The methods employed were: MMFF94s, MM3* with epsilon(r) (relative dielectric constant) = 1, MM3* with epsilon(r) = 80, AM1, PM3, HF/STO-3G, HF/6-31G, HF/6-31G(d,p), B3LYP/STO-3G, B3LYP/6-31G, and B3LYP/6-31G(d,p). The 3D descriptors used in the QSAR/QSPR models were calculated with commercially available molecular descriptor programs primarily directed toward pharmaceutical research. In order to evaluate the uncertainties involved in the QSAR/QSPR predictions, bootstrapping was used to validate all models, using 1,000 drawings for each dataset. The scale-free error term, q(2), was used to compare the relative quality of the models resulting from different optimization methods on the same set of molecules. Depending on the dataset, the average 0.632 bootstrap estimate of q(2) varies from 0.55 to 0.57 for the toxicological compounds, from 0.58 to 0.62 for the aromatic compounds, and from 0.69 to 0.75 for the PPARgamma agonists. B3LYP/6-31G(d,p) provided the best overall results, albeit the increase in q(2) was small in all cases. The results clearly indicate that the choice of the energy evaluation method has very limited impact. This study suggests that QSAR or QSPR studies might benefit from the choice of a rapid optimization method with little or no loss in model accuracy. PMID:19943083
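The q(2) comparison can be sketched with a simple predictive score plus a naive bootstrap over prediction pairs (a simplified stand-in for the 0.632 bootstrap used in the study; names are illustrative):

```python
import numpy as np

def q2(y, y_hat):
    """Scale-free error term 1 - PRESS/TSS: 1 is a perfect
    model, 0 is no better than predicting the mean."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def bootstrap_q2(y, y_hat, n_boot=1000, seed=0):
    """Bootstrap distribution of q2 obtained by resampling the
    (observation, prediction) pairs with replacement."""
    rng = np.random.default_rng(seed)
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    idx = rng.integers(0, len(y), size=(n_boot, len(y)))
    return np.array([q2(y[i], y_hat[i]) for i in idx])
```

Comparing the bootstrap distributions, rather than single point estimates, is what allows the study to conclude that differences between optimization methods are small relative to the uncertainty.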
Factoring wavelet transforms into lifting steps
Ingrid Daubechies; Wim Sweldens
1998-01-01
This article is essentially tutorial in nature. We show how any discrete wavelet transform or two-band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but which are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or
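The simplest instance is the Haar transform factored into a predict step and an update step (the scaling normalization of the full factorization is omitted here for clarity):

```python
import numpy as np

def haar_lifting(x):
    """One level of the Haar wavelet transform as lifting steps:
    predict the odd samples from the even ones (detail), then
    update the even samples (coarse = pair mean)."""
    s = np.asarray(x, float)[0::2].copy()
    d = np.asarray(x, float)[1::2].copy()
    d -= s           # predict: detail = odd - even
    s += d / 2       # update: coarse = pair mean
    return s, d

def haar_unlifting(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    s = s - d / 2
    d = d + s
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = s, d
    return x
```

Each lifting step is trivially invertible by flipping its sign, which is why the factorization yields in-place, perfectly reconstructing transforms.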
Lu, Wu-Sheng
In addition, error feed-forward helps further reduce the RN [6]. Alternatively, the RN reduction problem has transformation, the other uses error feedback/feed-forward of state variables. In this paper, we propose a method for the joint optimization of error feedback/feed-forward and state-space realization. It is shown
The "Blob" Filter: Gaussian Mixture Nonlinear Filtering with Re-Sampling for Mixand Narrowing
Psiaki, Mark L.
A new Gaussian mixture filter has been developed, one that uses a re-sampling step in order to limit the covariances of its individual Gaussian components. The new filter has been designed to produce
NASA Astrophysics Data System (ADS)
Nogues, J. P.; Nordbotten, J. M.; Celia, M. A.
2013-05-01
One option for monitoring CO2 injection is through pressure measurements made in formations overlying the injection formation. If pressure perturbations due to leakage can be separated from natural background variability, then this can be a viable technology to monitor for CO2 or brine leakage. Two key questions are how many monitoring wells are needed to detect a leakage event, and where those wells should be placed. In this study we present a methodology that uses a combination of a Kalman filter algorithm, a physically based analytical model that solves for pressure propagation across old/abandoned leaky wells in a multi-formation system, and a multi-objective genetic algorithm, to answer these two questions. The Kalman filter is used to explore the covariance reduction based on possible well positions. The physically based model is used to simulate, in a Monte Carlo scheme, a wide range of possible leakage scenarios where the main unknown is the permeability of the old/abandoned leaky wells. The multi-objective genetic algorithm is the Non-dominated Sorting Genetic Algorithm (NSGA-II). The models are combined to address the following three objectives: (1) the minimization of the total variance of the pressure field, (2) the minimization of the number of wells needed to detect a leakage event, and (3) the identification and subsequent elimination of detected leakage events that are considered "not harmful", i.e., events in which the pressure change in the monitored formation is not large enough to induce leakage into the deepest potable water aquifer. The methodology is applied to a synthetic case study, which serves to prove the applicability of the methods and to gather insights on the strengths and weaknesses of using pressure monitoring wells to detect a CO2 leakage event.
Bergman, Werner (Pleasanton, CA)
1986-01-01
An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.
Bergman, W.
1985-01-09
An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.
Davison, James A
2015-01-01
Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.
Ramić, Milica; Vidović, Senka; Zeković, Zoran; Vladić, Jelena; Cvejin, Aleksandra; Pavlić, Branimir
2015-03-01
Aronia melanocarpa by-product from a filter-tea factory was used for the preparation of extracts with a high content of bioactive compounds. The extraction process was accelerated using sonication. A three-level, three-variable face-centered cubic experimental design (FCD) with response surface methodology (RSM) was used to optimize extraction for maximized yields of total phenolics (TP), flavonoids (TF), anthocyanins (MA) and proanthocyanidins (TPA). Ultrasonic power (X1: 72-216 W), temperature (X2: 30-70 °C) and extraction time (X3: 30-90 min) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions for the investigated responses. Three-dimensional surface plots were generated from the mathematical models. The optimal conditions for ultrasound-assisted extraction of TP, TF, MA and TPA were: X1=206.64 W, X2=70 °C, X3=80.1 min; X1=210.24 W, X2=70 °C, X3=75 min; X1=216 W, X2=70 °C, X3=45.6 min; and X1=199.44 W, X2=70 °C, X3=89.7 min, respectively. The model predicted values of TP, TF, MA and TPA of 15.41 mg GAE/ml, 9.86 mg CE/ml, 2.26 mg C3G/ml and 20.67 mg CE/ml, respectively. Experimental validation was performed and close agreement between experimental and predicted values was found (within the 95% confidence interval). PMID:25454824
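The RSM workflow described above (face-centered central composite design, second-order polynomial, least-squares fit) can be sketched as below. The design points are in coded units and the response coefficients are synthetic, purely to illustrate the fit; nothing here reproduces the paper's data.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    """Second-order RSM model terms: intercept, linear terms, and all
    squared/interaction terms x_i * x_j."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

# coded face-centred central composite design for 3 factors:
# 2^3 factorial points, 6 face centres, 1 centre point
pts = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
pts += [(-1, 0, 0), (1, 0, 0), (0, -1, 0),
        (0, 1, 0), (0, 0, -1), (0, 0, 1), (0, 0, 0)]
X = np.array(pts, float)

def fit_rsm(y):
    """Least-squares fit of the full second-order polynomial model."""
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return beta
```

A face-centered CCD supports all ten coefficients of the three-factor quadratic model, which is why it is the standard design for this kind of optimization.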
NSDL National Science Digital Library
Brieske, Joel A.
2003-01-01
The first site related to water filtration is from the US Environmental Protection Agency, entitled EPA Environmental Education: Water Filtration (1). The two-page document explains the need for water filtration and the steps water treatment plants take to purify water. To further understand the process, a demonstration project is provided that illustrates these purification steps, which include coagulation, sedimentation, filtration, and disinfection. The second site is an interesting Flash animation called Filtration: How Does it Work (2), provided by Canada's Prairie Farm Rehabilitation Administration. Visitors will learn about various types of filtration procedures and systems and the materials that are used, such as carbon and sand. Next, from the National Science Foundation, is a learning activity called Get Out the Gunk (3). Using just a few simple items from around the house, kids will be able to answer questions like "Does a filter work better with a lot of water rushing through, or a small trickle?" and "Does it make the water cleaner if you pour it through a filter twice?" The fourth Web site, Rapid Sand Filtration (4), is provided by Dottie Schmitt and Christie Shinault of Virginia Tech. The authors describe the process, which involves the flow of water through a bed of granular media, normally following settling basins in conventional water treatment trains, to remove any particulate matter left over after flocculation and settling. Along with this thorough description, readers can view illustrations and photographs that further explain the process. The Vegetative Buffer Strips for Improved Surface Water Quality (5) Web site is provided by the Iowa State University Extension office. The document explains what vegetative buffer strips are, how they filter contaminants and sediment from surface water, how effective they are, and more. The sixth offering is a file called Infiltration Basins and Trenches (6) offered by the University of Wisconsin Extension.
These structures are intended to collect water, have it infiltrate into the ground, and have it purified along the way. This document explains how effective they are at removing pollutants, how to install them, design guidelines, maintenance, and more. Next, from a site called Wilderness Survival.net, is the Water Filtration Devices (7) page. Visitors read how to make a filtering system out of cloth, sand, crushed rock, charcoal, or a hollow log, although, as is stated, the water still has to be purified afterward. The last site, from the US Geological Survey, is called A Visit to a Wastewater-Treatment Plant: Primary Treatment of Wastewater (8). Although geared towards children, the site does a good job of explaining what happens at each stage of the treatment process and how pollutants are removed to help keep water clean. Everything from screening, pumping, aerating, sludge and scum removal, and killing bacteria to what is done with wastewater residuals is covered.
Wu, Xiao; Zhu, Jun; Cheng, Jiehong; Zhu, Nanwen
2015-03-01
In this study, the effect of three operating parameters, i.e., the first/second volumetric feeding ratio (milliliters/milliliters), the first anaerobic/aerobic (an/oxic) time ratio (minute/minute), and the second an/oxic time ratio (minute/minute), on the performance of a two-step fed sequencing batch reactor (SBR) system treating swine wastewater for nutrient removal was examined. Central Composite Design, coupled with Response Surface Methodology, was employed to test these parameters at five levels in order to optimize the SBR for the best removal efficiencies of six response variables: total nitrogen (TN), ammonium nitrogen (NH4-N), total phosphorus (TP), dissolved phosphorus (DP), chemical oxygen demand (COD), and biochemical oxygen demand (BOD). The results showed that the three parameters investigated had a significant impact on all the response variables (TN, NH4-N, TP, DP, COD, and BOD), although the highest removal efficiency for each individual response was associated with a different combination of the three parameters. The maximum TN, NH4-N, TP, DP, COD, and BOD removal efficiencies of 96.38%, 95.38%, 93.62%, 94.3%, 95.26%, and 92.84% were obtained at optimal first/second volumetric feeding ratios, first an/oxic time ratios, and second an/oxic time ratios of 3.23, 0.4, and 0.8 for TN; 2.64, 0.72, and 0.76 for NH4-N; 3.08, 1.16, and 1.07 for TP; 1.32, 0.81, and 1.0 for DP; 2.57, 0.96, and 1.12 for COD; and 1.62, 0.64, and 1.61 for BOD, respectively. Good linear relationships between the predicted and observed results were observed for all the response variables. PMID:25564205
Hot-gas filter manufacturing assessments: Volume 5. Final report, April 15, 1997
Boss, D.E.
1997-12-31
The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs), intermetallic alloys, and alternate filter geometries. The goal of this effort was to perform a cursory review of the manufacturing processes used by five companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs. While each organization had specific needs, some needs were common among all of the filter manufacturers: access to performance testing of the filters to aid process/product development, a better understanding of the stresses the filters will see in service for use in the structural design of the components, and a strong process sensitivity study to allow optimization of processing.
NASA Technical Reports Server (NTRS)
1993-01-01
The Aquaspace H2OME Guardian Water Filter, available through Western Water International, Inc., reduces lead in water supplies. The filter is mounted on the faucet and the filter cartridge is placed in the "dead space" between sink and wall. This filter is one of several new filtration devices using the Aquaspace compound filter media, which combines company developed and NASA technology. Aquaspace filters are used in industrial, commercial, residential, and recreational environments as well as by developing nations where water is highly contaminated.
Zhu, Yanqiu; Liu, Yanfeng; Li, Jianghua; Shin, Hyun-dong; Du, Guocheng; Liu, Long; Chen, Jian
2015-02-01
In our previous work, a recombinant Bacillus subtilis strain for the microbial production of N-acetylglucosamine (GlcNAc) was constructed through modular pathway engineering. In this study, to enhance GlcNAc production, glucose feeding approaches and dissolved oxygen (DO) control methods in fed-batch culture were systematically investigated. We first studied the effects of different glucose feeding strategies, including exponential fed-batch culture, pulse fed-batch culture, constant rate fed-batch culture, and glucose control (5 g/L, 10 g/L, 15 g/L) fed-batch culture, on cell growth and GlcNAc synthesis. We found that GlcNAc production in glucose control (5 g/L) fed-batch culture reached 26.58 g/L, which was 3.10 times that in batch culture. Next, the effect of DO level (20%, 30%, 40%, and 50%) on GlcNAc production was investigated, and a step-wise DO control strategy (0-7 h, 30%; 7-15 h, 50%; 15-50 h, 40%; 50-72 h, 30%) was introduced. With the optimal glucose and DO control strategy, GlcNAc production reached 35.77 g/L, which was 4.17 times the production in batch culture without DO control. PMID:25499147
Labyrinth stepped seal geometric optimization
Wernig, Marcus Daniel
1995-01-01
High-speed rotating machinery poses a challenging problem to designers and engineers. Interference between rotating and stationary elements can result in excessive wear, decreased machine performance, or machine failure. Labyrinth seals present a...
The use of sample selection probabilities for stack filter design
B. Shmulevich; Vladimir Melnik; Karen Egiazarian
2000-01-01
We propose a procedure for stack filter design that takes into consideration the filter's sample selection probabilities. A statistical optimization of stack filters can result in a class of stack filters, all of which are statistically equivalent. Such a situation arises in cases of nonsymmetric noise distributions or in the presence of constraints. Among the set of equivalent stack filters,
Median type filters and perceptrons
Lin Yin; Jaakko Astola; Yrjo Neuvo
1991-01-01
Threshold composition shows that any multilayer perceptron with positive weights in the binary domain corresponds to a multistage weighted order statistic (MWOS) filter in the real domain. Two adaptive MWOS filtering algorithms, the constrained least mean absolute back-propagation (CLMA-BP) algorithm and the constrained least mean square back-propagation (CLMS-BP) algorithm, are derived for finding the optimal MWOS filters under the mean
NASA Astrophysics Data System (ADS)
Colburn, Christopher; Bewley, Thomas
2010-11-01
The Kalman Filter (KF) is celebrated as the optimal estimator for systems with linear dynamics and Gaussian uncertainty. Although most systems of interest do not have linear dynamics and are not forced by Gaussian noise, the KF is used ubiquitously within industry. Thus, we present a novel estimation algorithm, the Game-theoretic Kalman Filter (GKF), which intelligently hedges between competing sequential filters and does not require the assumption of Gaussian statistics to provide a "best" estimate.
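For reference, the baseline the GKF hedges over is the standard linear KF predict/update cycle. A minimal sketch (the textbook equations, not the authors' GKF) is:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the linear Kalman filter."""
    # predict: propagate state and covariance through the dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # update: fold in the measurement z with gain K
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Fed a repeated noisy measurement of a constant, the estimate converges to that constant while the covariance shrinks, which is the behavior any competing sequential filter in the GKF's pool must match on linear-Gaussian problems.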
ERIC Educational Resources Information Center
Klemetson, S. L.
1978-01-01
Presents the 1978 literature review of wastewater treatment. The review is concerned with biological filters, and it covers: (1) trickling filters; (2) rotating biological contactors; and (3) miscellaneous reactors. A list of 14 references is also presented. (HM)
NASA Technical Reports Server (NTRS)
1985-01-01
Filtration technology originated in a mid-1960s NASA study. The results were distributed to the filter industry, and HR Textron responded, using the study as a departure point for the development of 421 Filter Media. The HR system is composed of ultrafine steel fibers metallurgically bonded and compressed so that the pore structure is locked in place. The filters are used to filter polyesters and plastics, to clean up hydrocarbon streams, and more. Several major companies use the product in chemical applications, pollution control, etc.
Generalized linear correlation filters
NASA Astrophysics Data System (ADS)
Rodriguez, Andres; Vijaya Kumar, B. V. K.
2013-05-01
We present two generalized linear correlation filters (CFs) that encompass most of the state-of-the-art linear CFs. The common criteria that are used in linear CF design are the mean squared error (MSE), output noise variance (ONV), and average similarity measure (ASM). We present a simple formulation that uses an optimal tradeoff among these criteria, both constraining and not constraining the correlation peak value, and refer to the results as the generalized Constrained Correlation Filter (CCF) and Unconstrained Correlation Filter (UCF). We show that most state-of-the-art linear CFs are subsets of these filters. We present a technique for efficient UCF computation. We also introduce the modified CCF (mCCF), which chooses a unique correlation peak value for each training image, and show that mCCF usually outperforms both UCF and CCF.
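One member of this family, an unconstrained optimal-tradeoff filter, can be sketched elementwise in the frequency domain. This is an illustrative UCF-style form under white-noise assumptions, not the paper's exact formulation; the function names and the alpha/beta weighting are ours.

```python
import numpy as np

def uotsdf(images, alpha=0.5, beta=0.5, eps=1e-8):
    """Unconstrained optimal-tradeoff filter sketch:
    H = mean(X_i) / (alpha*D + beta*S), elementwise in frequency, where
    D is the average power spectrum (ONV proxy for white noise) and
    S is the average similarity spectrum (ASM term)."""
    X = np.stack([np.fft.fft2(im) for im in images])
    mean_X = X.mean(axis=0)
    D = (np.abs(X) ** 2).mean(axis=0)
    S = (np.abs(X - mean_X) ** 2).mean(axis=0)
    return mean_X / (alpha * D + beta * S + eps)

def correlate(H, image):
    """Correlation plane of filter H with an input image."""
    return np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(image)))
```

Trained on a single image, this filter whitens its own spectrum, so correlating it with that image produces a sharp peak at the origin of the correlation plane.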
NASA Technical Reports Server (NTRS)
1987-01-01
A compact, lightweight electrolytic water filter generates silver ions in concentrations of 50 to 100 parts per billion in the water flow system. Silver ions serve as effective bactericide/deodorizers. Ray Ward requested and received from NASA a technical information package on the Shuttle filter, and used it as basis for his own initial development, a home use filter.
Orbit determination via adaptive Gaussian swarm optimization
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Pourtakdoust, Seid H.
2015-02-01
Accurate orbit determination (OD) is vital for every space mission. This paper proposes a novel heuristic filter based on adaptive sample-size Gaussian swarm optimization (AGSF). The proposed estimator treats OD as a stochastic dynamic optimization problem and utilizes a swarm of particles to find the best estimate at every time step. One of the key contributions of this paper is the adaptation of the swarm size using a weighted variance approach. The proposed strategy is simulated for a low Earth orbit (LEO) OD problem utilizing geomagnetic field measurements at 700 km altitude. The performance of the proposed AGSF is verified using Monte Carlo simulation, whose results are compared with other advanced sample-based nonlinear filters. It is demonstrated that the adopted filter achieves about 2.5 km accuracy in position estimation, which fulfills the essential requirements of accuracy and convergence time for the OD problem.
Parametric Bayesian filters for nonlinear stochastic dynamical systems: a survey.
Stano, Paweł; Lendek, Zsófia; Braaksma, Jelmer; Babuška, Robert; de Keizer, Cees; den Dekker, Arnold J
2013-12-01
Nonlinear stochastic dynamical systems are commonly used to model physical processes. For linear and Gaussian systems, the Kalman filter is optimal in the minimum mean squared error sense. However, for nonlinear or non-Gaussian systems, the estimation of states or parameters is a challenging problem. Furthermore, it is often required to process data online. Therefore, apart from being accurate, a feasible estimation algorithm also needs to be fast. In this paper, we review Bayesian filters that possess the aforementioned properties. Each filter is presented in an easy-to-implement algorithmic form. We focus on parametric methods, among which we distinguish three types of filters: filters based on analytical approximations (extended Kalman filter, iterated extended Kalman filter), filters based on statistical approximations (unscented Kalman filter, central difference filter, Gauss-Hermite filter), and filters based on the Gaussian sum approximation (Gaussian sum filter). We discuss each of these filters and compare them with illustrative examples. PMID:23757593
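As a concrete instance of the first family (analytical approximations), a bare-bones EKF cycle looks like the sketch below: linearize the process and measurement models at the current estimate, then run the standard KF equations. This is a generic illustration, not the survey's pseudocode.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle with user-supplied models
    f, h and their Jacobians F_jac, H_jac."""
    # predict: propagate through the nonlinear dynamics, linearized covariance
    F = F_jac(x)
    x = f(x)
    P = F @ P @ F.T + Q
    # update: linearize the measurement model at the predicted state
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Estimating a static scalar through the nonlinear measurement h(x) = x^2 shows the relinearization at work: repeated measurements of z = 4 pull the estimate toward x = 2.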
A variable step size LMS algorithm
Raymond H. Kwong; Edward W. Johnston
1992-01-01
A least-mean-square (LMS) adaptive filter with a variable step size is introduced. The step size increases or decreases as the mean-square error increases or decreases, allowing the adaptive filter to track changes in the system as well as produce a small steady state error. The convergence and steady-state behavior of the algorithm are analyzed. The results reduce to well-known results
IMPLEMENTATION OF IIR DIGITAL FILTERS IN FPGA
Anatoli Sergyienko; Volodymir Lepekha; Juri Kanevski; Przemyslaw Soltan
The problem of mapping data flow graphs (DFGs) of infinite impulse responce (IIR) filtering algorithms into application specific structure is considered. Methods of optimization of DFGs are considered for the purpose of finding IIR filter structures with the high throughput and hardware utilization. Optimization method is proposed which takes into account structural properties of FPGA, minimize its hardware volume, and
Robust evolutionary particle filter.
Havangi, R
2015-07-01
The particle filter (PF) has been widely applied to non-linear filtering owing to its ability to carry multiple hypotheses, relaxing the linearity and Gaussian assumptions. However, the PF is inconsistent over time because of the loss of particle diversity, caused mainly by particle depletion in the resampling step and by incorrect a priori knowledge of the process and measurement noise. To overcome these problems, a robust evolutionary particle filter is proposed in this paper. The proposed method works under unknown statistical noise and does not require prior knowledge about the system. In addition, to increase diversity, resampling is performed based on differential evolution (DE). The effectiveness of the proposed method is demonstrated through Monte Carlo simulations. PMID:25669842
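The paper replaces the resampling step with a DE-based one; for context, the standard bootstrap PF cycle it modifies can be sketched as below (propagate, reweight by the likelihood, systematically resample). Function names and the scalar-state simplification are ours.

```python
import numpy as np

def pf_step(particles, weights, z, f, h, q_std, r_std, rng):
    """One bootstrap particle filter cycle for a scalar state: propagate
    through the process model, reweight by a Gaussian measurement
    likelihood, then systematically resample to counter depletion."""
    n = len(particles)
    particles = f(particles) + rng.normal(0.0, q_std, n)
    weights = weights * np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)
    weights = weights / weights.sum()
    positions = (rng.random() + np.arange(n)) / n       # systematic grid
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.clip(idx, 0, n - 1)                        # guard round-off
    return particles[idx], np.full(n, 1.0 / n)
```

It is exactly this resampling that collapses duplicated particles onto a few survivors; the DE-based variant in the paper perturbs the survivors to restore diversity.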
Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory
2009-01-01
Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification, and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers, and it shows that the approach is interesting from both a theoretical and a practical perspective.
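The weighted median at the base of this hierarchy is simple to state: it is the sample minimizing the weighted sum of absolute deviations, found by accumulating sorted weights to half the total. A minimal sketch:

```python
def weighted_median(values, weights):
    """Weighted median: the smallest sample v such that the weights of
    samples <= v reach half the total weight. Generalizes the median
    exactly the way the weighted average generalizes the mean."""
    pairs = sorted(zip(values, weights))
    half = sum(w for _, w in pairs) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v
```

With all weights equal this reduces to the ordinary median; skewing the weights pulls the output toward the heavily weighted samples, which is the degree of freedom Stack Filters build on.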
Haldipur, Gaurang B. (Monroeville, PA); Dilmore, William J. (Murrysville, PA)
1992-01-01
A vertical vessel having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas.
Haldipur, G.B.; Dilmore, W.J.
1992-09-01
A vertical vessel is described having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas. 18 figs.
Recursive Implementations of the Consider Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Chris
2012-01-01
One method to account for parameters errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
Markopoulou, Athina
Considers the problem of blocking malicious traffic on the Internet via source-based filtering, covering activities such as scanning, malicious code propagation, spam, and distributed denial-of-service (DDoS) attacks, and shows significant benefit in practice. Index Terms: Network Security, Internet, Filtering, Clustering Algorithms.
NASA Astrophysics Data System (ADS)
Levan, Paul Thanh-Phong
The characteristics of the transmitted Bremsstrahlung spectrum at 50, 75, and 100 keV endpoint energies are evaluated using recently reported photon attenuation coefficients. The peak energy and full width at half maximum of the beam, the latter considered a measure of the monochromatization of the poly-energetic beam, are evaluated. In these evaluations, the characteristic X-rays of the anode are not considered. In general, it is noticed that the peak energy of the bremsstrahlung spectrum increases and the full width at half maximum decreases. The effect of the K edge is seen clearly on the transmitted spectra for Cu to Pb filters. Filters of Al, Cu, Ag, and Au are evaluated by passing different energies from a diagnostic X-ray unit through different thicknesses of these filters. Evaluation is based on two separate criteria: the amount of energy that passes through the filter, measured by an ion chamber, and the enhanced contrast differences, measured by film densities. Both of these measurements were taken through low, medium, and high atomic number materials, and the data are compared. The filter material and thickness have the expected effect on the energy of the beam. Higher atomic number filters and greater filter thicknesses both reduce the overall transmitted energy. The film contrast data show the different effects beam filter materials can have on film contrast differences within a specific object (e.g., lung) and film contrast differences between different objects (e.g., lung and bone). Different filter types and thicknesses may be used to achieve better film contrast depending on the type of object (i.e., density, atomic number) and the thickness of the object being imaged. The present analysis suggests that, for diagnostic radiology, depending on the endpoint energy of the spectrum, better monochromatization (narrower width) of the bremsstrahlung beam and a smaller surface dose can be achieved by carefully choosing proper metallic filters other than Al.
P. Wendt; E. Coyle
1986-01-01
The median and other rank-order operators possess two properties called the threshold decomposition and the stacking properties. The first is a limited superposition property which leads to a new architecture for these filters; the second is an ordering property which allows an efficient VLSI implementation of the threshold decomposition architecture. Motivated by the success of rank-order filters in a wide
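The two properties can be demonstrated directly: slicing an M-level signal into binary threshold signals, median-filtering each binary slice, and summing the slices reproduces the ordinary running median exactly, because the binary median obeys the stacking property. A small sketch (plain Python/NumPy, not a VLSI architecture):

```python
import numpy as np

def median_by_threshold_decomposition(x, window=3):
    """Running median computed the threshold-decomposition way: decompose
    into binary signals b_t = (x >= t), median-filter each slice, and
    add the filtered slices back up."""
    x = np.asarray(x, dtype=int)
    half = window // 2
    out = np.zeros(len(x), dtype=int)
    for t in range(1, x.max() + 1):
        b = np.pad((x >= t).astype(int), half, mode='edge')
        out += np.array([int(np.median(b[i:i + window]))
                         for i in range(len(x))])
    return out
```

The equality with the direct running median (same edge padding, same windows) is exactly the superposition property the abstract refers to.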
Shaath, Nadim A
2010-04-01
The chemistry, photostability and mechanism of action of ultraviolet filters are reviewed. The worldwide regulatory status of the 55 approved ultraviolet filters and their optical properties are documented. The photostability of butyl methoxydibenzoylmethane (avobenzone) is considered, and methods to stabilize it in cosmetic formulations are presented. PMID:20354639
Aquatic Plants Aid Sewage Filter
NASA Technical Reports Server (NTRS)
Wolverton, B. C.
1985-01-01
Method of wastewater treatment combines micro-organisms and aquatic plant roots in a filter bed. Treatment occurs as liquid flows up through the system. Micro-organisms, attached to the rocky base material of the filter, act in several steps to decompose organic matter in the wastewater. Vascular aquatic plants (typically reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from the water through finely divided roots.
NASA Astrophysics Data System (ADS)
Gardezi, A.; Qureshi, T.; Alkandri, A.; Young, R. C. D.; Birch, P. M.; Chatwin, C. R.
2015-03-01
A spatial domain optimal trade-off Maximum Average Correlation Height (OT-MACH) filter has been previously developed and shown to have advantages over frequency domain implementations in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. In this paper we compare the performance of the spatial domain (SPOT-MACH) filter to the widely applied data driven technique known as the Scale Invariant Feature Transform (SIFT). The SPOT-MACH filter is shown to provide more robust recognition performance than the SIFT technique for demanding images such as scenes in which there are large illumination gradients. The SIFT method depends on reliable local edge-based feature detection over large regions of the image plane, which is compromised in some of the demanding images we examined for this work. The disadvantage of the SPOT-MACH filter is its numerically intensive nature, since it is template based and is implemented in the spatial domain.
Next Steps, 27 September 2002, Michael W. Vannier, NCI BIP. Action items: put agenda on website and link slide presentations; enroll attendees on the archive-comm-l listserver; update links to database projects on the BIP webpage; prepare reports for
Huang, Lianjie
2013-10-29
Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.
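The two-step extrapolation interval described above can be sketched for a monochromatic 1-D wavefield. This is an illustrative simplification of the patented method: the function name is ours, the geometry is 1-D, and evanescent components are simply frozen rather than attenuated.

```python
import numpy as np

def split_step_interval(u, dz, dx, freq, c_ref, c_local):
    """One split-step Fourier extrapolation interval for a monochromatic
    wavefield u(x). Step 1: phase shift in the wavenumber domain for the
    homogeneous reference medium. Step 2: phase screen in the space
    domain compensating the local sound-speed perturbation."""
    k_ref = 2.0 * np.pi * freq / c_ref
    kx = 2.0 * np.pi * np.fft.fftfreq(len(u), d=dx)
    kz = np.sqrt(np.maximum(k_ref**2 - kx**2, 0.0))
    # step 1: propagate through the reference medium (frequency-wavenumber)
    u = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))
    # step 2: phase screen for heterogeneity (frequency-space)
    return u * np.exp(1j * (2.0 * np.pi * freq / c_local - k_ref) * dz)
```

Because both steps are pure phase factors, a homogeneous interval (c_local equal to c_ref) preserves the wavefield energy, which is a quick sanity check on any implementation.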
Filtering in SPECT Image Reconstruction.
Lyra, Maria; Ploussi, Agapi
2011-01-01
Single photon emission computed tomography (SPECT) imaging is widely implemented in nuclear medicine, as its clinical role in the diagnosis and management of several diseases is often very helpful (e.g., myocardium perfusion imaging). The quality of SPECT images is degraded by several factors, such as noise due to the limited number of counts, attenuation, or scatter of photons. Image filtering is necessary to compensate for these effects and, therefore, to improve image quality. The goal of filtering in tomographic images is to suppress statistical noise while simultaneously preserving spatial resolution and contrast. The aim of this work is to describe the most widely used filters in SPECT applications and how these affect image quality. The choice of the filter type, the cut-off frequency and the order is a major problem in clinical routine. In many clinical cases, information for specific parameters is not provided, and findings cannot be extrapolated to other similar SPECT imaging applications. A literature review for the determination of the most used filters in cardiac, brain, bone, liver, kidney, and thyroid applications is also presented. As results from the overview, no filter is perfect, and the selection of the proper filters is, most of the time, done empirically. The standardization of image-processing results may limit the filter types for each SPECT examination to a certain few filters and some of their parameters. Standardization also helps in reducing image processing time, as the filters and their parameters must be standardized before being put to clinical use. Commercial reconstruction software selections lead to comparable results interdepartmentally. The manufacturers normally supply default filters/parameters, but these may not be relevant in various clinical situations. After proper standardization, it is possible to use many suitable filters or one optimal filter. PMID:21760768
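One of the filters the review discusses, the Butterworth low-pass, makes the cut-off/order trade-off concrete: H(f) = 1 / (1 + (f/fc)^(2n)), where fc sets where the roll-off begins and the order n sets how steep it is. A generic frequency-domain sketch (illustrative parameters, not clinical defaults):

```python
import numpy as np

def butterworth_lowpass(image, cutoff, order):
    """Frequency-domain Butterworth low-pass of the kind routinely used
    in SPECT reconstruction. `cutoff` is in cycles/pixel; a low order
    gives a gentle roll-off that trades noise suppression against
    spatial resolution."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

H(0) = 1, so the mean count level is preserved while high-frequency statistical noise is attenuated, which is exactly the behavior sought in SPECT post-filtering.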
Particle flow for nonlinear filters with log-homotopy
Fred Daum; Jim Huang
2008-01-01
We describe a new nonlinear filter that is vastly superior to the classic particle filter. In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems with dimension greater than 2 or 3. We consider nonlinear estimation problems with dimensions varying from 1 to 20
Initial Ares I Bending Filter Design
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark
2007-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidential intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
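A minimal sequential importance resampling (SIR) step, the baseline filter against which the proposed LRPF is compared, might look like this sketch (a 1-D random-walk state with a Gaussian observation model; all noise parameters are hypothetical):

```python
import math, random

def sir_step(particles, obs, q=0.5, r=1.0):
    """One SIR update: predict through the process model, weight by
    the observation likelihood, then resample."""
    # Predict: propagate each particle through the random-walk model.
    particles = [x + random.gauss(0.0, q) for x in particles]
    # Weight: Gaussian likelihood of the observation, then normalize.
    w = [math.exp(-0.5 * ((obs - x) / r) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: draw N particles proportional to their weights.
    return random.choices(particles, weights=w, k=len(particles))

random.seed(0)
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
for obs in [0.2, 0.4, 0.6, 0.8, 1.0]:   # synthetic measurements
    particles = sir_step(particles, obs)
estimate = sum(particles) / len(particles)
```

The resampling step is the source of the sample impoverishment that the paper's regularization and MCMC move step are designed to counteract.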
Holographic photopolymer linear variable filter with enhanced blue reflection.
Moein, Tania; Ji, Dengxin; Zeng, Xie; Liu, Ke; Gan, Qiaoqiang; Cartwright, Alexander N
2014-03-12
A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443
Lp norm design of stack filters.
Savin, C E; Ahmad, M O; Swamy, M N
1999-01-01
This paper addresses the problem of designing optimal stack filters by employing an Lp norm of the error between the desired signal and the estimated one. It is shown that the Lp norm can be expressed as a linear function of the decision errors at the binary levels of the filter. Thus, an Lp-optimal stack filter can be determined as the solution of a linear program. The conventional design using the mean absolute error (MAE), therefore, becomes a special case of the general Lp norm-based design developed here. Other special cases of the proposed approach, of particular interest in signal processing, are the problems of optimal mean square error (p=2) and minimax (p→∞) stack filtering. Since an L∞ optimization is a combinatorial problem, with its complexity increasing faster than exponentially with the filter size, the proposed Lp norm approach to stack filter design offers an additional benefit of a sound mathematical framework to obtain a practical engineering approximation to the solution of the minimax optimization problem. The conventional MAE design of an important subclass of stack filters, the weighted order statistic filters, is also extended to the Lp norm-based design. By considering a typical application of restoring images corrupted with impulsive noise, several design examples are presented to illustrate the performance of the Lp-optimal stack filters with different values of p. Simulation results show that the Lp-optimal stack filters with p ≥ 2 provide a better performance in terms of their capability in removing impulsive noise, compared to that achieved by using the conventional minimum MAE stack filters. PMID:18267450
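The threshold-decomposition view underlying this design, where the same positive Boolean function filters every binary level and the outputs are summed ("stacked"), can be sketched as follows (a 3-sample Boolean median serves as the positive Boolean function; the window and data are illustrative):

```python
def threshold_decompose(x, M):
    # Binary cross-sections: x_m[i] = 1 if x[i] >= m, for levels 1..M.
    return [[1 if v >= m else 0 for v in x] for m in range(1, M + 1)]

def boolean_median3(b):
    # Positive Boolean function: majority vote over a 3-sample window
    # (edges replicated).
    padded = [b[0]] + b + [b[-1]]
    return [1 if padded[i] + padded[i + 1] + padded[i + 2] >= 2 else 0
            for i in range(len(b))]

def stack_filter(x, M=4):
    levels = [boolean_median3(b) for b in threshold_decompose(x, M)]
    # Stacking property: the multilevel output is the sum of the
    # binary-level outputs.
    return [sum(level[i] for level in levels) for i in range(len(x))]

# The impulsive sample 4 is removed, exactly as a running median would.
print(stack_filter([1, 1, 4, 1, 2, 2, 0, 2], M=4))
# → [1, 1, 1, 2, 2, 2, 2, 2]
```

Because each binary-level decision contributes additively to the error, the Lp cost in the paper becomes linear in these binary decision errors, which is what makes the linear-programming formulation possible.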
Time optimal, parameters-insensitive digital controller for DC-DC buck converters
A. Costabeber; L. Corradini; P. Mattavelli; S. Saggini
2008-01-01
In this paper a digital control approach is investigated for time-optimal load step response of DC-DC synchronous buck converters intended for point-of-load applications employing low-ESR ceramic output capacitors. Unlike previously reported approaches, the proposed technique is insensitive to the power stage parameters, as its operation does not rely on the knowledge of the output filter inductance or capacitance. The time-optimal
M. S. Yarlykov; K. K. Skogorev
2008-01-01
Methods of the Markovian estimation theory are applied to the synthesis of optimal and quasi-optimal algorithms that involve time-stepped reassignment of the values of the parameters of conditional a posteriori probability densities of filtered continuous processes. The algorithms are intended for nonlinear processing of a vector discrete-continuous Markovian random process whose continuous component is a vector diffusion Markovian process and
A fast algorithm for designing stack filters.
Yoo, J; Fong, K L; Huang, J J; Coyle, E J; Adams, G B
1999-01-01
Stack filters are a class of nonlinear filters with excellent properties for signal restoration. Unfortunately, present algorithms for designing stack filters can only be used for small window sizes because of either their computational overhead or their serial nature. This paper presents a new adaptive algorithm for determining a stack filter that minimizes the mean absolute error criterion. The new algorithm retains the iterative nature of many current adaptive stack filtering algorithms, but significantly reduces the number of iterations required to converge to an optimal filter. This algorithm is faster than all currently available stack filter design algorithms, is simple to implement, and is shown in this paper to always converge to an optimal stack filter. Extensive comparisons between this new algorithm and all existing algorithms are provided. The comparisons are based both on the performance of the resulting filters and upon the time and space complexity of the algorithms. They demonstrate that the new algorithm has three advantages: it is faster than all other available algorithms; it can be used on standard workstations (SPARC 5 with 48 MB) to design filters with windows containing 20 or more points; and, its highly parallel structure allows very fast implementations on parallel machines. This new algorithm allows cascades of stack filters to be designed; stack filters with windows containing 72 points have been designed in a matter of minutes under this new approach. PMID:18267517
Fuel And Oxidizer Filters For The Galileo Spacecraft
NASA Technical Reports Server (NTRS)
Jan, Darrell L.; Guernsey, Carl S.; Callas, John L.
1993-01-01
Report describes experimental and theoretical studies of filters in propellant streams of propulsion system in Galileo spacecraft. Studies contributed to base of information useful in optimizing design of filters in propulsion systems of future spacecraft.
Tunable Imaging Filters in Astronomy
J. Bland-Hawthorn
2000-06-05
While tunable filters are a recent development in night time astronomy, they have long been used in other physical sciences, e.g. solar physics, remote sensing and underwater communications. With their ability to tune precisely to a given wavelength using a bandpass optimized for the experiment, tunable filters are already producing some of the deepest narrowband images to date of astrophysical sources. Furthermore, some classes of tunable filters can be used in fast telescope beams and therefore allow for narrowband imaging over angular fields of more than a degree over the sky.
A new compact microstrip stacked-SIR bandpass filter with transmission zeros
Eric Shih; Jen-Tsai Kuo
2003-01-01
This paper presents a new class of bandpass filters based on a stacked-SIR (stepped impedance resonators) configuration. By stacking multi-coupled SIR's, the longitudinal dimension of the resulting filter is not increased despite the increase of the filter order, so that the area of the whole filter is very compact. The use of SIR's assures that the filters have a wide
Deconvolution filtering: Temporal smoothing revisited
Bush, Keith; Cisler, Josh
2014-01-01
Inferences made from analysis of BOLD data regarding neural processes are potentially confounded by multiple competing sources: cardiac and respiratory signals, thermal effects, scanner drift, and motion-induced signal intensity changes. To address this problem, we propose deconvolution filtering, a process of systematically deconvolving and reconvolving the BOLD signal via the hemodynamic response function such that the resultant signal is composed of maximally likely neural and neurovascular signals. To test the validity of this approach, we compared the accuracy of BOLD signal variants (i.e., unfiltered, deconvolution filtered, band-pass filtered, and optimized band-pass filtered BOLD signals) in identifying useful properties of highly confounded, simulated BOLD data: (1) reconstructing the true, unconfounded BOLD signal, (2) correlation with the true, unconfounded BOLD signal, and (3) reconstructing the true functional connectivity of a three-node neural system. We also tested this approach by detecting task activation in BOLD data recorded from healthy adolescent girls (control) during an emotion processing task. Results for the estimation of functional connectivity of simulated BOLD data demonstrated that analysis (via standard estimation methods) using deconvolution filtered BOLD data achieved superior performance to analysis performed using unfiltered BOLD data and was statistically similar to well-tuned band-pass filtered BOLD data. Contrary to band-pass filtering, however, deconvolution filtering is built upon physiological arguments and has the potential, at low TR, to match the performance of an optimal band-pass filter. The results from task estimation on real BOLD data suggest that deconvolution filtering provides superior or equivalent detection of task activations relative to comparable analyses on unfiltered signals and also provides decreased variance over the estimate. 
In turn, these results suggest that standard preprocessing of the BOLD signal ignores significant sources of noise that can be effectively removed without damaging the underlying signal. PMID:24768215
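A minimal sketch of the deconvolve-then-reconvolve idea, assuming circular convolution and a hypothetical Wiener-style regularizer `eps` in place of the authors' HRF-based pipeline:

```python
import cmath

def dft(x, inverse=False):
    # Naive DFT, adequate for this 8-point illustration.
    N, s = len(x), (1 if inverse else -1)
    X = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
             for n in range(N)) for k in range(N)]
    return [v / N for v in X] if inverse else X

def deconvolution_filter(y, h, eps=1e-3):
    # Deconvolve y by the response h, then re-convolve, keeping only
    # signal content expressible through h. The eps term is a
    # Wiener-style regularizer for bands where h has little energy.
    Y, H = dft(y), dft(h)
    X = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + eps) for Yk, Hk in zip(Y, H)]
    return [v.real for v in dft([Xk * Hk for Xk, Hk in zip(X, H)],
                                inverse=True)]

# A signal that is already a (circular) convolution with h passes
# through the filter almost unchanged.
h = [0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]   # toy response kernel
x = [0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0]   # underlying signal
y = [sum(x[(n - k) % 8] * h[k] for k in range(8)) for n in range(8)]
z = deconvolution_filter(y, h)
```

Components of y that are consistent with the response h survive the round trip; components in bands where h carries no energy are suppressed, which is the sense in which the result is "maximally likely" given the response model.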
Testing Dual Rotary Filters - 12373
Herman, D.T.; Fowley, M.D.; Stefanko, D.B.; Shedd, D.A.; Houchens, C.L.
2012-07-01
The Savannah River National Laboratory (SRNL) installed and tested two hydraulically connected SpinTek® Rotary Micro-filter units to determine the behavior of a multiple-filter system and develop a multi-filter automated control scheme. Developing and testing the control of multiple filters was the next step in the development of the rotary filter for deployment. The test stand was assembled using as much as possible of the hardware planned for use in the field, including instrumentation and valving. The control scheme developed will serve as the basis for the scheme used in deployment. The multi-filter setup was controlled via an Emerson DeltaV control system running version 10.3 software. Emerson model MD controllers were installed to run the control algorithms developed during this test. Savannah River Remediation (SRR) Process Control Engineering personnel developed the software used to operate the process test model. While a variety of control schemes were tested, two primary algorithms provided extremely stable control as well as significant resistance to process upsets that could lead to equipment interlock conditions. The control system was tuned to provide satisfactory response to changing conditions during the operation of the multi-filter system. Stability was maintained through the startup and shutdown of one of the filter units while the second was still in operation. The equipment selected for deployment, including the concentrate discharge control valve, the pressure transmitters, and flow meters, performed well. Automation of the valve control integrated well with the control scheme and, when used in concert with the other control variables, allowed automated control of the dual rotary filter system. Experience acquired on multi-filter system behavior and with the system layout during this test helped to identify areas where the current deployment rotary filter installation design could be improved.
Completion of this testing provides the necessary information on the control and system behavior that will be used in deployment on actual waste. (authors)
Reach Your Goals, Step by Step
Participants set goals for eating more fruits and vegetables, with a "warm-up" and a "cool-down" built around each activity. The kit contains four sessions, beginning with Session 1: Reach Your Goals, Step by Step.
NASA Astrophysics Data System (ADS)
Tse, Peter W.; Wang, Dong
2013-11-01
Rolling element bearings are among the most important components used in machinery. Bearing faults, once they have developed, quickly become severe and can result in fatal breakdowns. Envelope spectrum analysis is one effective approach to detect early bearing faults through the identification of bearing fault characteristic frequencies (BFCFs). To achieve this, it is necessary to find a band-pass filter that retains a resonant frequency band so as to enhance weak bearing fault signatures. In the Part 1 paper, the wavelet packet filters with fixed center frequencies and bandwidths used in a sparsogram may not cover a whole bearing resonant frequency band. Moreover, a bearing resonant frequency band may be split into two adjacent, imperfectly orthogonal frequency bands, which weakens the bearing fault features. For these two reasons, a sparsity-measurement-based optimal wavelet filter is required, providing a more flexible center frequency and bandwidth so as to cover a bearing resonant frequency band. This Part 2 paper presents an automatic selection process that finds the optimal complex Morlet wavelet filter with the help of a genetic algorithm that maximizes the sparsity measurement value. The modulus of the wavelet coefficients obtained by the optimal wavelet filter is then used to extract the envelope. Finally, a non-linear function is introduced to enhance the visual inspection of BFCFs. Convergence to the optimal filter is accelerated by initializing from the center frequencies and bandwidths of the optimal wavelet packet nodes established by the new sparsogram. Case studies from Part 1, including a simulated bearing fault signal and real bearing fault signals, were used to demonstrate the effectiveness of the optimal wavelet filtering method in detecting bearing faults. Finally, results from comparison studies are presented to verify that the proposed method is superior to three other popular methods.
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.
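The patented adaptive modified Gauss-Newton technique is not reproduced here; the following toy sketch only illustrates the underlying idea of stepping in the Newton direction with an adaptive step length, applied to a simple two-parameter nonlinear model (the model, data, and accept/shrink rule are assumptions for illustration):

```python
import math

def gauss_newton_fit(t, y, a, b, iters=50):
    """Damped Gauss-Newton fit of y ≈ a*(1 - exp(-b*t)). The step is
    the Newton direction for the squared error; an adaptive step
    length shrinks on cost increase and grows on success."""
    def cost(a, b):
        return sum((a * (1 - math.exp(-b * ti)) - yi) ** 2
                   for ti, yi in zip(t, y))
    lr = 1.0
    for _ in range(iters):
        # Residuals and Jacobian columns of the model.
        r  = [a * (1 - math.exp(-b * ti)) - yi for ti, yi in zip(t, y)]
        Ja = [1 - math.exp(-b * ti) for ti in t]
        Jb = [a * ti * math.exp(-b * ti) for ti in t]
        # Normal equations (J^T J) d = -J^T r, solved in closed form.
        g_a = sum(x * ri for x, ri in zip(Ja, r))
        g_b = sum(x * ri for x, ri in zip(Jb, r))
        h_aa = sum(x * x for x in Ja)
        h_bb = sum(x * x for x in Jb)
        h_ab = sum(u * v for u, v in zip(Ja, Jb))
        det = h_aa * h_bb - h_ab ** 2
        d_a = (-g_a * h_bb + g_b * h_ab) / det
        d_b = (-g_b * h_aa + g_a * h_ab) / det
        # Adaptive step length: shrink until the cost decreases.
        while lr > 1e-8 and cost(a + lr * d_a, b + lr * d_b) > cost(a, b):
            lr *= 0.5
        a, b = a + lr * d_a, b + lr * d_b
        lr = min(1.0, lr * 2)
    return a, b

t = [0.5 * i for i in range(10)]
y = [2.0 * (1 - math.exp(-0.5 * ti)) for ti in t]
a, b = gauss_newton_fit(t, y, 1.0, 1.0)   # recovers a ≈ 2, b ≈ 0.5
```

Near a minimum the Gauss-Newton direction approximates the Newton direction without computing second derivatives, which is why such updates typically converge in far fewer iterations than steepest descent (backpropagation).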
An approach to the approximation problem for nonrecursive digital filters
LAWRENCE R. RABINER; BERNARD GOLD; C. McGonegal
1970-01-01
A direct design procedure for nonrecursive digital filters, based primarily on the frequency-response characteristic of the desired filters, is presented. An optimization technique is used to minimize the maximum deviation of the synthesized filter from the ideal filter over some frequence range. Using this frequency-sampling technique, a wide variety of low-pass and bandpass filters have been designed, as well as
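A minimal sketch of the frequency-sampling step for a type-I linear-phase (odd-length, symmetric) design; the length and desired samples are illustrative, and the minimax optimization of transition samples described in the abstract is omitted:

```python
import math

def freq_sampling_fir(D, N):
    """Type-I linear-phase FIR via frequency sampling: D[k] are the
    desired magnitudes at f_k = k/N for k = 0..(N-1)//2, N odd."""
    M = (N - 1) // 2
    return [(D[0] + 2 * sum(D[k] * math.cos(2 * math.pi * k * (n - M) / N)
                            for k in range(1, M + 1))) / N
            for n in range(N)]

def magnitude(h, f):
    # |H(e^{j2πf})| evaluated directly from the impulse response.
    re = sum(hn * math.cos(2 * math.pi * f * n) for n, hn in enumerate(h))
    im = sum(hn * math.sin(2 * math.pi * f * n) for n, hn in enumerate(h))
    return math.hypot(re, im)

# Low-pass: pass the first three sample frequencies, stop the rest.
N = 15
D = [1, 1, 1, 0, 0, 0, 0, 0]
h = freq_sampling_fir(D, N)
```

By construction the response interpolates the desired values exactly at the sample frequencies k/N; the deviation the paper minimizes lives between those samples, which is why the transition-band samples become the free optimization variables.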
Angus T. Bryant; Xiaosong Kang; Enrico Santi; Patrick R. Palmer; Jerry L. Hudgins
2006-01-01
A practical and accurate parameter extraction method is presented for the Fourier-based-solution physics-based insulated gate bipolar transistor (IGBT) and power diode models. The goal is to obtain a model accurate enough to allow switching loss prediction under a variety of operating conditions. In the first step of the extraction procedure, only one simple clamped inductive load test is needed for
Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.
2012-11-15
A multi-step heat treatment, comprising high-temperature forming (150 °C/24 h plus 190 °C for several minutes) and subsequent low-temperature forming (120 °C for 24 h), is developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at the higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain-boundary precipitates. Furthermore, the high strength achieved is related to the uniform distribution of ultrafine η′ precipitates within grains. - Highlights: • Creep age forming developed for manufacturing of aircraft wing panels from aluminum alloy. • A good combination of properties with minimal springback is required in this component. • This requirement can be improved through appropriate heat treatments. • Multi-step cycles developed in creep age forming of AA7075 to improve springback and properties. • Results indicate simultaneous enhancement of properties and shape accuracy (lower springback).
Kuban, Daniel P. (Oak Ridge, TN); Singletary, B. Huston (Oak Ridge, TN); Evans, John H. (Rockwood, TN)
1984-01-01
A plurality of holding tubes are respectively mounted in apertures in a partition plate fixed in a housing receiving gas contaminated with particulate material. A filter cartridge is removably held in each holding tube, and the cartridges and holding tubes are arranged so that gas passes through apertures therein and across the partition plate while particulate material is collected in the cartridges. Replacement filter cartridges are respectively held in holding canisters mounted on a support plate which can be secured to the aforesaid housing, and screws mounted on said canisters are arranged to push replacement cartridges into the cartridge holding tubes and thereby eject used cartridges therefrom.
Jackson, R.E.; Sparks, J.E.
1981-03-03
An air filter is described that has a counter-rotating drum, i.e., the rotation of the drum is opposite the tangential intake of air. The intake air carries about 1 lb of rock wool fibers per 10^7 cu. ft. of air, sometimes at about 100% relative humidity. The fibers are doffed from the drum by suction nozzles that are adjacent to the drum at the bottom of the filter housing. The drum screen is cleaned by periodically jetting hot dry air at 120 psig through the screen into the suction nozzles.
NASA Technical Reports Server (NTRS)
1988-01-01
Seeking to find a more effective method of filtering potable water that was highly contaminated, Mike Pedersen, founder of Western Water International, learned that NASA had conducted extensive research in methods of purifying water on board manned spacecraft. The key is Aquaspace Compound, a proprietary WWI formula that scientifically blends various types of granular activated charcoal with other active and inert ingredients. Aquaspace systems remove some substances, such as chlorine, by atomic adsorption; other types of organic chemicals by mechanical filtration; and still others by catalytic reaction. Aquaspace filters are finding wide acceptance in industrial, commercial, residential and recreational applications in the U.S. and abroad.
Accurate stereo matching by two-step energy minimization.
Mozerov, Mikhail G; van de Weijer, Joost
2015-03-01
In stereo matching, cost-filtering methods and energy-minimization algorithms are considered as two different techniques. Due to their global extent, energy-minimization methods obtain good stereo matching results. However, they tend to fail in occluded regions, in which cost-filtering approaches obtain better results. In this paper, we intend to combine both approaches with the aim of improving overall stereo matching results. We show that a global optimization with a fully connected model can be solved by cost-filtering methods. Based on this observation, we propose to perform stereo matching as a two-step energy-minimization algorithm. We consider two Markov random field (MRF) models: 1) a fully connected model defined on the complete set of pixels in an image and 2) a conventional locally connected model. We solve the energy-minimization problem for the fully connected model, after which the marginal function of the solution is used as the unary potential in the locally connected MRF model. Experiments on the Middlebury stereo data sets show that the proposed method achieves state-of-the-art results. PMID:25622319
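The cost-filtering half of this pipeline can be sketched on 1-D scanlines (toy data; simple box-filter aggregation and winner-take-all in place of the paper's MRF machinery):

```python
def wta_disparity(left, right, max_d=3, win=1):
    """Winner-take-all disparity from a box-filtered absolute-difference
    cost volume — the 'cost-filtering' step, on 1-D scanlines."""
    n = len(left)
    # cost[d][i] = |left[i] - right[i - d]| (clamped at the border)
    cost = [[abs(left[i] - right[max(i - d, 0)]) for i in range(n)]
            for d in range(max_d + 1)]
    # Aggregate each disparity slice with a box filter of radius `win`.
    agg = [[sum(row[max(i - win, 0):i + win + 1]) for i in range(n)]
           for row in cost]
    # Winner-take-all: pick the cheapest disparity per pixel.
    return [min(range(max_d + 1), key=lambda d: agg[d][i])
            for i in range(n)]

# The right scanline is the left one shifted by 2 pixels.
left  = [0, 0, 5, 9, 5, 0, 0, 3, 7, 3, 0, 0]
right = left[2:] + [0, 0]
disp  = wta_disparity(left, right)   # interior recovers disparity 2
```

In the paper's two-step scheme, a filtered cost volume of this kind supplies the unary potentials for the subsequent locally connected MRF optimization instead of being read out directly by winner-take-all.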
The intractable cigarette ‘filter problem’
2011-01-01
Background When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the ‘filter problem’. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. Objective This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Methods Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. Results 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the ‘filter problem’. These reveal a period of intense focus on the ‘filter problem’ that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette production. 
The synthetic plastic cellulose acetate became the fundamental cigarette filter material. By the mid-1960s, the meaning of the phrase ‘filter problem’ changed, such that the effort to develop effective filters became a campaign to market cigarette designs that would sustain the myth of cigarette filter efficacy. Conclusions This study indicates that cigarette designers at Philip Morris, British-American Tobacco, Lorillard and other companies believed for a time that they might be able to reduce some of the most dangerous substances in mainstream smoke through advanced engineering of filter tips. In their attempts to accomplish this, they developed the now ubiquitous cellulose acetate cigarette filter. By the mid-1960s cigarette designers realised that the intractability of the ‘filter problem’ derived from a simple fact: that which is harmful in mainstream smoke and that which provides the smoker with ‘satisfaction’ are essentially one and the same. Only in the wake of this realisation did the agenda of cigarette designers appear to transition away from mitigating the health hazards of smoking and towards the perpetuation of the notion that cigarette filters are effective in reducing these hazards. Filters became a marketing tool, designed to keep and recruit smokers as consumers of these hazardous products. PMID:21504917
Foan, L; Simon, V
2012-09-21
A factorial design was used to optimize the extraction of polycyclic aromatic hydrocarbons (PAHs) from mosses, plants used as biomonitors of air pollution. The analytical procedure consists of pressurized liquid extraction (PLE) followed by solid-phase extraction (SPE) cleanup, in association with analysis by high performance liquid chromatography coupled with fluorescence detection (HPLC-FLD). For method development, homogeneous samples were prepared with large quantities of the mosses Isothecium myosuroides Brid. and Hypnum cupressiforme Hedw., collected from a Spanish Nature Reserve. A factorial design was used to identify the optimal PLE operational conditions: 2 static cycles of 5 min at 80 °C. The analytical procedure performed with PLE showed similar recoveries (~70%) and total PAH concentrations (~200 ng g(-1)) as found using Soxtec extraction, with the advantage of reducing solvent consumption by a factor of 3 (30 mL against 100 mL per sample) and taking a fifth of the time (24 samples extracted automatically in 8 h against 2 samples in 3.5 h). The performance of the SPE normal phases (NH(2), Florisil, silica and activated alumina) generally used for organic matrix cleanup was also compared. Florisil appeared to be the most selective phase and ensured the highest PAH recoveries. The optimal analytical procedure was validated with a reference material and applied to moss samples from a remote Spanish site in order to determine spatial and inter-species variability. PMID:22885040
NASA Technical Reports Server (NTRS)
Shelton, G. B. (inventor)
1977-01-01
A notch filter for the selective attenuation of a narrow band of frequencies out of a larger band was developed. A helical resonator is connected to an input circuit and an output circuit through discrete and equal capacitors, and a resistor is connected between the input and the output circuits.
Tom Kehler, fishery biologist at the U.S. Fish and Wildlife Service's Northeast Fishery Center in Lamar, Pennsylvania, checks the flow rate of water leaving a phosphorus filter column. The USGS has pioneered a new use for acid mine drainage residuals that are currently a disposal challenge, usi...
Locally Adaptive Techniques For Stack Filtering
Doina Petrescu; Ioan Tabus; Moncef Gabbouj
This paper introduces a new structure for stack filtering, where the filter adapts to the local characteristics encountered in the data. Both supervised and unsupervised techniques for optimal design are investigated. We split the image into small regions and select the stack filter to process each region according to the spatial domain or threshold level domain characteristics of the input signal. This method provides a significant improvement potential over
Sub-wavelength efficient polarization filter (SWEP filter)
Simpson, Marcus L.; Simpson, John T.
2003-12-09
A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light, includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS are spaced apart a distance being at least one half an optical wavelength.
Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza
2015-01-01
Aim: The aim of this study was to examine and evaluate crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15 min), 24 min (dynamic: 8×3 min), in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636
Efficient object tracking by incremental self-tuning particle filtering on the affine group.
Li, Min; Tan, Tieniu; Chen, Wei; Huang, Kaiqi
2012-03-01
We propose an incremental self-tuning particle filtering (ISPF) framework for visual tracking on the affine group, which can find the optimal state in a chainlike way with a very small number of particles. Unlike traditional particle filtering, which only relies on random sampling for state optimization, ISPF incrementally draws particles and utilizes an online-learned pose estimator (PE) to iteratively tune them to their neighboring best states according to some feedback appearance-similarity scores. Sampling is terminated if the maximum similarity of all tuned particles satisfies a target-patch similarity distribution modeled online or if the permitted maximum number of particles is reached. With the help of the learned PE and some appearance-similarity feedback scores, particles in ISPF become "smart" and can automatically move toward the correct directions; thus, sparse sampling is possible. The optimal state can be efficiently found in a step-by-step way in which some particles serve as bridge nodes to help others to reach the optimal state. In addition to the single-target scenario, the "smart" particle idea is also extended into a multitarget tracking problem. Experimental results demonstrate that our ISPF can achieve great robustness and very high accuracy with only a very small number of particles. PMID:21965203
NASA Astrophysics Data System (ADS)
Li, Ke; Gomez-Cardona, Daniel; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong
2015-03-01
Optimal selections of tube potential (kV) and tube current (mA) are essential in maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take such constraints, as well as the specific diagnostic imaging task, into consideration. For conventional quasi-linear CT systems employing the linear filtered back-projection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on kV and mA settings. In these cases, zero-frequency analysis such as contrast-to-noise ratio (CNR) or CNR normalized by dose (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach based on modern signal detection theory and the corresponding frequency-dependent analysis has been proposed to perform the kV and mA optimization for both FBP and MBIR. By performing exhaustive measurements of the task-based detectability index through the technically accessible kV-mA parameter space, iso-detectability contours were generated and overlaid on top of iso-dose contours, from which the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.
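The zero-frequency (CNR-based) selection described above amounts to an exhaustive search over the accessible kV-mA grid. A minimal sketch follows; all functional forms and constants are invented purely for illustration and are not the paper's models:

```python
import numpy as np

def pick_kv_ma(kvs, mas):
    """Exhaustive grid search for the (kV, mA) pair maximizing CNRD.

    All models below are toy assumptions for demonstration only.
    """
    best = (-np.inf, None, None)
    for kv in kvs:
        for ma in mas:
            contrast = 1200.0 / kv                     # toy: contrast falls with kV
            noise = 300.0 / np.sqrt(ma * kv**3) + 0.5  # toy: quantum + electronic noise
            dose = ma * kv**2 * 1e-6                   # toy: dose scales with mA * kV^2
            cnrd = (contrast / noise) / np.sqrt(dose)  # dose-normalized CNR
            if cnrd > best[0]:
                best = (cnrd, kv, ma)
    return best[1], best[2]

kv_opt, ma_opt = pick_kv_ma([80, 100, 120, 140], [100, 200, 300, 400])
print(kv_opt, ma_opt)
```

With these toy models the search favors the lowest kV and mA on the grid, since the dose penalty dominates; real task-based optimization replaces the scalar CNRD with a frequency-dependent detectability index.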
Passmore, Brandon Scott; Shaner, Eric Arthur; Barrick, Todd A.
2009-09-01
Metal films perforated with subwavelength hole arrays have been shown to demonstrate an effect known as Extraordinary Transmission (EOT). In EOT devices, optical transmission passbands arise that can have up to 90% transmission and a bandwidth that is only a few percent of the designed center wavelength. By placing a tunable dielectric in proximity to the EOT mesh, one can tune the center frequency of the passband. We have demonstrated over 1 micron of passive tuning in structures designed for an 11 micron center wavelength. If a suitable midwave (3-5 micron) tunable dielectric (perhaps BaTiO3) were integrated with an EOT mesh designed for midwave operation, it is possible that a fast, voltage-tunable, low-temperature filter solution could be demonstrated with a several hundred nanometer passband. Such an element could, for example, replace certain components in a filter wheel solution.
NASA Technical Reports Server (NTRS)
1982-01-01
A compact, lightweight electrolytic water sterilizer available through Ambassador Marketing, generates silver ions in concentrations of 50 to 100 parts per billion in water flow system. The silver ions serve as an effective bactericide/deodorizer. Tap water passes through filtering element of silver that has been chemically plated onto activated carbon. The silver inhibits bacterial growth and the activated carbon removes objectionable tastes and odors caused by addition of chlorine and other chemicals in municipal water supply. The three models available are a kitchen unit, a "Tourister" unit for portable use while traveling and a refrigerator unit that attaches to the ice cube water line. A filter will treat 5,000 to 10,000 gallons of water.
K. Zahedi; J. C. Alexander; P. B. Zieve
1985-01-01
Electrified filter bed apparatus includes inner and outer cylindrical bed-retaining structures for confining a granular bed therebetween. The inner cylindrical structure may comprise a cage of superposed frusto-conical louvers and the outer structure may comprise a similar cage or a perforated cylindrical, liquid-drainage sheet. A cylindrical bed electrode for electrically charging the bed granules is suspended between the retaining structures.
Stepped horn actuated Kelvin probe
NASA Astrophysics Data System (ADS)
Reboul, J.-R.; Guasch, C.; Ferrandis, J.-Y.; Bonnet, J.
2008-02-01
We have developed an original Kelvin probe system using an ultrasonic stepped horn sonotrode. This actuator is optimized in order to maximize the velocity of the tip end, and hence to increase the Kelvin current detected. Such development is essential to improve surface potential measurements at small spatial scale.
Stepped horn actuated Kelvin probe.
Reboul, J-R; Guasch, C; Ferrandis, J-Y; Bonnet, J
2008-02-01
We have developed an original Kelvin probe system using an ultrasonic stepped horn sonotrode. This actuator is optimized in order to maximize the velocity of the tip end, and hence to increase the Kelvin current detected. Such development is essential to improve surface potential measurements at small spatial scale. PMID:18315332
Beekhuijzen, Manon; de Koning, Coco; Flores-Guillén, Maria-Eugenia; de Vries-Buitenweg, Selinda; Tobor-Kaplon, Marysia; van de Waart, Beppy; Emmen, Harry
2015-08-15
In the last couple of years, interest in the zebrafish embryotoxicity test (ZET) for use in developmental toxicity assessment has been growing exponentially. This is also evident from the recent proposal for updating the ICH S5 guideline. The methodology of the ZET used by different groups varies greatly. To further evaluate its success and to take the ZET to the next level, harmonization of procedures is crucial. In the present study, based on literature and empirical data, an optimal study design regarding temperature, test chamber, exposure period, presence of chorion, solvent use, exposure method, choice of concentrations, and teratogenic classification is proposed. Furthermore, our morphology scoring system is reported in detail as a protocol to further enhance study design harmonization. PMID:26111580
Fischer, G; Lindner, S; Litau, S; Schirrmacher, R; Wängler, B; Wängler, C
2015-08-19
As the gastrin releasing peptide receptor (GRPR) is overexpressed on several tumor types, it represents a promising target for the specific in vivo imaging of these tumors using positron emission tomography (PET). We were able to show that PESIN-based peptide multimers can result in substantially higher GRPR avidities, highly advantageous in vivo pharmacokinetics and tumor imaging properties compared to the respective monomers. However, the minimal distance between the peptidic binders, resulting in the lowest possible system entropy while enabling a concomitant GRPR binding and thus optimized receptor avidities, has not been determined so far. Thus, we aimed here to identify the minimal distance between two GRPR-binding peptides in order to provide the basis for the development of highly avid GRPR-specific PET imaging agents. We therefore synthesized dimers of the GRPR-binding bombesin analogue BBN(7-14) on a dendritic scaffold, exhibiting different distances between both peptide binders. The homodimers were further modified with the chelator NODAGA, radiolabeled with (68)Ga, and evaluated in vitro regarding their GRPR avidity. We found that the most potent of the newly developed radioligands exhibits GRPR avidity twice as high as the most potent reference compound known so far, and that a minimal distance of 62 bond lengths between both peptidic binders within the homodimer can result in concomitant peptide binding and optimal GRPR avidities. These findings answer the question as to what molecular design should be chosen when aiming at the development of highly avid homobivalent peptidic ligands addressing the GRPR. PMID:26200324
Stochastic resonance with matched filtering
Li-Fang Li; Jian-Yang Zhu
2010-06-28
Along with the development of interferometric gravitational wave detectors, we enter an epoch of gravitational wave astronomy, which will open a brand new window for astrophysics to observe our universe. Almost all of the data analysis methods in gravitational wave detection are based on matched filtering. Gravitational wave detection is a typical example of weak signal detection, where the weak signal is buried in strong instrument noise. So it seems attractive if we can take advantage of stochastic resonance. But unfortunately, almost all of stochastic resonance theory is based on the Fourier transform and has no relation to matched filtering. In this paper we try to relate stochastic resonance to matched filtering. Our results show that stochastic resonance can indeed be combined with matched filtering for both periodic and non-periodic input signals. This encouraging result is a first step toward applying stochastic resonance to matched filtering in gravitational wave detection. In addition, based on matched filtering, we first propose a novel measurement method for stochastic resonance which is valid for both periodic and non-periodic driven signals.
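Matched filtering itself, on which the abstract builds, can be sketched as correlating noisy data against a known template; the waveform, noise level, and offset below are arbitrary assumptions for illustration only:

```python
import numpy as np

# Minimal matched-filtering sketch (assumed setup, not the paper's code):
# detect a known template buried in white Gaussian noise by correlating
# the data against the template at every lag.
rng = np.random.default_rng(0)

template = np.sin(2 * np.pi * np.arange(64) / 8.0)    # known waveform
data = rng.normal(0.0, 1.0, 1024)                     # instrument noise
true_offset = 500
data[true_offset:true_offset + 64] += 3.0 * template  # weak buried signal

# Matched filter output = correlation with the template (white-noise case)
mf_output = np.correlate(data, template, mode="valid")
est_offset = int(np.argmax(mf_output))
print(est_offset)  # close to true_offset
```

Because the template spans many noise samples, the correlation peak stands far above the noise floor even when the signal is invisible by eye in the raw data.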
NASA Astrophysics Data System (ADS)
Aomoa, N.; Bhuyan, H.; Cabrera, A. L.; Favre, M.; Diaz-Droguett, D. E.; Rojas, S.; Ferrari, P.; Srivastava, D. N.; Kakati, M.
2013-04-01
This paper reports controlled synthesis of carbon nanoparticles by an expanded thermal plasma jet assisted technique through a single-step, high-throughput process. The plasma discharge zone in the experimental reactor remained isolated from the particle nucleation/growth chamber through a supersonic nozzle, which allowed using the sample collection chamber pressure as an efficient control parameter to synthesize carbon nanostructures with a tailored combination of some important properties. Low chamber pressure conditions produced samples with both good specific surface area and crystallinity, which may be ideal for use as an efficient catalyst support material as well as in batteries and supercapacitors. This dominantly mesoporous sample was also found to have good hydrogen absorption properties. Another significant observation was that the average number of carbon nano-sheets stacked together inside the crumpled-paper-like layers increased with pressure in the sample collection chamber. Optical emission spectroscopic techniques were used to measure the effective cooling rates responsible for the particle nucleation process under different experimental conditions, which also indicated that C2 dimer molecules are the basic precursors behind the formation of these carbon nanostructures.
Immune memory clonal selection algorithms for designing stack filters
Weisheng Dong; Guangming Shi; Li Zhang
2007-01-01
Stack filters are a class of non-linear filters for suppressing noise that is uncorrelated with the signal. Their design is formulated as a highly nonlinear optimization problem. A modified immune clonal selection algorithm, called the immune memory clonal selection algorithm, is employed to configure the filter design. The new algorithm has the advantage of preventing premature convergence and
Deterministic Attitude and Pose Filtering, an Embedded Lie Groups
Trumpf, Jochen
Deterministic Attitude and Pose Filtering, an Embedded Lie Groups Approach. Mohammad Zamani, A thesis. Related publications: Zamani, M., Trumpf, J., and Mahony, R., "Near-optimal deterministic filtering on the rotation group," IEEE Transactions on Automatic Control; Zamani, M., Trumpf, J., and Mahony, R., "Minimum-energy pose filtering on the special Euclidean group."
Probability Hypothesis Density filter versus Multiple Hypothesis Tracking
Vo, Ba-Ngu
Probability Hypothesis Density filter versus Multiple Hypothesis Tracking. Kusha Panta, Ba-Ngu Vo. ABSTRACT: The probability hypothesis density (PHD) filter is a practical alternative to the optimal Bayesian multi-target filter. This paper compares the performance of the PHD filter with that of the multiple hypothesis tracking (MHT) approach that has been widely used
Birefringent filter design by use of a modified genetic algorithm
Yao, Jianping
Birefringent filter design by use of a modified genetic algorithm. Mengtao Wen and Jianping Yao. A modified genetic algorithm is proposed for the optimization of the orientation of fiber birefringent filters. Being different from the normal genetic algorithm, the algorithm proposed reduces the problem
Series expansions of Brownian motion and the unscented particle filter
Edinburgh, University of
Series expansions of Brownian motion and the unscented particle filter. October 15, 2013. Abstract: The discrete-time filtering problem for nonlinear diffusion processes is computationally intractable in general. For this reason, methods such as the bootstrap filter are particularly effective at approximating the optimal
Rocket noise filtering system using digital filters
NASA Technical Reports Server (NTRS)
Mauritzen, David
1990-01-01
A set of digital filters is designed to filter rocket noise to various bandwidths. The filters are designed to have constant group delay and are implemented in software on a general purpose computer. The Parks-McClellan algorithm is used. Preliminary tests are performed to verify the design and implementation. An analog filter which was previously employed is also simulated.
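A minimal sketch of such a constant-group-delay (linear-phase) FIR design with the Parks-McClellan algorithm, using SciPy's `remez`; the sample rate, band edges, and tap count are illustrative assumptions, not the report's actual specifications:

```python
import numpy as np
from scipy import signal

# Equiripple lowpass FIR via the Parks-McClellan algorithm. A symmetric
# FIR has exactly linear phase, i.e. a constant group delay of
# (numtaps - 1) / 2 samples, as required for the rocket-noise filters.
fs = 10000.0  # sample rate in Hz (assumed)
taps = signal.remez(
    numtaps=101,
    bands=[0, 1000, 1500, fs / 2],  # pass below 1 kHz, stop above 1.5 kHz
    desired=[1, 0],
    fs=fs,
)

# Symmetry check confirms the constant-group-delay property.
assert np.allclose(taps, taps[::-1], atol=1e-8)

w, h = signal.freqz(taps, worN=2048, fs=fs)  # frequency response for inspection
```

Changing `bands` reproduces the "various bandwidths" mentioned above; each design keeps the same linear-phase structure.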
Non-linear filtering Example: Median filter
Oliensis, John
Non-linear filtering. Example: the median filter, which replaces each pixel value by the median value over a neighborhood and generates no new gray levels. For a width-3 window, I = (1 2 3 2 3 2 1) filters to (2 2 3 2 2) at the interior samples. Advantage (?): removal of the "odd-man-out" effect, e.g. 1,1,1,7,1,1,1,1 becomes ?,1,1,1,1,1,1,? (boundary samples depend on the padding convention). Median filters: example with filter width = 5.
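A minimal Python sketch of the sliding-window median filter described in these notes, evaluated at interior samples only (no padding):

```python
# Sliding-window median filter: replace each sample by the median over a
# window of `width` samples. Interior samples only, so the output is
# shorter than the input by width - 1.
def median_filter(signal, width=3):
    half = width // 2
    out = []
    for i in range(half, len(signal) - half):
        window = sorted(signal[i - half:i + half + 1])
        out.append(window[half])  # middle element of the sorted odd-width window
    return out

print(median_filter([1, 2, 3, 2, 3, 2, 1]))     # [2, 2, 3, 2, 2]
print(median_filter([1, 1, 1, 7, 1, 1, 1, 1]))  # the odd-man-out 7 is removed
```

Note that every output value already occurs in the input, illustrating the "no new gray levels" property.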
Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Sabzalian, Sedigheh
2015-08-15
Air-assisted liquid-liquid microextraction (AALLME) has unique capabilities to develop as an organic solvent-free and one-step microextraction method, applying ionic liquids as extraction solvent and avoiding a centrifugation step. Herein, a novel and simple eco-friendly method, termed one-step air-assisted liquid-liquid microextraction (OS-AALLME), was developed to extract some illegal azo-based dyes (including Sudan I to IV, and Orange G) from food and cosmetic products. A series of experiments were performed to find the most favorable conditions (extraction solvent: 77 µL of 1-hexyl-3-methylimidazolium hexafluorophosphate; sample pH 6.3, without salt addition; and extraction cycles: 25 during 100 s of sonication) using a central composite design strategy. Under these conditions, limits of detection, linear dynamic ranges, enrichment factors and consumptive indices were in the range of 3.9-84.8 ng mL(-1), 0.013-3.1 µg mL(-1), 33-39, and 0.13-0.15, respectively. The results showed that, as well as being simple and fast and requiring no hazardous disperser or extraction solvents, OS-AALLME is a sufficiently sensitive and efficient method for the extraction of these dyes from complex matrices. After optimization and validation, OS-AALLME was applied to estimate the concentration of 1-amino-2-naphthol in human bio-fluids as a main reductive metabolite of the selected dyes. Levels of 1-amino-2-naphthol in plasma and urinary excretion suggested that this compound may be used as a new potential biomarker of these dyes in the human body. PMID:26149246
Adaptive Stack Filtering with Application to Image Processin
Lin Yia; Jaakko T. Astola; Yrjo A. Neuvo
1993-01-01
With the aid of threshold decomposition, it is shown that optimal stack filters under the mean absolute error (MAE) criterion are equal to optimal (or Bayesian) classifiers subject to stacking constraints under the mean classification error (MCE) criterion. Nonadaptive and adaptive constrained least mean absolute (LMA) algorithms are developed for the estimation of stack filters through the linearization of
Lu, Wu-Sheng
Joint Optimization of Error Feedback and Realization. Takao Hinamoto, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527, Japan (hinamoto@hiroshima-u.ac.jp); Toru Oumi, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527, Japan (oumi@hiroshima-u.ac.jp); Wu-Sheng Lu
Murphy, D
1993-01-01
This chapter describes a standard method for the hybridization of labeled DNA probes to nucleic acids bound to a nylon matrix. Filters bearing bound nucleic acids produced by Northern blotting of RNA (Chapter 39), Southern blotting of DNA (Chapter 37), and slot blotting of DNA (Chapters 35) or RNA (Chapter 40) are hybridized to labeled probes using the method described below. The advantages of this method are, first, that the use of a high concentration of SDS in the hybridization buffer ensures a low background level of nonspecific probe adherence to the membrane and, second, an extended period of filter prehybridization is not required. The inclusion of a large amount of SDS does, however, necessitate that the nucleic acids are covalently bonded to the matrix by UV light crosslinking. The inclusion of formamide (15% [v/v]) is also recommended in order to reduce the viscosity of the hybridization buffer. Formamide also has the effect of reducing the temperature of the hybridization reaction. PMID:21390694
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P. (West Lafayette, IN); Ochoa, Ellen (Pleasanton, CA); Sweeney, Donald W. (Alamo, CA)
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
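The threshold-decomposition property exploited by the patent can be checked numerically: a median filter applied to a multilevel signal equals the sum of binary median filters applied to its threshold slices. A small sketch (the signal values are arbitrary):

```python
import numpy as np

# Threshold decomposition: filtering each binary threshold slice and
# summing ("stacking") reproduces direct ranked-order filtering.
def median3(x):
    """Width-3 median filter over interior samples."""
    return np.array([sorted(x[i:i + 3])[1] for i in range(len(x) - 2)])

x = np.array([1, 3, 0, 2, 3, 1, 2])
M = x.max()

# Decompose into binary threshold signals, filter each, then sum.
slices = [(x >= t).astype(int) for t in range(1, M + 1)]
stacked = sum(median3(s) for s in slices)

assert np.array_equal(stacked, median3(x))  # identical to direct median filtering
```

This is exactly what makes the optical implementation possible: each binary slice can be processed by a linear space-invariant correlator followed by a point-wise threshold.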
Optical ranked-order filtering using threshold decomposition
Allebach, J.P.; Ochoa, E.; Sweeney, D.W.
1987-10-09
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
Nielsen, Elisabet I; Cars, Otto; Friberg, Lena E
2011-10-01
A pharmacokinetic-pharmacodynamic (PKPD) model that characterizes the full time course of in vitro time-kill curve experiments of antibacterial drugs was evaluated here for its capacity to predict the previously determined PK/PD indices. Six drugs (benzylpenicillin, cefuroxime, erythromycin, gentamicin, moxifloxacin, and vancomycin), representing a broad selection of mechanisms of action and PK and PD characteristics, were investigated. For each drug, a dose fractionation study was simulated, using a wide range of total daily doses given as intermittent doses (dosing intervals of 4, 8, 12, or 24 h) or as a constant drug exposure. The time course of the drug concentration (PK model) as well as the bacterial response to drug exposure (in vitro PKPD model) was predicted. Nonlinear least-squares regression analyses determined the PK/PD index (the maximal unbound drug concentration [fC(max)]/MIC, the area under the unbound drug concentration-time curve [fAUC]/MIC, or the percentage of a 24-h time period that the unbound drug concentration exceeds the MIC [fT(>MIC)]) that was most predictive of the effect. The in silico predictions based on the in vitro PKPD model identified the previously determined PK/PD indices, with fT(>MIC) being the best predictor of the effect for β-lactams and fAUC/MIC being the best predictor for the four remaining evaluated drugs. The selection and magnitude of the PK/PD index were, however, shown to be sensitive to differences in PK in subpopulations, uncertainty in MICs, and the investigated dosing intervals. In comparison with the use of the PK/PD indices, a model-based approach, where the full time course of effect can be predicted, has a lower sensitivity to study design and allows for PK differences in subpopulations to be considered directly. This study supports the use of PKPD models built from in vitro time-kill curves in the development of optimal dosing regimens for antibacterial drugs. PMID:21807983
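As a small illustration of one of the indices above, fT(>MIC) for a one-compartment model with first-order elimination can be computed numerically; all PK parameter values here are hypothetical, not the study's:

```python
import numpy as np

# Illustrative computation of fT>MIC: the percentage of the dosing
# interval during which the unbound concentration exceeds the MIC,
# for a one-compartment IV bolus model with first-order elimination.
def ft_above_mic(c0, half_life_h, mic, tau_h=24.0, n=100000):
    k = np.log(2) / half_life_h            # elimination rate constant (1/h)
    t = np.linspace(0.0, tau_h, n)
    conc = c0 * np.exp(-k * t)             # unbound concentration-time curve
    return 100.0 * np.mean(conc > mic)     # % of the interval above MIC

# e.g. C0 = 16 mg/L, half-life 1 h, MIC 1 mg/L over a 24 h interval
# (analytically, concentration stays above MIC for ln(16)/k = 4 h):
print(round(ft_above_mic(16.0, 1.0, 1.0), 1))
```

For β-lactams, dose fractionation changes this quantity strongly while leaving fAUC/MIC unchanged, which is why the indices can be discriminated by simulated fractionation studies.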
Evidence-Based Used, Yet Still Controversial: The Arterial Filter
Somer, Filip De
2012-01-01
Abstract: Arterial line filters are considered by many as an essential safety measure inside a cardiopulmonary bypass circuit. There is no doubt that this was true during the bubble oxygenator era, but we can question whether the existing arterial line filter design and the positioning of the filter are still optimal given the tremendous progress in cardiopulmonary bypass circuit components. This article gives a critical overview of existing arterial line filter designs. PMID:22730869
An online novel adaptive filter for denoising time series measurements.
Willis, Andrew J
2006-04-01
A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented, and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562
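The windowed principal-components idea can be sketched offline as follows (a rough illustration only; the paper's filter operates online and causally, and the window length and rank here are assumptions):

```python
import numpy as np

# PCA-based denoising sketch: embed the series in overlapping windows,
# keep the leading principal components, and reconstruct by averaging
# the overlapping windows back into a series.
rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 400)
clean = np.sin(t)
noisy = clean + rng.normal(0, 0.4, t.size)

w, rank = 20, 2  # window length and number of components kept (assumed)
X = np.lib.stride_tricks.sliding_window_view(noisy, w)
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
Xd = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu  # low-rank approximation

# Average the overlapping windows back into a 1-D series.
denoised = np.zeros_like(noisy)
counts = np.zeros_like(noisy)
for i in range(Xd.shape[0]):
    denoised[i:i + w] += Xd[i]
    counts[i:i + w] += 1
denoised /= counts

err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
print(err_denoised < err_noisy)  # expect True: reconstruction closer to the sine
```

Keeping only the leading components discards most of the noise energy, since a slowly varying signal concentrates in a few principal directions of the window space.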
ADAPTIVE FILTERS Solutions of Computer Projects
California at Los Angeles, University of
ADAPTIVE FILTERS: Solutions of Computer Projects. Ali H. Sayed, Electrical Engineering Department. Part I: Optimal Estimation. Project I.1: Linear Estimation. Project II.1: Linear equalization and decision devices
Optimizing Running Performance.
ERIC Educational Resources Information Center
Widule, Carol J.
1989-01-01
The optimization of step length and step rate (frequency) is essential for sprinters. This article analyzes data that compare step rate and step length to height, as a function of running speed, for ten elite runners. How results of such analyses can be used in training runners is also discussed. (IAH)
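The underlying relation is simply speed = step length × step rate; a toy illustration of the trade-off (the numbers are not the article's elite-runner data):

```python
# Running speed as the product of step length and step rate.
# All numbers below are illustrative assumptions.
def running_speed(step_length_m, step_rate_hz):
    """Speed in m/s from step length (m) and step rate (steps/s)."""
    return step_length_m * step_rate_hz

# The same 10 m/s sprint speed can come from different combinations:
print(running_speed(2.0, 5.0))  # 10.0 m/s: shorter steps, higher rate
print(running_speed(2.5, 4.0))  # 10.0 m/s: longer steps, lower rate
```

Optimizing performance means finding, for a given runner's height and mechanics, which combination along such an iso-speed curve is sustainable.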
In-place testing of off-gas iodine filters
Duce, S.W.; Tkachyk, J.W.; Motes, B.G.
1980-01-01
At the Idaho National Engineering Laboratory, both charcoal and silver zeolite (AgX) filters are used for radioactive iodine off-gas cleanup of reactor systems. These filters are used in facilities which are conducting research in the areas of reactor fuel failure, reactor fuel inspection, and loss of fluids from reactor vessels. Iodine retention efficiency testing of these filters is dictated by prudent safety practices and regulatory guidelines. A procedure for determining iodine off-gas filter efficiency in-place has been developed and tested on both AgX and charcoal filters. The procedure involves establishing sample points upstream and downstream of the filter to be tested. A step-by-step approach for filter efficiency testing is presented.
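The efficiency computed from the upstream and downstream sample points reduces to a one-line formula; the activity values below are illustrative only, not measurements from the report:

```python
# In-place filter efficiency from paired upstream/downstream samples,
# as in the step-by-step testing procedure described above.
def filter_efficiency(upstream_activity, downstream_activity):
    """Iodine retention efficiency (%) = 100 * (1 - downstream/upstream)."""
    return 100.0 * (1.0 - downstream_activity / upstream_activity)

# e.g. a hypothetical 5000 (upstream) vs 2.5 (downstream) activity reading:
print(filter_efficiency(5000.0, 2.5))  # 99.95 %
```

In practice both samples must be collected under matched flow conditions, which is why establishing the sample points is the first step of the procedure.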
Cordierite silicon nitride filters
J. Sawyer; B. Buchan; R. Duiven; M. Berger; J. Cleveland; J. Ferri
1992-01-01
The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions and reporting the results. In Phase I, Cordierite Silicon Nitride
Dual Stack Filters. … and Charles A. Bouman, Member, IEEE. IEEE Transactions on Image Processing, Vol. 6, No. 12, December 1997, p. 1634. Abstract: The theory of optimal stack filtering has been used … filters. Under this condition, the stack filters obtained are duals of each other. Only one filter must
NASA Astrophysics Data System (ADS)
Karslıoğlu, Mahmut Onur; Aghakarimi, Armin
2013-04-01
Ionosphere modeling is an important field of current studies because of its influences on the propagation of the electromagnetic signals. Among the various methods of obtaining ionospheric information, Global Positioning System (GPS) is the most prominent one because of extensive stations which are distributed all over the world. There are several studies in the literature related to the modeling of the ionosphere in terms of Total Electron Content (TEC). However, most of these studies investigate the ionosphere in the global and regional scales. On the other hand, complex dynamic of the ionosphere requires further studies in the local structure of the TEC distribution. In this work, Particle filter has been used for the investigation of the local character of the ionospheric Vertical Total Electron Content (VTEC). The GPS data of 29 ground based GPS stations, belonging to International GNSS Service (IGS) and Reference Frame Sub-commission for Europe (EUREF), for Europe have been used in this study. The data acquisition time is 18 February 2011 and the data is affected by the 15 February geomagnetic storm. In the preprocessing step, the observations of each satellite are examined for any possible cycle slip and also geometry-free linear combination of the observables are calculated for each continuous arc. Then, Pseudorange observations smoothed with the carrier to code leveling method. Particle filter is used for near-real time estimation of the VTEC and of the combined satellite and receiver biases. The Particle filter is implemented by recursively generating a set of weighted samples of the state variables. This filter has a flexible nature which can be more adaptive to some characteristics of the high dynamic systems. Besides, standard Kalman filter as an effective method for optimal state estimation is applied to the same data sets to compare the corresponding results with results of Particle filter. 
The comparison shows that the Particle filter performs better than the standard Kalman filter, especially during the geomagnetic storm. Keywords: ionosphere, GPS, Kalman filter, Particle filter
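The recursive weighted-sampling scheme described above can be sketched as a minimal bootstrap particle filter. The scalar random-walk state, noise levels, and particle count below are illustrative assumptions, not the paper's actual VTEC and bias parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter for a scalar random-walk state.

    State model:       x_k = x_{k-1} + N(0, q^2)
    Observation model: z_k = x_k + N(0, r^2)
    Returns the weighted-mean state estimate at each step.
    """
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in obs:
        # Predict: propagate each weighted sample through the state model.
        particles = particles + rng.normal(0.0, q, n_particles)
        # Update: re-weight samples by the observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / r) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Systematic resampling keeps the sample set from degenerating.
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                         n_particles - 1)
        particles = particles[idx]
    return np.array(estimates)

# Synthetic slowly drifting state observed in noise (a stand-in for a VTEC series).
truth = 5.0 + np.cumsum(rng.normal(0.0, 0.1, 200))
obs = truth + rng.normal(0.0, 0.5, 200)
est = particle_filter(obs)
print(np.mean((est - truth) ** 2) < np.mean((obs - truth) ** 2))
```

On this linear-Gaussian toy model a Kalman filter would be optimal; the particle filter's advantage appears when the dynamics or noise become non-Gaussian, as during storm conditions.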
The use of filter media to determine filter cleanliness
NASA Astrophysics Data System (ADS)
Van Staden, S. J.; Haarhoff, J.
It is generally believed that a sand filter starts its life with new, perfectly clean media, which become gradually clogged with each filtration cycle, eventually reaching a point where either head loss or filtrate quality starts to deteriorate. At this point the backwash cycle is initiated and, through the combined action of air and water, returns the media to their original perfectly clean state. Reality, however, dictates otherwise. Many treatment plants visited a decade or more after commissioning are found to have unacceptably dirty filter sand and backwash systems incapable of returning the filter media to a desired state of cleanliness. In some cases these problems are common ones encountered in filtration plants, but many reasons for media deterioration remain elusive, falling outside these common problems. The South African conditions of highly eutrophic surface waters at high temperatures, however, exacerbate the problems with dirty filter media. Such conditions often lead to the formation of biofilm in the filter media, which is shown to inhibit the effective backwashing of sand and carbon filters. A systematic investigation into filter media cleanliness was therefore carried out from 2002 to 2005 at the University of Johannesburg (then the Rand Afrikaans University). This involved media from eight South African water treatment plants, varying between sand and sand-anthracite combinations, with raw water types ranging from eutrophic through turbid to low-turbidity waters. Five states of cleanliness and four fractions of specific deposit were identified, relating to in situ washing, column washing, cylinder inversion and acid-immersion techniques. These were measured and the results compared to acceptable limits for specific deposit, as determined in previous studies, expressed in kg/m³. These values were used to determine the state of the filters.
In order to gain greater insight into the composition of the specific deposits stripped from the media, a four-point characterisation step was introduced for the resultant suspensions, based on acid-solubility and volatility. Results showed that a reasonably effective backwash removed a median specific deposit of 0.89 kg/m³. Further washing in a laboratory column removed a median specific deposit of 1.34 kg/m³. A standardised cylinder inversion procedure removed a median specific deposit of 2.41 kg/m³, and immersion in a strong acid removed a median specific deposit of 35.2 kg/m³. The four-point characterisation step showed that the soluble-volatile fraction was consistently small in relation to the other fractions. The organic fraction was quite high at the RG treatment plant, and the soluble-non-volatile fraction was particularly high at the BK treatment plant.
NASA Astrophysics Data System (ADS)
Marin-Franch, Antonio; Taylor, Keith; Cenarro, Javier; Cristobal-Hornillos, David; Moles, Mariano
2015-08-01
J-PAS (Javalambre-PAU Astrophysical Survey) is a Spanish-Brazilian collaboration to conduct a narrow-band photometric survey of 8500 square degrees of northern sky using an innovative filter system of 59 filters: 56 relatively narrow-band (FWHM = 14.5 nm) filters continuously populating the spectrum from 350 to 1000 nm in 10 nm steps, plus 3 broad-band filters. This filter system will be able to produce photometric redshifts with a precision of 0.003(1 + z) for Luminous Red Galaxies, allowing J-PAS to measure the radial scale of the Baryonic Acoustic Oscillations. The J-PAS survey will be carried out using JPCam, a 14-CCD mosaic camera using the new e2v 9k-by-9k, 10 μm pixel CCDs mounted on the JST/T250, a dedicated 2.55 m wide-field telescope at the Observatorio Astrofísico de Javalambre (OAJ) near Teruel, Spain. The filters will operate in a fast (f/3.6) converging beam. The requirements for average transmissions greater than 85% in the passband, out-of-band blocking below 10⁻⁵ from 250 to 1050 nm, steep bandpass edges and high image quality impose significant challenges for the production of the J-PAS filters that have demanded the development of new design solutions. This talk presents the J-PAS filter system and describes the most challenging requirements and adopted design strategies. Measurements and tests of the first manufactured filters are also presented.
W. Brown; R. Crane
1969-01-01
Aspects of optimum filtering for complex valued random processes are presented. Ordinary linear filters are complemented with conjugate linear filters. It is found that the incorporation of conjugate linear filtering improves signal-to-noise ratio by a factor of two in matched filter receivers. For optimum least squares filtering the inclusion of conjugate processing reduces mean-square error by a factor as great
Genetically Engineered Microelectronic Infrared Filters
NASA Technical Reports Server (NTRS)
Cwik, Tom; Klimeck, Gerhard
1998-01-01
A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is infeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing a new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.
Classification aided cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2012-06-01
Target class measurements, if available from automatic target recognition systems, can be incorporated into multiple target tracking algorithms to improve measurement-to-track association accuracy. In this work, the performance of the classifier is modeled as a confusion matrix, whose entries are target class likelihood functions that are used to modify the update equations of the recently derived multiple-model CPHD (MMCPHD) filter. The result is the new classification-aided CPHD (CACPHD) filter. Simulations on multistatic sonar datasets with and without target class measurements show the advantage of including available target class information in the data association step of the CPHD filter.
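As a toy illustration of the mechanism described above, a confusion-matrix entry can scale a kinematic association likelihood. The two-class setup and matrix values below are invented for illustration and are not the paper's sonar model.

```python
import numpy as np

# Classifier confusion matrix: C[i, j] = P(declared class j | true class i).
# Two classes {0: target of interest, 1: other}, with invented entries.
C = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def association_weight(kinematic_lik, true_class, declared_class):
    """Scale a kinematic association likelihood by the class likelihood,
    the basic mechanism behind classification-aided data association."""
    return kinematic_lik * C[true_class, declared_class]

# A kinematically plausible measurement (likelihood 0.9) is favored when its
# declared class matches the track's class and down-weighted otherwise.
w_match = association_weight(0.9, true_class=0, declared_class=0)
w_mismatch = association_weight(0.9, true_class=0, declared_class=1)
print(round(w_match, 2), round(w_mismatch, 2))
```

In the CACPHD filter these scaled likelihoods enter the filter's measurement-update equations rather than a single association decision.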
Collier, J; Aldoohan, S; Gill, K
2014-06-01
Purpose: Reducing patient dose while maintaining (or even improving) image quality is one of the foremost goals in CT imaging. To this end, we consider the feasibility of optimizing CT scan protocols in conjunction with the application of different beam-hardening filtrations and assess this augmentation through noise-power spectrum (NPS) and detective quantum efficiency (DQE) analysis. Methods: American College of Radiology (ACR) and Catphan phantoms (The Phantom Laboratory) were scanned with a 64-slice CT scanner after additional filtration of various thicknesses and compositions (e.g., copper, nickel, tantalum, titanium, and tungsten) had been applied. A MATLAB-based code was employed to calculate the image noise NPS. The Catphan Image Owl software suite was then used to compute the modulation transfer function (MTF) responses of the scanner. The DQE for each additional filter, including the inherent filtration, was then computed from these values. Finally, CT dose index (CTDIvol) values were obtained for each applied filtration through the use of a 100 mm pencil ionization chamber and a CT dose phantom. Results: NPS, MTF, and DQE values were computed for each applied filtration and compared to the reference case of inherent beam-hardening filtration only. Results showed that the NPS values were reduced by between 5 and 12% compared to the inherent-filtration case. Additionally, CTDIvol values were reduced by between 15 and 27%, depending on the composition of the filtration applied. However, no noticeable changes in image contrast-to-noise ratios were noted. Conclusion: The reduction in the quantum-noise section of the NPS profile found in this phantom-based study is encouraging. The reduction in both noise and dose through the application of beam-hardening filters is reflected in our phantom image quality.
However, further investigation is needed to ascertain the applicability of this approach to reducing patient dose while maintaining diagnostically acceptable image qualities in a clinical setting.
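A phantom-based NPS estimate of the kind mentioned above can be sketched as an ensemble-averaged 2-D periodogram of flat-field ROIs. The normalization convention and the synthetic Gaussian noise below are assumptions for illustration, not the authors' MATLAB code.

```python
import numpy as np

def noise_power_spectrum(rois, pixel_mm=0.5):
    """Ensemble-averaged 2-D NPS estimate from flat-field ROIs.

    NPS(u, v) = (dx * dy / (Nx * Ny)) * <|DFT2(ROI - mean)|^2>,
    one common phantom-based estimator (normalization conventions vary).
    """
    rois = np.asarray(rois, dtype=float)
    _, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove each ROI's mean
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return spectra.mean(axis=0) * (pixel_mm ** 2) / (nx * ny)

rng = np.random.default_rng(1)
rois = rng.normal(100.0, 5.0, size=(32, 64, 64))   # synthetic uniform-phantom patches
nps = noise_power_spectrum(rois)
# Sanity check via Parseval: the NPS integrates back to pixel variance * pixel area.
var_est = nps.sum() / (64 * 64 * 0.5 ** 2)
print(round(var_est))
```

With real CT data the ROIs would be taken from uniform phantom slices, and the DQE then combines this NPS with the measured MTF and the incident photon fluence.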
INITIAL STEPS COMMITTEE REPORT &
Burg, Theresa
Effects of electron beam irradiation of cellulose acetate cigarette filters
NASA Astrophysics Data System (ADS)
Czayka, M.; Fisch, M.
2012-07-01
A method to reduce the molecular weight of cellulose acetate used in cigarette filters by electron beam irradiation is demonstrated. Radiation levels easily obtained with commercially available electron accelerators result in a roughly sixfold decrease in average molecular weight with no embrittlement or significant change in the elastic behavior of the filter. Since a first step in the biodegradation of cigarette filters is a reduction in the filter material's molecular weight, this invention has the potential to allow the production of significantly faster-degrading filters.
Step by Step Instructions Adding a Description of Work
Step by Step Instructions: Adding a Description of Work. Work Group Revision 1f, August 12, 2009. Step 1: Log In. Step 2: Go to My Work Groups. Step 3: Open Work Group. Step 4: Update Description of Work. Step 5: Go to Edit View. Step 6: Edit the Tasks, Hazards, & Controls. Step 7: Go to Preview. Step 8
Design of binary serial-coded filters
NASA Astrophysics Data System (ADS)
Liu, Ying; Lu, Mingzhe; Zhang, Jianming; Fang, ZhiLiang; Liu, Fu-Lai; Mu, Guoguang
1994-03-01
Binary serial-coded filters (BSCFs) are easily implemented optically. They are based on the optimization functions of the Hopfield model with nonsynchronous iterative neuron algorithms. Fewer filters are needed to carry out the same recognition task compared with other methods, and the error-tolerance ability is also very strong. All target objects can be correctly recognized when the characteristic codes are properly chosen. Starting from different initial states, we can obtain solutions close to the overall optimum of the net.
Visual pattern recognition using coupled filters
NASA Astrophysics Data System (ADS)
Monroe, Stanley E., Jr.; Juday, Richard D.; Barton, R. Shane; Qin, Michael K.
1995-06-01
We discuss the use of an optical correlator with a highly coupled filter and dappled targets to track an object in a field of view cluttered by background noise and/or similar objects. The dappled targets are fractal images whose statistics are independent of scale. Each is unique for tracking the targets. We report the drop in correlation (hence recognition) of an object as a function of in-plane rotation and as a function of range. We discuss plans for an application in Johnson Space Center's Automation and Robotics group, in which correlation processing of these targets would distinguish an object and pass its position and orientation to a robot control system. Using MEDOF (minimum Euclidean distance optimal filter) to create filters on the coupled filter modulator, we show that background clutter can be optically filtered out.
Optimization of Pilot Point Locations: an efficient and geostatistical perspective
NASA Astrophysics Data System (ADS)
Mehne, J.; Nowak, W.
2012-04-01
The pilot point method is a wide-spread method for calibrating ensembles of heterogeneous aquifer models on available field data such as hydraulic heads. The pilot points are virtual measurements of conductivity, introduced as localized carriers of information in the inverse procedure. For each heterogeneous aquifer realization, the pilot point values are calibrated until all calibration data are honored. Adequate placement and numbers of pilot points are crucial both for accurate representation of heterogeneity and to keep the computational costs of calibration at an acceptable level. Current placement methods for pilot points either rely solely on the expertise of the modeler, or they involve computationally costly sensitivity analyses. None of the existing placement methods directly addresses the geostatistical character of the placement and calibration problem. This study presents a new method for optimal selection of pilot point locations. We combine ideas from Ensemble Kalman Filtering and geostatistical optimal design with straightforward optimization. In a first step, we emulate the pilot point method with a modified Ensemble Kalman Filter for parameter estimation at drastically reduced computational costs. This avoids the costly evaluation of sensitivity coefficients often used for optimal placement of pilot points. Second, we define task-driven objective functions for the optimal placement of pilot points, based on ideas from geostatistical optimal design of experiments. These objective functions can be evaluated quickly, without carrying out the actual calibration process, requiring nothing else but ensemble covariances that are available from step one. By formal optimization, we can find pilot point placement schemes that are optimal in representing the data for the task at hand with minimal numbers of pilot points.
In small synthetic test applications, we demonstrate the promising computational performance and the geostatistically logical choice of pilot point locations. In comparison with a classical regularly spaced pilot point grid, we achieved an equally good calibration result with a drastically smaller number of pilot points (only 5%), promising a much faster performance of the pilot point method itself.
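The ensemble-based update that replaces sensitivity computations can be sketched as a standard Ensemble Kalman Filter parameter-update step. The toy linear "model" and dimensions below are illustrative assumptions, not the study's groundwater model.

```python
import numpy as np

def enkf_update(params, preds, obs, obs_err_std, seed=2):
    """One Ensemble Kalman Filter parameter-update step.

    params : (n_ens, n_par) ensemble of parameter vectors
    preds  : (n_ens, n_obs) model-predicted observations for each member
    obs    : (n_obs,) measured calibration data
    The gain is built purely from ensemble covariances, so no sensitivity
    coefficients are ever computed.
    """
    n_ens = params.shape[0]
    A = params - params.mean(axis=0)                  # parameter anomalies
    Y = preds - preds.mean(axis=0)                    # prediction anomalies
    C_py = A.T @ Y / (n_ens - 1)                      # param/prediction cross-covariance
    C_yy = Y.T @ Y / (n_ens - 1) + np.eye(obs.size) * obs_err_std ** 2
    K = C_py @ np.linalg.inv(C_yy)                    # ensemble Kalman gain
    rng = np.random.default_rng(seed)
    perturbed = obs + rng.normal(0.0, obs_err_std, (n_ens, obs.size))
    return params + (perturbed - preds) @ K.T

# Toy linear "aquifer": the single head observation is the mean of 5 parameters.
rng = np.random.default_rng(3)
params = rng.normal(0.0, 1.0, (200, 5))
preds = params.mean(axis=1, keepdims=True)
obs = np.array([2.0])
updated = enkf_update(params, preds, obs, obs_err_std=0.1)
print(round(float(updated.mean()), 1))
```

The same ensemble covariances that drive this update are what the study reuses to score candidate pilot point locations without re-running the calibration.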
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
A Branch-and-Bound Algorithm for Quadratically-Constrained Sparse Filter Design
Wei, Dennis
This paper presents an exact algorithm for sparse filter design under a quadratic constraint on filter performance. The algorithm is based on branch-and-bound, a combinatorial optimization procedure that can either guarantee ...
Rashmi Thakur; Dipayan Das; Apurba Das
2012-01-01
This review summarizes the research progress made so far on electret air filters used for the separation of airborne particles from complex air streams. The different categories of these filters are delineated and their methods of manufacture are described. The principles and mechanisms of filtration and the modeling of pressure drop by these filters are analyzed. The
Not Available
1991-05-17
This report documents progress through May 16, 1990 in the marketing of the Mobile K' filter. This air filter traps fine particulates. A total number of 167 of the filter units have been sold. An effort to increase sales by lowering the cost of the units by delivering the filters unassembled is under way. (GHH)
Recirculating electric air filter
Bergman, Werner (Pleasanton, CA)
1986-01-01
An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.
HEPA filter dissolution process
Brewer, K.N.; Murphy, J.A.
1994-02-22
A process is described for the dissolution of spent high efficiency particulate air (HEPA) filters, with the complexed filter solution then combined with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method of acid leaching the spent filters, which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.
Recirculating electric air filter
Bergman, W.
1985-01-09
An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.
Hepa filter dissolution process
Brewer, Ken N. (Arco, ID); Murphy, James A. (Idaho Falls, ID)
1994-01-01
A process for the dissolution of spent high efficiency particulate air (HEPA) filters, with the complexed filter solution then combined with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternative to a prior method of acid leaching the spent filters, which is an inefficient method of treating spent HEPA filters for disposal.
ERIC Educational Resources Information Center
Stille, J. K.
1981-01-01
Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
Bourret, S.C.; Swansen, J.E.
1982-07-02
A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
Adaptive filtering in biological signal processing.
Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A
1990-01-01
The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown statistics" situation and has been applied extensively in areas such as communication, speech, radar, sonar, seismology, and biological signal processing and analysis, for channel equalization, interference and echo cancelling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this article, adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed. PMID:2180633
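The on-line LMS coefficient update described above can be sketched in a few lines. The system-identification setup, tap count, and step size below are illustrative choices, not tied to any particular biomedical application.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.05):
    """LMS adaptive filter: tap weights are updated on-line from the error
    between the desired signal d and the filter output."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]   # newest input sample first
        e = d[n] - w @ u                       # a priori estimation error
        w += 2 * mu * e * u                    # steepest-descent weight update
    return w

# System identification: recover an "unknown" 3-tap FIR channel from I/O data,
# with no prior knowledge of the signal statistics.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 5000)
h = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, h)[:len(x)]
w = lms_filter(x, d)
print(np.round(w[:3], 2))   # converges toward [0.5, -0.3, 0.2]
```

In a biomedical setting, x would be a reference channel (e.g. a noise or interference source) and d the contaminated physiological signal, with the error e serving as the cleaned output.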
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
FILTER PAPER Microwave Sample Preparation Note App. Note: MS-9
Paytan, Adina
This method provides for the acid digestion of cellulose acetate filter paper (Whatman #4) in a closed vessel, with a filtration step, if needed, prior to analysis. NOTE A: This procedure is a reference point for sample
Inverse Variation: Step By Step Lesson
NSDL National Science Digital Library
2012-08-29
This step by step lesson from the Math Ops website explains inverse variation. Students can read the text or follow along as it is read out loud. The lesson includes nine slides which explain what an inverse variation equation is, and include several real world examples of this type of mathematical model.
Effect of line broadening on the performance of Faraday filters
Zentile, Mark A; Whiting, Daniel J; Keaveney, James; Adams, Charles S; Hughes, Ifan G
2015-01-01
We show that homogeneous line broadening drastically affects the performance of atomic Faraday filters. We use a computerized optimization algorithm to find the best magnetic field and temperature for Faraday filters with a range of cell lengths. The effect of self-broadening is found to be particularly important for short vapour cells, and for `wing-type' filters. Experimentally we realize a Faraday filter using a micro-fabricated $^{87}$Rb vapour cell. By modelling the filter spectrum using the ElecSus program we show that additional homogeneous line broadening due to the background buffer-gas pressure must also be included for an accurate fit.
Analytical methods for performance evaluation of nonlinear filters.
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Sridhar, R.
1971-01-01
In the investigation, the filtering problem is considered in the continuous time domain. The postulated simple suboptimal nonlinear filter structure closely parallels the structure of the Kalman-Bucy optimal linear filter algorithm. Two filter performance evaluation methods are developed based on the Kolmogorov equations for the transition density of Markov processes. The expansions in the approximations for the nonlinear system and observation functions are in effect carried out up to second-order terms in both methods. The description of the filter's performance is sought in terms of second-order statistics in both methods.
Properties of multilayer filters
NASA Technical Reports Server (NTRS)
Baumeister, P. W.
1973-01-01
New methods were investigated of using optical interference coatings to produce bandpass filters for the spectral region 110 nm to 200 nm. The types of filter are: triple cavity metal dielectric filters; all dielectric reflection filters; and all dielectric Fabry Perot type filters. The latter two types use thorium fluoride and either cryolite films or magnesium fluoride films in the stacks. The optical properties of the thorium fluoride were also measured.
Ryo Okamoto; Jeremy L. O'Brien; Holger F. Hofmann; Tomohisa Nagata; Keiji Sasaki; Shigeki Takeuchi
2009-05-01
The ability to filter quantum states is a key capability in quantum information science and technology, in which one-qubit filters, or polarizers, have found wide application. Filtering on the basis of entanglement requires extension to multi-qubit filters with qubit-qubit interactions. We demonstrated an optical entanglement filter that passes a pair of photons if they have the desired correlations of their polarization. Such devices have many important applications to quantum technologies.
Reduction of turbidity by a coal-aluminium filter
Collins, A.G.; Johnson, R.L.
1985-06-01
Coal-aluminium granular filters successfully reduce turbidity in low-alkalinity raw waters to less than 1.0 ntu, without a coagulation step or external coagulant aids. Data from experiments conducted with control and pilot-plant filters show the viability of the process and indicate the turbidity and retention mechanisms. Operational characteristics of the process are similar to those of a conventional filter. The costs of the coal-aluminium process compare favourably with those of traditional treatment.
W-band microshield low-pass filters
Stephen V. Robertson; Linda P. B. Katehi; Gabriel M. Rebeiz
1994-01-01
Experimental and theoretical results are presented for a planar W-Band low-pass filter. A stepped impedance implementation of a 7-section 0.5 dB equal ripple Chebyshev filter achieves an insertion loss of 1 dB in the passband and a 90 GHz cutoff frequency. The filter is fabricated in microshield line technology, a new type of planar transmission line based on coplanar waveguide
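The electrical prototype behind such a stepped-impedance design is a 7-section, 0.5 dB equal-ripple Chebyshev low-pass response, which can be checked numerically. The design below is normalized to a 1 rad/s cutoff rather than the paper's 90 GHz realization.

```python
import numpy as np
from scipy import signal

# 7th-order, 0.5 dB equal-ripple Chebyshev low-pass prototype (normalized
# cutoff; the microshield implementation maps this to 90 GHz).
b, a = signal.cheby1(N=7, rp=0.5, Wn=1.0, btype='low', analog=True)
w, h = signal.freqs(b, a, worN=np.logspace(-1, 1, 500))
gain_db = 20 * np.log10(np.abs(h))
# Equal-ripple passband: gain stays within [-0.5, 0] dB up to the cutoff,
# then rolls off steeply (7th-order) above it.
print(gain_db[w <= 1.0].min() >= -0.51, gain_db[-1] < -40.0)
```

In the stepped-impedance realization, each prototype inductor and capacitor becomes a short high- or low-impedance transmission-line section.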
Fuzzy adaptive filters, with application to nonlinear channel equalization
Li-Xin Wang; Jerry M. Mendel
1993-01-01
Two fuzzy adaptive filters are developed: one uses a recursive-least-squares (RLS) adaptation algorithm, and the other uses a least-mean-square (LMS) adaptation algorithm. The RLS fuzzy adaptive filter is constructed through the following four steps: (1) define fuzzy sets in the filter input space U ⊂ R^n whose membership functions cover U; (2) construct a set of fuzzy IF-THEN rules which either come
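The RLS adaptation engine named above can be sketched as follows; the fuzzy rule layer is omitted, and the tap count, forgetting factor, and toy channel are illustrative assumptions.

```python
import numpy as np

def rls_filter(x, d, n_taps=4, lam=0.99, delta=100.0):
    """Recursive-least-squares adaptive filter: the same adaptation law that
    drives the RLS fuzzy adaptive filter, applied here to plain FIR taps."""
    w = np.zeros(n_taps)
    P = np.eye(n_taps) * delta              # inverse correlation matrix estimate
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # newest input sample first
        k = P @ u / (lam + u @ P @ u)        # gain vector
        e = d[n] - w @ u                     # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam   # update inverse correlation matrix
    return w

# Identify a 2-tap channel; RLS converges much faster than LMS per sample.
rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 2000)
d = np.convolve(x, [0.7, -0.2])[:len(x)]
w = rls_filter(x, d)
print(np.round(w, 2))   # first two taps approach 0.7 and -0.2
```

In the fuzzy version, the regressor u would be replaced by the vector of fuzzy basis functions generated by the IF-THEN rules.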
Stepped frequency ground penetrating radar
Vadnais, Kenneth G. (Ojai, CA); Bashforth, Michael B. (Buellton, CA); Lewallen, Tricia S. (Ventura, CA); Nammath, Sharyn R. (Santa Barbara, CA)
1994-01-01
A stepped frequency ground penetrating radar system is described comprising an RF signal-generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth, which serves as a common RF signal source for both the transmit portion and the receive portion of the system. In the transmit portion of the system, the signal is processed into in-phase and quadrature signals, which are then amplified and transmitted toward a target. The signals reflected from the target are received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is fed through a low-pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency-domain signal by a fast Fourier transform. A plot of the frequency-domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
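The range-extraction step, a Fourier transform across the stepped-frequency samples, can be sketched for a single point target. The frequency plan and target range below are invented for illustration and differ from any real GPR hardware.

```python
import numpy as np

c = 3e8                                   # propagation speed, m/s
n_steps, df = 128, 2e6                    # 128 frequency steps of 2 MHz
freqs = 100e6 + np.arange(n_steps) * df   # stepped transmit frequencies

# Ideal I/Q receiver samples for a point target at range R: each frequency
# step measures the two-way phase exp(-j 2*pi*f * 2R/c).
R_true = 37.5
samples = np.exp(-2j * np.pi * freqs * 2 * R_true / c)

# An inverse FFT across the frequency steps yields the range profile.
# Unambiguous range is c / (2*df) = 75 m, split into c / (2*N*df) range bins.
profile = np.abs(np.fft.ifft(samples))
bin_m = c / (2 * n_steps * df)            # ~0.586 m per bin
R_est = np.argmax(profile) * bin_m
print(R_est)
```

The peak position gives the range; with multiple scatterers, the relative peak heights carry the cross-section information the abstract mentions.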
Weighted Bloom Filter Jehoshua Bruck Jie Gao Anxiao (Andrew) Jiang
Bruck, Jehoshua (Shuki)
For skewed query distributions, such as the step distribution or Zipf's distribution, the weighted Bloom filter improves the false positive probability. If all k probed bits are `1', then the query returns `yes'. A Bloom filter has no false negatives but a nonzero false positive probability, which achieves its minimum value (1/2)^k when the hash functions are chosen optimally. A Bloom filter uses only a constant
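A plain (unweighted) Bloom filter showing the all-k-bits-set membership test can be sketched as follows. The sizes and hash construction are illustrative; the paper's weighted variant would vary k per element according to its query frequency.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k hashed probes into an m-bit array. (The weighted
    variant assigns more hash functions to frequently queried elements;
    here every element gets the same k.)"""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _probes(self, item):
        # Derive k probe positions from salted SHA-256 digests (illustrative choice).
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # 'yes' only if all k probed bits are set: no false negatives,
        # but colliding insertions can produce a false positive.
        return all(self.bits[p] for p in self._probes(item))

bf = BloomFilter()
for word in ["kalman", "particle", "bloom"]:
    bf.add(word)
print("kalman" in bf, "wiener" in bf)
```

With m much larger than the number of set bits, the false positive probability for an unseen item is tiny, which is why the second query almost surely returns False.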
Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon
2014-07-01
We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2'' diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
Crowdsourcing step-by-step information extraction to enhance existing how-to videos
Nguyen, Phu Tran
Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step ...
Particle Filters with Approximation Steps Boris Oreshkin and Mark Coates
be a sequence of measurable spaces. The target state vector evolves according to a non-homogeneous (discrete-time) model involving the generation of parametric mixture models. The main results of the paper are time-uniform bounds on the weak approximation error. We motivate the theoretical analysis by considering the example of the "leader
Interacting Particle Filtering With Discrete Observations
Del Moral , Pierre
in the nonlinear filtering problem (in short, NLF). That is, we want to find the one-step predictor conditional ... the two types of NLF problems covered by our work. Case A: the state signal (X_n)_{n∈ℕ} is an E-valued process. A crucial practical advantage of the first category of NLF problems is that it leads to a natural IPS
Canonical Signed Digit Study. Part 2; FIR Digital Filter Simulation Results
NASA Technical Reports Server (NTRS)
Kim, Heechul
1996-01-01
A Finite Impulse Response digital filter using Canonical Signed-Digit (CSD) number representation for the coefficients has been studied, and its computer simulation results are presented here. The Minimum Mean Square Error (MMSE) criterion is employed to optimize filter coefficients into the corresponding CSD numbers. To further improve the coefficient optimization process, an extra non-zero bit is added for any filter coefficient exceeding 1/2. This technique improves the frequency response of the filter with almost no increase in filter complexity. The simulation results show outstanding performance in the bit-error-rate (BER) curves for all CSD-implemented digital filters included in this presentation material.
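As an illustrative sketch of the number representation involved (not the paper's MMSE optimization), an integer-scaled coefficient can be converted to CSD digits in {-1, 0, +1} with the standard non-adjacent-form recurrence; the function name and scaling convention here are assumptions:

```python
def to_csd(n):
    """Convert an integer to canonical signed-digit (CSD) form: a list of
    digits in {-1, 0, +1}, least-significant first, with no two adjacent
    non-zero digits (the non-adjacent form)."""
    digits = []
    while n != 0:
        if n % 2:                 # odd: emit +1 or -1 so that the
            d = 2 - (n % 4)       # remaining value becomes divisible by 4
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# A fractional coefficient c would first be scaled to an integer,
# e.g. round(c * 2**frac_bits), before conversion.
```

The sparsity of non-zero digits is what lets CSD filters replace multipliers with a few shift-and-add operations.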
NASA Technical Reports Server (NTRS)
Hampton, R. David; Whorton, Mark S.
2000-01-01
Many microgravity space-science experiments require active vibration isolation to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choice of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations that were noted during the design process are presented, along with explanations based on the existing theory for such problems.
NASA Astrophysics Data System (ADS)
Bardakovskiĭ, S. V.; Blinov, N. A.; Gorbachev, Yu. P.; Tsygankov, V. M.; Koterov, V. N.; Krasovskiĭ, V. M.; Lozinskiĭ, Yu. N.; Sakovets, S. V.; Statsura, A. Yu.; Cheburkin, N. V.; Shchekotov, O. E.
1991-07-01
Theoretical and experimental investigations were made of the dynamics of an atmospheric-pressure pulsed CO2 laser with a self-filtering unstable resonator pumped by pulses of ~40 μs duration. The resonator ensured that the divergence of the output radiation was close to the diffraction limit even when the active medium was so inhomogeneous that the output intensity suffered 100% modulation. The results demonstrated that the output characteristics could be improved by optimization of the resonator parameters.
A superior edge preserving filter with a systematic analysis
NASA Technical Reports Server (NTRS)
Holladay, Kenneth W.; Rickman, Doug
1991-01-01
A new, adaptive, edge preserving filter for use in image processing is presented. It has superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels are accumulated. Rather than simply compare the visual results of processing with this operator to other filters, approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
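A minimal sketch of the aggregation rule described above, assuming 4-connectivity, a grayscale numpy image, and a seed-pixel interface (the function name and interface are hypothetical, not the authors' implementation):

```python
import numpy as np

def contiguous_k_average(img, r, c, k=5):
    """Grow a cluster from seed pixel (r, c): repeatedly add the
    4-connected neighbor whose value is closest to the current cluster
    mean, until k pixels are accumulated; return the cluster mean."""
    cluster = {(r, c)}
    vals = [float(img[r, c])]
    while len(cluster) < k:
        mean = np.mean(vals)
        frontier = set()
        for (i, j) in cluster:          # pixels contiguous to the cluster
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1] \
                        and (ni, nj) not in cluster:
                    frontier.add((ni, nj))
        if not frontier:
            break
        best = min(frontier, key=lambda p: abs(float(img[p]) - mean))
        cluster.add(best)
        vals.append(float(img[best]))
    return float(np.mean(vals))
```

Because growth follows the cluster mean rather than a fixed window, the cluster tends to stay on one side of an edge, which is what makes the filter edge preserving.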
Neumaier, Arnold
Global optimization of mesh quality (D. Eppstein, Meshing Roundtable 2001). Introduction: mesh quality issues, meshing steps; connectivity optimization; Delaunay triangulation, edge insertion; global point
A method for improving time-stepping numerics
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-04-01
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
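The leapfrog/RAW combination described above can be sketched as follows; setting `alpha = 1.0` recovers the classical RA filter, while `alpha` near 0.53 is the value commonly quoted for the RAW filter. The Euler start-up step and the function signature are illustrative assumptions:

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Integrate dx/dt = f(x) with the leapfrog scheme plus the RAW
    filter; alpha = 1.0 recovers the classical Robert-Asselin filter."""
    x_prev = x0
    x_curr = x0 + dt * f(x0)                  # illustrative Euler start-up
    out = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)            # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)   # filter displacement
        x_curr = x_curr + alpha * d               # RAW: filter the middle level...
        x_next = x_next + (alpha - 1.0) * d       # ...and the newest level
        x_prev, x_curr = x_curr, x_next
        out.append(x_curr)
    return np.array(out)
```

Distributing the same displacement `d` across two time levels is the single-line-of-code change referred to in the abstract: it damps the computational mode while conserving the three-level mean that the RA filter perturbs.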
Step by Step: Avoiding Spiritual Bypass in 12-Step Work
ERIC Educational Resources Information Center
Cashwell, Craig S.; Clarke, Philip B.; Graves, Elizabeth G.
2009-01-01
With spirituality as a cornerstone, 12-step groups serve a vital role in the recovery community. It is important for counselors to be mindful, however, of the potential for clients to be in spiritual bypass, which likely will undermine the recovery process.
HEPA Filter Vulnerability Assessment
GUSTAVSON, R.D.
2000-05-11
This assessment of High Efficiency Particulate Air (HEPA) filter vulnerability was requested by the USDOE Office of River Protection (ORP) to satisfy a DOE-HQ directive to evaluate the effect of filter degradation on the facility authorization basis assumptions. Within the scope of this assessment are ventilation system HEPA filters that are classified as Safety-Class (SC) or Safety-Significant (SS) components that perform an accident mitigation function. The objective of the assessment is to verify whether HEPA filters that perform a safety function during an accident are likely to perform as intended to limit release of hazardous or radioactive materials, considering factors that could degrade the filters. Filter degradation factors considered include aging, wetting of filters, exposure to high temperature, exposure to corrosive or reactive chemicals, and exposure to radiation. Screening and evaluation criteria were developed by a site-wide group of HVAC engineers and HEPA filter experts from published empirical data. For River Protection Project (RPP) filters, the only degradation factor that exceeded the screening threshold was for filter aging. Subsequent evaluation of the effect of filter aging on the filter strength was conducted, and the results were compared with required performance to meet the conditions assumed in the RPP Authorization Basis (AB). It was found that the reduction in filter strength due to aging does not affect the filter performance requirements as specified in the AB. A portion of the HEPA filter vulnerability assessment is being conducted by the ORP and is not part of the scope of this study. The ORP is conducting an assessment of the existing policies and programs relating to maintenance, testing, and change-out of HEPA filters used for SC/SS service. This document presents the results of a HEPA filter vulnerability assessment conducted for the River protection project as requested by the DOE Office of River Protection.
Cordierite silicon nitride filters
Sawyer, J.; Buchan, B. ); Duiven, R.; Berger, M. ); Cleveland, J.; Ferri, J. )
1992-02-01
The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased, with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions, and reporting the results. In Phase I, Cordierite Silicon Nitride (CSN) was developed and tested for permeability and strength. Target values for each of these parameters were established early in the program. The values were met by the material development effort in Phase I. The crossflow filter design effort proceeded by developing a macroscopic design based on required surface area and estimated stresses. Then the thermal and pressure stresses were estimated using finite element analysis. In Phase II of this program, the filter manufacturing technique was developed, and the manufactured filters were tested. The technique developed involved press-bonding extruded tiles to form a filter, producing a monolithic filter after sintering. Filters manufactured using this technique were tested at Acurex and at the Westinghouse Science and Technology Center. The filters did not delaminate during testing and operated with high collection efficiency and good cleanability. Further development in the areas of sintering and filter design is recommended.
Statistical filter for image feature extraction.
Schau, H C
1980-07-01
Edge extraction techniques have become important as a preprocessing step in extraction of image features for the purpose of image segmentation, object identification, and bandwidth compression. The use of conventional edge extractors such as Sobel and Laplacian filters results in images that in many cases have a high degree of clutter due to the natural spatial texture of the scene background. To overcome this difficulty, a statistical filter has been developed that enhances local grey level activity around objects while reducing contributions due to background. The statistical filter is employed in a neighborhood modification process where the central pixel is replaced with the third central moment computed from the surrounding neighborhood. Choice of the third central moment is due in part to the fact that it is a function of the scene within the neighborhood rather than the power spectral density (Wiener spectrum) of the neighborhood. Application of the filter requires no prior knowledge, and pixels within the filter window may be chosen in random order due to the statistical nature of the operation. Results of the filter applied to IR images show performance comparable with, and in some cases superior to, the Sobel and Laplacian filters most commonly used for feature and edge extraction. PMID:20221205
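A minimal sketch of the neighborhood modification described above, replacing each pixel with the third central moment of its window (the window size, padding mode, and function name are assumptions for illustration):

```python
import numpy as np

def third_moment_filter(img, size=3):
    """Replace each pixel with the third central moment of its
    size-by-size neighborhood (reflect padding at the borders)."""
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + size, j:j + size]
            out[i, j] = np.mean((w - w.mean()) ** 3)   # third central moment
    return out
```

A textured but symmetric background gives a near-zero third moment, while the skewed grey-level distribution around an object boundary does not, which is the clutter-suppression property the abstract describes.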
Stack filters and the mean absolute error criterion
E. J. Coyle; J.-H. Lin
1988-01-01
A method to determine the stack filter which minimizes the mean absolute error between its output and a desired signal, given noisy observations of this desired signal, is presented. Specifically, an optimal window-width-b stack filter can be determined with a linear program with O(b·2^b) variables. This algorithm is efficient since the number of different inputs to a window-width-b filter is 2^b.
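The threshold-decomposition structure underlying stack filters can be sketched as follows: the integer signal is sliced into binary signals, a positive Boolean function (PBF) is applied to each windowed slice, and the binary outputs are summed. With a 3-input majority function this reproduces the sliding median, a known stack filter. The code is an illustration of the filter class, not the paper's linear-programming design:

```python
import numpy as np

def stack_filter(x, pbf, width=3):
    """Apply a stack filter by threshold decomposition: slice the integer
    signal into binary signals, apply the positive Boolean function (pbf)
    to each windowed slice, and sum ('stack') the binary outputs."""
    x = np.asarray(x, dtype=int)
    pad = width // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x)
    for t in range(1, x.max() + 1):           # one binary slice per level
        b = (xp >= t).astype(int)
        for i in range(len(x)):
            out[i] += pbf(b[i:i + width])
    return out

majority = lambda w: int(w.sum() >= 2)        # PBF realizing the 3-point median
```

The O(b·2^b) count in the abstract reflects that the PBF has 2^b possible binary window inputs, each of which becomes a decision variable in the MAE-optimal design.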
Probabilistic Robotics Discrete Filters and Particle Filters
Kosecka, Jana
Probabilistic Robotics: Discrete Filters and Particle Filters. Some slides adopted from Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras, and the Probabilistic Robotics book. To monitor whether the robot is de-localized or not, one can consider the likelihood
Filter type gas sampler with filter consolidation
Miley, H.S.; Thompson, R.C.; Hubbard, C.W.; Perkins, R.W.
1997-03-25
Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant. 5 figs.
POLYNOMIAL-BASED DIGITAL FILTERS AS PROTOTYPE FILTERS IN DFT MODULATED FILTER BANKS
Göckler, Heinz G.
Djordje Babic. We investigate the possibility of using polynomial-based digital FIR filters as prototype filters in DFT and cosine modulated filter banks. In order to apply the FIR filter with piecewise polynomial response
ERIC Educational Resources Information Center
Herman, Susan
1995-01-01
Aerobics instructors can use step aerobics to motivate students. One creative method is to add the step to the circuit workout. By incorporating the step, aerobic instructors can accommodate various fitness levels. The article explains necessary equipment and procedures, describing sample stations for cardiorespiratory fitness, muscular strength,…
Detection of Steps in Single Molecule Data
Aggarwal, Tanuj; Materassi, Donatello; Davison, Robert; Hays, Thomas; Salapaka, Murti
2013-01-01
Over the past few decades, single molecule investigations employing optical tweezers, AFM, and TIRF microscopy have revealed that molecular behaviors are typically characterized by discrete steps or events that follow changes in protein conformation. These events, which manifest as steps or jumps, are short-lived transitions between otherwise more stable molecular states. A major limiting factor in determining the size and timing of the steps is the noise introduced by the measurement system. To address this impediment to the analysis of single molecule behaviors, step detection algorithms incorporate large records of data and provide objective analysis. However, existing algorithms are mostly based on heuristics that are not reliable and lack objectivity. Most of these step detection methods require the user to supply parameters that inform the search for steps. They work well only when the signal to noise ratio (SNR) is high and the stepping speed is low. In this report, we have developed a novel step detection method that performs an objective analysis on the data without input parameters, based only on the noise statistics. The noise levels and characteristics can be estimated from the data, providing reliable results for much smaller SNR and higher stepping speeds. An iterative learning process drives the optimization of step-size distributions for data that has a unimodal step-size distribution, and produces extremely low false positive outcomes and high accuracy in finding true steps. Our novel methodology also uniquely incorporates compensation for the smoothing effects of probe dynamics. A mechanical measurement probe typically takes a finite time to respond to step changes, and when steps occur faster than the probe response time, the sharp step transitions are smoothed out and can obscure the step events. To address probe dynamics, we accept a model for the dynamic behavior of the probe and invert it to reveal the steps.
No other existing method addresses the impact of probe dynamics on step detection. Importantly, we have also developed a comprehensive set of tools to evaluate various existing step detection techniques. We quantify the performance and limitations of various step detection methods using novel evaluation scales. We show that under these scales, our method provides much better overall performance. The method is validated on different simulated test cases, as well as experimental data. PMID:23956798
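A much-simplified sketch of noise-statistics-driven step detection (not the authors' algorithm): estimate sigma robustly from first differences, then flag locations where adjacent window means differ by more than k standard errors. The window size, threshold, and function name are illustrative assumptions:

```python
import numpy as np

def detect_steps(y, half=10, k=4.0):
    """Flag indices where the mean level shifts by more than k standard
    errors. Sigma is estimated from the data itself via the robust median
    absolute first difference, so no user-supplied noise level is needed."""
    y = np.asarray(y, dtype=float)
    sigma = np.median(np.abs(np.diff(y))) / (0.6745 * np.sqrt(2.0))
    thresh = k * sigma * np.sqrt(2.0 / half)  # std. error of the mean gap
    steps = []
    for i in range(half, len(y) - half):
        gap = y[i:i + half].mean() - y[i - half:i].mean()
        if abs(gap) > thresh:
            steps.append(i)
    return steps
```

Estimating sigma from first differences keeps the estimate insensitive to the steps themselves, which touch only a few difference samples; this is the sense in which the threshold is set by the noise statistics rather than by the user.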
ERIC Educational Resources Information Center
Burt, David
1997-01-01
Presents responses to 10 common arguments against the use of Internet filters in libraries. Highlights include keyword blocking; selection of materials; liability of libraries using filters; users' judgments; Constitutional issues, including First Amendment rights; and censorship. (LRW)
HEPA filter monitoring program
NASA Astrophysics Data System (ADS)
Kirchner, K. N.; Johnson, C. M.; Aiken, W. F.; Lucerna, J. J.; Barnett, R. L.; Jensen, R. T.
1986-07-01
The testing and replacement of HEPA filters, widely used in the nuclear industry to purify process air, are costly and labor-intensive. Current methods of testing filter performance, such as differential pressure measurement and scanning air monitoring, allow determination of overall filter performance but preclude detection of incipient filter failure such as small holes in the filters. Using current technology, a continual in-situ monitoring system was designed which provides three major improvements over current methods of filter testing and replacement. The improvements include: cost savings by reducing the number of intact filters which are currently being replaced unnecessarily; more accurate and quantitative measurement of filter performance; and reduced personnel exposure to a radioactive environment by automatically performing most testing operations.
SOLUTION OF A GROUNDWATER CONTROL PROBLEM WITH IMPLICIT FILTERING
A. Battermann, J. M. ... an industrial site. Key words: implicit filtering, groundwater flow and transport, optimal control, parallel ... on a groundwater temperature control problem. This problem has some of the important difficulties
Nonlinear bayesian filtering with applications to estimation and navigation
Lee, Deok-Jin
2005-08-29
In principle, general approaches to optimal nonlinear filtering can be described in a unified way from the recursive Bayesian approach. The central idea of this recursive Bayesian estimation is to determine the probability ...
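The recursive Bayesian idea can be illustrated with a bootstrap particle filter for a scalar random-walk state observed in Gaussian noise; the model, noise levels, and function names are assumptions chosen for illustration:

```python
import numpy as np

def bootstrap_pf(y, n=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for the toy model
    x_t = x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    Returns the posterior-mean estimate of x_t at each step."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n)                 # particles from the prior
    means = []
    for yt in y:
        parts = parts + rng.normal(0.0, q, n)       # propagate through the model
        w = np.exp(-0.5 * ((yt - parts) / r) ** 2)  # likelihood weights
        w /= w.sum()
        means.append(float(np.dot(w, parts)))       # weighted posterior mean
        parts = parts[rng.choice(n, n, p=w)]        # multinomial resampling
    return means
```

Propagation, weighting, and resampling are exactly the predict and update stages of the recursive Bayesian estimator, approximated with a weighted particle cloud instead of a closed-form density.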
NASA Technical Reports Server (NTRS)
Nagle, H. T., Jr.
1972-01-01
A three-part survey is made of the state of the art in digital filtering. Part one presents background material, including sampled-data transformations and the discrete Fourier transform. Part two, digital filter theory, gives in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures, and computer-aided design. Part three presents hardware mechanization techniques. Implementations by general-purpose, mini-, and special-purpose computers are presented.
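As a small concrete example of the filter structures surveyed, a direct-form FIR filter computes each output sample as a weighted sum of the current and past inputs (a generic sketch, not tied to any particular implementation in the survey):

```python
def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n - k],
    with the signal assumed zero before n = 0."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]    # tap k weights the k-samples-old input
        y.append(acc)
    return y
```

Quantizing the tap weights `h` is where the nonlinear coefficient-quantization errors discussed in part two of the survey enter.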
A finite dimensional filter with exponential conditional density
Brigo, Damiano
2009-01-01
In this paper we consider the continuous-time nonlinear filtering problem, which has an infinite-dimensional solution in general, as proved by Chaleyat-Maurel and Michel. There are few examples of nonlinear systems for which the optimal filter is finite dimensional, in particular Kalman's, Benes', and Daum's filters. In the present paper, we construct new classes of scalar nonlinear filtering problems admitting finite-dimensional filters. We consider a given (nonlinear) diffusion coefficient for the state equation, a given (nonlinear) observation function, and a given finite-dimensional exponential family of probability densities. We construct a drift for the state equation such that the resulting nonlinear filtering problem admits a finite-dimensional filter evolving in the prescribed exponential family augmented by the observation function and its square.
Robust ensemble filtering and its relation to covariance inflation in the ensemble Kalman filter
Xiaodong Luo; Ibrahim Hoteit
2011-07-31
We propose a robust ensemble filtering scheme based on the $H_\infty$ filtering theory. The optimal $H_\infty$ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the $H_\infty$ filter is more robust than the Kalman filter, in the sense that the estimation error in the $H_\infty$ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the $H_\infty$ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore we introduce a variant that solves some time-local constraints instead, and hence we call it the time-local $H_\infty$ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), we also propose the concept of the ensemble time-local $H_\infty$ filter (EnTLHF). We outline the general form of the EnTLHF, and discuss some of its special cases. In particular, we show that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. We use some numerical examples to assess the relative robustness of the TLHF/EnTLHF in comparison with the corresponding KF/EnKF method.
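The covariance inflation discussed above can be sketched in ensemble form: perturbations about the ensemble mean are rescaled by a factor lambda, which multiplies the sample covariance by lambda squared while leaving the mean unchanged. This is a generic multiplicative-inflation sketch, not the EnTLHF construction:

```python
import numpy as np

def inflate(ensemble, lam=1.1):
    """Multiplicative covariance inflation: rescale perturbations about
    the ensemble mean by lam, which multiplies the sample covariance by
    lam**2 while leaving the ensemble mean unchanged."""
    mean = ensemble.mean(axis=0)          # ensemble members are rows
    return mean + lam * (ensemble - mean)
```

In EnKF practice this counteracts the systematic underestimation of the analysis covariance; the paper's point is that such inflation can be reinterpreted as a particular robust $H_\infty$-type filter.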
Step-Optimized Particle Swarm Optimization
Thomas Schoene
Ludwig, Simone
Recent developments of Particle Swarm Optimization (PSO) have successfully trended towards Adaptive PSO (APSO), which tunes its parameters automatically and effectively. In classical PSO, all parameters remain constant for the entire swarm during the iterations. We propose a Step-Optimized PSO (SOPSO) algorithm in which every particle has its own velocity weights and an inner PSO iteration
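For reference, classical PSO with constant parameters, the baseline that adaptive variants such as APSO and SOPSO modify per particle, can be sketched as follows (parameter values are conventional defaults, not the paper's):

```python
import numpy as np

def pso(fitness, dim, pop=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Classical (non-adaptive) PSO maximizing `fitness`: every particle
    shares the same constant inertia w and acceleration weights c1, c2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (pop, dim))
    v = np.zeros((pop, dim))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, pop, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f                       # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)].copy()   # update global best
    return gbest
```

Adaptive variants replace the scalar `w`, `c1`, `c2` with per-particle (and possibly per-iteration) values, which is the constant-parameter limitation the abstract points at.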
Bin Zeng; Moncef Gabbouj; Yrjö Neuvo
1991-01-01
The authors present a unified method for designing optimal rank order filters (ROFs), stack filters, and generalized stack filters (GSFs) under the mean absolute error (MAE) criterion. The method is based on classical Bayes minimum-cost decision. Both the a priori and the a posteriori approaches are considered. It is shown that designing the minimum MAE stack filters and GSFs is
Benda, Jan
Fundamental filter properties of spiking neurons don't constrain detectability of communication signals. A low-pass filter due to the interspike intervals limits information transmission at high frequencies (Knight, 1972). This low-pass filter might constrain the high-pass filter in optimally adapting its cutoff frequency. First
Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.
1991-01-01
Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters, ~0.2 μm nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al³⁺) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.
Novel Backup Filter Device for Candle Filters
Bishop, B.; Goldsmith, R.; Dunham, G.; Henderson, A.
2002-09-18
The currently preferred means of particulate removal from process or combustion gas generated by advanced coal-based power production processes is filtration with candle filters. However, candle filters have not shown the requisite reliability to be commercially viable for hot gas cleanup for either integrated gasifier combined cycle (IGCC) or pressurized fluid bed combustion (PFBC) processes. Even a single candle failure can lead to unacceptable ash breakthrough, which can result in (a) damage to highly sensitive and expensive downstream equipment, (b) unacceptably low system on-stream factor, and (c) unplanned outages. The U.S. Department of Energy (DOE) has recognized the need to have fail-safe devices installed within or downstream from candle filters. In addition to CeraMem, DOE has contracted with Siemens-Westinghouse, the Energy & Environmental Research Center (EERC) at the University of North Dakota, and the Southern Research Institute (SRI) to develop novel fail-safe devices. Siemens-Westinghouse is evaluating honeycomb-based filter devices on the clean side of the candle filter that can operate up to 870°C. The EERC is developing a highly porous ceramic disk with a sticky yet temperature-stable coating that will trap dust in the event of filter failure. SRI is developing the Full-Flow Mechanical Safeguard Device that provides a positive seal for the candle filter. Operation of the SRI device is triggered by the higher-than-normal gas flow from a broken candle. The CeraMem approach is similar to that of Siemens-Westinghouse and involves the development of honeycomb-based filters that operate on the clean side of a candle filter. The overall objective of this project is to fabricate and test silicon carbide-based honeycomb fail-safe filters for protection of downstream equipment in advanced coal conversion processes.
The fail-safe filter, installed directly downstream of a candle filter, should have the capability for stopping essentially all particulate bypassing a broken or leaking candle while having a low enough pressure drop to allow the candle to be backpulse-regenerated. Forward-flow pressure drop should increase by no more than 20% because of incorporation of the fail-safe filter.
Genetic algorithm used in interference filter's design
NASA Astrophysics Data System (ADS)
Li, Jinsong; Fang, Ying; Gao, Xiumin
2009-11-01
An approach for designing interference filters by using a genetic algorithm (hereafter referred to as GA) is presented. We use the GA to design band-stop filters and narrow-band filters. The interference filters designed here achieve the optimal reflectivity or transmittance. The evaluation function used in our genetic algorithm is different from those used before. Using the characteristic matrix to calculate the photonic band gap of a one-dimensional photonic crystal is similar to calculating the electronic structure of a doped material. If the evaluation is sensitive to deviations of the photonic crystal structure, the genetic algorithm approach is effective. A summary and explanations of some unresolved issues are given at the end of this paper.
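A generic genetic-algorithm loop of the kind used for such designs, with truncation selection, one-point crossover, and uniform mutation, can be sketched as follows; the toy quadratic fitness stands in for an optical evaluation function (layer thicknesses against a target spectrum), and all names and parameters are illustrative assumptions:

```python
import numpy as np

def ga_optimize(fitness, n_genes, pop=40, gens=60, pm=0.1, seed=1):
    """Generic GA maximizing `fitness` over genes in [0, 1):
    truncation selection, one-point crossover, uniform mutation."""
    rng = np.random.default_rng(seed)
    P = rng.random((pop, n_genes))
    half = pop // 2
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        P = P[np.argsort(f)[::-1]]                 # best individuals first
        elite = P[:half]                           # truncation selection
        parents = elite[rng.integers(0, half, (half, 2))]
        cut = rng.integers(1, n_genes, half)       # one-point crossover
        kids = np.where(np.arange(n_genes) < cut[:, None],
                        parents[:, 0], parents[:, 1])
        mask = rng.random(kids.shape) < pm         # uniform mutation
        kids = np.where(mask, rng.random(kids.shape), kids)
        P = np.vstack([elite, kids])
    f = np.array([fitness(ind) for ind in P])
    return P[np.argmax(f)]
```

In a filter-design setting, each gene would encode a layer thickness and the fitness would score the spectrum computed via the characteristic matrix, which is where the evaluation-function sensitivity discussed in the abstract matters.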
M A Mitchell; W Bergman; J Haslam; E P Brown; S Sawyer; R Beaulieu; P Althouse; A Meike
2012-01-01
Potential benefits of ceramic filters in nuclear facilities: (1) Short-term benefit for DOE, NRC, and industry: (a) the CalPoly HTTU provides unique testing capability to answer questions for DOE, including high-temperature testing of materials, components, and filters; (b) several DNFSB correspondences and presentations by DNFSB members have highlighted the need for HEPA filter R&D (DNFSB Recommendation