
1

Optimal stochastic fault detection filter

A fault detection and identification algorithm, called the optimal stochastic fault detection filter, is developed. The objective of the filter is to detect a single fault, called the target fault, and block other faults, called the nuisance faults, in the presence of process and sensor noises. The filter is derived by maximizing the transmission from the target fault to the…

Robert H. Chen; D. Lewis Mingori; Jason L. Speyer

2003-01-01

2

Optimal stochastic fault detection filter

Properties of the optimal stochastic fault detection filter for fault detection and identification are determined. The objective of the filter is to monitor certain faults, called target faults, and block other faults, called nuisance faults. This filter is derived by keeping the ratio of the transmission from the nuisance faults to the transmission from the target faults small. It is…

Robert H. Chen; Jason L. Speyer

1999-01-01

3

OPTIMIZATION OF ADVANCED FILTER SYSTEMS

Reliable, maintainable and cost-effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies.
A plan for the Option I, bench-scale test program has also been developed based on the issues identified. The two advanced barrier filter systems have been found to have the potential to be significantly more reliable and less expensive to operate than standard ceramic candle filter system designs. Their key development requirements are the assessment of the design and manufacturing feasibility of the ceramic filter elements, and the small-scale demonstration of their conceptual reliability and availability merits.

R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

1998-04-30

4

Design of Optimal Digital Filters

NASA Astrophysics Data System (ADS)

Four methods for designing digital filters optimal in the Chebyshev sense are developed. The properties of these filters are investigated and compared. An analytic method for designing narrow-band FIR filters using Zolotarev polynomials, which are extensions of Chebyshev polynomials, is proposed. Bandpass and bandstop narrow-band filters as well as lowpass and highpass filters can be designed by this method. The design procedure, related formulae and examples are presented. An improved method of designing optimal minimum phase FIR filters by directly finding zeros is proposed. The zeros off the unit circle are found by an efficient special purpose root-finding algorithm without deflation. The proposed algorithm utilizes the passband minimum ripple frequencies to establish the initial points, and employs a modified Newton's iteration to find the accurate initial points for a standard Newton's iteration. The proposed algorithm can be used to design very long filters (L = 325) with very high stopband attenuations. The design of FIR digital filters in the complex domain is investigated. The complex approximation problem is converted into a near equivalent real approximation problem. A standard linear programming algorithm is used to solve the real approximation problem. Additional constraints are introduced which allow weighting of the phase and/or group delay of the approximation. Digital filters are designed which have nearly constant group delay in the passbands. The desired constant group delay which gives the minimum Chebyshev error is found to be smaller than that of a linear phase filter of the same length. These filters, in addition to having a smaller, approximately constant group delay, have better magnitude characteristics than exactly linear phase filters with the same length. The filters have nearly equiripple magnitude and group delay. 
The problem of IIR digital filter design in the complex domain is formulated such that the existence of a best approximation is guaranteed. An efficient and numerically stable algorithm for the design is proposed. Methods to establish a good initial point are investigated. Digital filters are designed which have nearly constant group delay in the passbands. The magnitudes of the filter poles near the passband edge are larger than those of poles far from the passband edge. A delay overshoot may occur in the transition band (don't care region), and it can be reduced by decreasing the maximum allowed pole magnitude of the design problem at the expense of increasing the approximation error.

Chen, Xiangkun

5

Adaptive lattice noise canceller and optimal step size

The lattice-structured adaptive noise canceller has been studied. A concise formula relating its misadjustment to the step size and the number of stages has been derived theoretically and checked experimentally, showing that the misadjustment increases exponentially with the number of stages. An optimized step size has been applied to the multi-stage lattice filter, with a drastic reduction in convergence…

Heping Ding; Chongzhi Yu

1986-01-01

6

Step-size control for acoustic echo cancellation filters - an overview

In this paper we present an overview of several approaches for controlling the step size of adaptive echo cancellation filters in hands-free telephones. First, an optimal step size is derived. For the determination of this step size, the power of a non-measurable signal has to be estimated. Detection and estimation methods for the determination of this power and for the…

Andreas Mäder; Henning Puder; Gerhard Uwe Schmidt

2000-01-01
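In practice, an input-power-normalized step size of the kind derived in such work is what the NLMS algorithm implements. A minimal sketch of a power-normalized adaptive echo canceller (an illustrative baseline with a toy echo path, not the authors' derivation):

```python
import numpy as np

def nlms_echo_canceller(x, d, n_taps=64, mu=0.5, eps=1e-8):
    """Normalized-LMS adaptive filter: x is the far-end (loudspeaker)
    signal, d is the microphone signal containing the echo."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # newest sample first
        y = w @ u                              # echo estimate
        e[n] = d[n] - y                        # residual echo
        w += (mu / (eps + u @ u)) * e[n] * u   # power-normalized step
    return e, w

# Toy check: the echo path is a short FIR filter of the far-end signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, h)[:len(x)]
e, w = nlms_echo_canceller(x, d, n_taps=8)
```

Dividing the step by the instantaneous regressor power makes the effective step size track the signal level, which is the simplest form of the step-size control surveyed above.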

7

Distributed optimal fusion prior filter for systems with multiple packet dropouts

This paper is concerned with the optimal prior filtering problem for linear discrete-time stochastic systems with multiple packet dropouts and correlated noises. Firstly, based on a recent packet dropout model, a new unbiased optimal prior filter is developed in the linear minimum variance sense for a single sensor system. The prior filter is reduced to the standard Kalman one-step predictor…

Ma Jing; Sun Shuli

2010-01-01
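The abstract notes that the prior filter reduces to the standard Kalman one-step predictor in the dropout-free case. A minimal sketch of that baseline predictor (the scalar model matrices are illustrative, not the paper's system):

```python
import numpy as np

def kalman_one_step_predictor(A, C, Q, R, x0, P0, ys):
    """Standard Kalman one-step predictor: after seeing y_0..y_k it
    returns x_hat(k+1|k), the optimal prior (predicted) state estimate."""
    x, P = x0.astype(float), P0.astype(float)
    preds = []
    for y in ys:
        S = C @ P @ C.T + R                    # innovation covariance
        K = A @ P @ C.T @ np.linalg.inv(S)     # predictor gain
        x = A @ x + K @ (y - C @ x)            # x(k+1|k)
        P = A @ P @ A.T + Q - K @ S @ K.T      # prior covariance P(k+1|k)
        preds.append(x.copy())
    return preds

# Illustrative scalar model (not the paper's system)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
```

The packet-dropout extension replaces the received measurement with the dropout model's expectation, but the recursion above is the limiting case it must recover.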

8

Optimal stochastic multiple-fault detection filter

A class of robust fault detection filters is generalized from detecting single fault to multiple faults. This generalization is called the optimal stochastic multiple-fault detection filter since in the formulation, the unknown fault amplitudes are modeled as white noise. The residual space of the filter is divided into several subspaces and each subspace is sensitive to only one fault (target…

Robert H. Chen; Jason L. Speyer

1999-01-01

9

OPTIMIZATION OF ADVANCED FILTER SYSTEMS

Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 °C (1000 °F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.

R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

2002-06-30

10

Optimal approximation algorithms for digital filter design

NASA Astrophysics Data System (ADS)

Several new algorithms are presented for the optimal approximation and design of various classes of digital filters. An iterative algorithm is developed for the efficient design of unconstrained and constrained infinite impulse response (IIR) digital filters. Both in the unconstrained and constrained cases, the numerator and denominator of the filter transfer function are designed iteratively by recourse to the Remez algorithm and to appropriate design parameters and criteria, at each iteration. This makes it possible for the algorithm to be implemented by means of a short main program which uses (at each iteration) the linear phase FIR filter design algorithm of McClellan et al. as a subroutine. The approach taken also permits the filter to be designed with a desired ripple ratio. Also, the algorithm determines automatically the minimum passband ripple corresponding to the prescribed orders and band edges of the filter. The filter is designed directly without guessing the passband ripple or stopband ripple.

Liang, J. K.
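The linear-phase equiripple design that the algorithm above uses as a building block (the Remez-exchange method of McClellan et al.) is available in SciPy; a small lowpass example, assuming SciPy is installed:

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 1000.0
# 73-tap linear-phase lowpass: passband 0-100 Hz, stopband 150-500 Hz
taps = remez(73, [0, 100, 150, 0.5 * fs], [1, 0], fs=fs)

# Verify the equiripple design against its band specifications
w, h = freqz(taps, worN=2048, fs=fs)
passband_ripple = np.max(np.abs(np.abs(h[w <= 100]) - 1))
stopband_gain = np.max(np.abs(h[w >= 150]))
```

For prescribed orders and band edges the exchange algorithm returns the unique Chebyshev-optimal filter, which is why it can serve as the inner subroutine of the iterative IIR design described above.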

11

Steps Toward Optimal Competitive Scheduling

NASA Technical Reports Server (NTRS)

This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies: missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user; this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. 
This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.

Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

2006-01-01

12

Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults presenting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

2014-01-01
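The conventional 30 Hz low-pass baseline the authors compare DAFOP against can be sketched as a zero-phase Butterworth filter; this is a generic EEG preprocessing step, not the DAFOP method itself, and the sampling rate and test signal are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_30hz(eeg, fs, order=4):
    """Zero-phase 30 Hz Butterworth low-pass applied channel-wise.
    eeg: array of shape (n_channels, n_samples)."""
    b, a = butter(order, 30.0, btype="low", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

# Toy check: a 10 Hz "cerebral" rhythm passes, a 60 Hz "muscle" tone is removed
fs = 250.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
out = lowpass_30hz(sig[None, :], fs)[0]
```

The weakness of this baseline, which motivates methods like DAFOP, is that it also removes genuine cerebral activity above 30 Hz rather than separating sources.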

13

Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults presenting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

2014-01-01

14

Optimal multiobjective design of digital filters using spiral optimization technique.

The multiobjective design of digital filters using the spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relatively fast convergence and ease of implementation. The objectives of the filter design include matching a desired frequency response while having minimum linear phase, hence reducing the time response. The results demonstrate that the proposed problem-solving approach, blended with the use of the spiral optimization technique, produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108

Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

2013-01-01
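The spiral dynamics idea can be sketched in a few lines: candidate points rotate about, and contract toward, the best point found so far. This is a generic 2-D version on a toy objective, under assumed parameter values, not the authors' filter-design formulation:

```python
import numpy as np

def spiral_optimize(f, n_points=30, n_iter=200, r=0.95, theta=np.pi / 4, seed=0):
    """2-D spiral optimization: each candidate point spirals in toward
    the best point found so far (rotation by theta, contraction by r)."""
    rng = np.random.default_rng(seed)
    R = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])  # rotate + contract
    X = rng.uniform(-5, 5, size=(n_points, 2))
    best = min(X, key=f)
    for _ in range(n_iter):
        X = best + (X - best) @ R.T        # spiral each point about the best
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand
    return best

# Toy objective: shifted sphere with minimum at (1, -2)
sphere = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
opt = spiral_optimize(sphere)
```

Because every point sweeps all directions around the incumbent while shrinking its radius, the search balances exploration and exploitation with only two tuning parameters, which is the simplicity the abstract highlights.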

15

Optimal Approximation Algorithms for Digital Filter Design.

NASA Astrophysics Data System (ADS)

Several new algorithms are presented for the optimal approximation and design of various classes of digital filters. An iterative algorithm is developed for the efficient design of unconstrained and constrained infinite impulse response (IIR) digital filters. Both in the unconstrained and constrained cases, the numerator and denominator of the filter transfer function are designed iteratively by recourse to the Remez algorithm and to appropriate design parameters and criteria, at each iteration. This makes it possible for the algorithm to be implemented by means of a short main program which uses (at each iteration) the linear phase FIR filter design algorithm of McClellan et al. as a subroutine. The approach taken also permits the filter to be designed with a desired ripple ratio. Also, the algorithm determines automatically the minimum passband ripple corresponding to the prescribed orders and band edges of the filter. The filter is designed directly without guessing the passband ripple or stopband ripple. Another algorithm, based on similar principles, is developed for the design of a nonlinear phase finite impulse response (FIR) filter, whose transfer function optimally approximates a desired magnitude response, there being no constraints imposed on the phase response. A similar algorithm is presented for the design of two new classes of FIR digital filters, one linear phase and the other nonlinear phase. A filter of either class has significantly reduced number of multiplications compared to the one obtained by its conventional counterpart, with respect to a given frequency response. In the case of linear phase, by introducing the new class of digital filters into the design of multistage decimators and interpolators for narrow-band filter implementation, it is found that an efficient narrow-band filter requiring considerably lower multiplication rate than the conventional linear phase FIR design can be obtained. 
The amount of data storage required by the new class of nonlinear phase FIR filters is significantly less than its linear phase counterpart. Finally, the design of a (finite-impulse-response) FIR digital filter with some of the coefficients constrained to zero is formulated as a linear programming (LP) problem and the LP technique is then used to design this class of constrained FIR digital filters. . . . (Author's abstract exceeds stipulated maximum length. Discontinued here with permission of author.) UMI.

Liang, Junn-Kuen

16

Optimal design of active EMC filters

NASA Astrophysics Data System (ADS)

A recent trend in the automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

Chand, B.; Kut, T.; Dickmann, S.

2013-07-01

17

MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

NASA Technical Reports Server (NTRS)

The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.

Barton, R. S.

1994-01-01

18

A triple-band microstrip tri-section bandpass filter using stepped impedance resonators (SIRs) is designed, simulated, built, and measured using a hairpin structure. The complete design procedure is given from the analytical stage to the implementation stage in detail. The coupling between SIRs is investigated for the first time in detail by studying its effect on the filter characteristics, including bandwidth and attenuation, to optimize the filter performance. The simulation of the filter is performed using a method-of-moments based 2.5D planar electromagnetic simulator. The filter is then implemented on RO4003 material and measured. The simulated and measured results are compared and found to be very close. The effect of coupling on the filter performance is then investigated using the electromagnetic simulator. It is shown that the coupling effect between SIRs can be used as a design knob to obtain a bandpass filter with a better performance for the desired frequency band using the proposed filter topology. The results of this work can be used in wireless communication systems where multiple frequency bands are needed.

Eroglu, Abdullah [ORNL]

2010-01-01

19

The Asymptotics of Optimal (Equiripple) Filters

… has been a secret for more than twenty years. This paper aims to solve this mystery … to replace Kaiser's empirical formula. Kaiser also discovered a nearly optimal family of filters … for this family. The constant in the denominator becomes slightly smaller, which increases N. This family …

Strang, Gilbert

20

Program Computes SLM Inputs To Implement Optimal Filters

NASA Technical Reports Server (NTRS)

Minimum Euclidean Distance Optimal Filter (MEDOF) program generates filters for use in optical correlators. Analytically optimizes filters on arbitrary spatial light modulators (SLMs) of such types as coupled, binary, fully complex, and fractional-2pi-phase. Written in C language.

Barton, R. Shane; Juday, Richard D.; Alvarez, Jennifer L.

1995-01-01

21

Design of Digital Filters and Filter Banks by Optimization: A State of the Art Review

… as described below. For the sake of simplicity, we consider the problem of designing a linear-phase, lowpass…

Lu, Wu-Sheng

22

Differential evolution particle swarm optimization for digital filter design

In this paper, swarm and evolutionary algorithms have been applied for the design of digital filters. Particle swarm optimization (PSO) and differential evolution particle swarm optimization (DEPSO) have been used here for the design of linear phase finite impulse response (FIR) filters. Two different fitness functions have been studied and experimented, each having its own significance. The first study considers…

Bipul Luitel; Ganesh K. Venayagamoorthy

2008-01-01

23

NASA Astrophysics Data System (ADS)

In this paper, a novel hybrid algorithm featuring a simple index modulation profile with fast-converging optimization is proposed for the design of multichannel fiber Bragg grating (FBG) filters for dense wavelength-division-multiplexing (DWDM) systems. The approach is based on utilizing one of the existing FBG design approaches, which may suffer from spectral distortion, as the first step, then performing Lagrange multiplier optimization (LMO) for optimized correction of the spectral distortion. In our design examples, the superposition method is employed as the first design step for its merits of easy fabrication, and the discrete layer-peeling (DLP) algorithm is used to rapidly obtain the initial index modulation profiles for the superposition method. On account of the initially near-optimum index modulation profiles from the first step, the LMO algorithm shows fast convergence to the target reflection spectra in the second step, and the design outcome still retains the advantage of easy fabrication.

Hsin, Chen-Wei

2011-07-01

24

A hybrid method for optimization of the adaptive Goldstein filter

NASA Astrophysics Data System (ADS)

The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also not clear. As a result, the filter under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance compared to existing approaches.

Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

2014-12-01
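The Goldstein-style spectral weighting that these adaptive variants tune can be sketched in 1-D; the real filter operates on overlapping 2-D patches of the complex interferogram, and the max-normalization below is an illustrative simplification.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def goldstein_1d(patch, alpha):
    # weight each spectral bin by its normalized magnitude raised to alpha;
    # alpha = 0 leaves the patch untouched, larger alpha filters more strongly
    X = dft(patch)
    m = max(abs(v) for v in X) or 1.0
    return idft([v * (abs(v) / m) ** alpha for v in X])
```

Since the weights never exceed one, the filtered patch can only lose spectral energy; alpha controls how aggressively weak (noisy) bins are suppressed.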

25

Probabilistic-based approach to optimal filtering

The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present a regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent regimelike behaving system. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and demands that the signal corresponds to minimizing the noise probability distribution for any given value, i.e., on isosurfaces, of the data probability distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, compared to the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical and variants of the Lorenz system. PMID:11088139

Hannachi

2000-04-01
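In the Gaussian setting the abstract starts from, the classical signal-to-noise-maximizing filter is w = C_n^{-1} s. A 2x2 toy version (my illustration, not the paper's code) shows why it beats the naive matched filter under colored noise.

```python
def snr_optimal_weights(Cn, s):
    # w = Cn^{-1} s maximizes SNR(w) = (w.s)^2 / (w^T Cn w); 2x2 inverse by hand
    det = Cn[0][0] * Cn[1][1] - Cn[0][1] * Cn[1][0]
    return [(Cn[1][1] * s[0] - Cn[0][1] * s[1]) / det,
            (Cn[0][0] * s[1] - Cn[1][0] * s[0]) / det]

def snr(w, Cn, s):
    # signal-to-noise ratio of a linear filter w against signal s and noise Cn
    num = (w[0] * s[0] + w[1] * s[1]) ** 2
    den = (w[0] * (Cn[0][0] * w[0] + Cn[0][1] * w[1])
           + w[1] * (Cn[1][0] * w[0] + Cn[1][1] * w[1]))
    return num / den
```

For white noise (Cn = I) the optimal weights reduce to the signal itself; the paper's contribution is replacing this single Gaussian picture with local, regime-aware estimates.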

26

Initial steps of inactivation at the K+ channel selectivity filter

K+ efflux through K+ channels can be controlled by C-type inactivation, which is thought to arise from a conformational change near the channel’s selectivity filter. Inactivation is modulated by ion binding near the selectivity filter; however, the molecular forces that initiate inactivation remain unclear. We probe these driving forces by electrophysiology and molecular simulation of MthK, a prototypical K+ channel. Either Mg2+ or Ca2+ can reduce K+ efflux through MthK channels. However, Ca2+, but not Mg2+, can enhance entry to the inactivated state. Molecular simulations illustrate that, in the MthK pore, Ca2+ ions can partially dehydrate, enabling selective accessibility of Ca2+ to a site at the entry to the selectivity filter. Ca2+ binding at the site interacts with K+ ions in the selectivity filter, facilitating a conformational change within the filter and subsequent inactivation. These results support an ionic mechanism that precedes changes in channel conformation to initiate inactivation. PMID:24733889

Thomson, Andrew S.; Heer, Florian T.; Smith, Frank J.; Hendron, Eunan; Bernèche, Simon; Rothberg, Brad S.

2014-01-01

27

Numerical Methods for Globally Optimal Adaptive IIR Filtering

This paper explores the potential for (i) developing globally optimal adaptive IIR filtering algorithms using numerical global optimization methods and (ii) proving absolute convergence of existing algorithms using analytical results available for these global optimization methods. The primary objective of this work is to overcome the performance losses incurred due to convergence to local minima or to suboptimal equation error

Virginia L. Stonick; S. T. Alexander

1990-01-01

28

The design of different types of digital FIR filters is of paramount significance in various Digital Signal Processing (DSP) applications. Different optimization techniques can judiciously be utilized to determine the impulse response coefficients of such a filter. These optimization techniques may include conventional processes, such as convex or non-convex optimization methods, or evolutionary algorithms such as the Genetic Algorithm (GA),

S. Chattopadhyay; S. K. Sanyal; A. Chandra

2010-01-01

29

Optimized Kalman filter versus rigorous method in deformation analysis

NASA Astrophysics Data System (ADS)

Kalman filtering is a multiple-input, multiple-output filter that can optimally estimate the states of a system and is applicable to deformation analysis. The states are all the variables needed to completely describe the system behavior of the deformation process as a function of time (such as position, velocity, etc.). The standard Kalman filter estimates the state vector when the measuring process is described by a linear system. In order to process a non-linear system, an optimized variant of the Kalman filter is required. The main purpose of this research is to evaluate the optimized Kalman filter (OKF) as a non-robust method versus the iterative weighted similarity transformation (IWST) as a rigorous (also called robust) method. To satisfy this objective, first a detailed description of executing the optimized Kalman filter using the observations of angles and distances directly is provided. Then, 2-D total station observations comprising distances and angles are used to demonstrate the OKF. For detecting the deformation, a point-related test (single point test) is applied to every point as a local test. Consequently, the findings from the OKF are compared and evaluated against the results from the IWST method. In general, the outcome of the Kalman filter algorithm is close to the preliminary results from the IWST method; the maximum and minimum differences in computed displacements are 2 and 0.2 millimeters, respectively. Finally, Kalman filter approaches are recognized as suitable techniques for deformation analysis.

Aharizad, Nezhla; Setan, Halim; Lim, Mengchan

2012-11-01
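A scalar-measurement, constant-velocity Kalman filter of the kind used for tracking point displacements can be sketched as follows; the state model, noise variances, and epoch spacing are illustrative assumptions, not the paper's settings.

```python
def kalman_cv(zs, dt=1.0, q=1e-4, r=0.01):
    # two-state (position, velocity) Kalman filter driven by scalar position
    # measurements zs; q and r are process / measurement noise variances
    x = [zs[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for z in zs[1:]:
        # predict: x' = F x with F = [[1, dt], [0, 1]], P' = F P F^T + q*I
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with measurement matrix H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x
```

Fed noiseless epochs from a point moving 2 mm per epoch, the filter's velocity estimate converges to the true displacement rate, which is the quantity the deformation test then examines.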

30

Optimization of multiplexed holographic gratings for spectral-spatial imaging filters

We present the design and performance of angle-multiplexed holographic gratings in PQ-PMMA for spectral-spatial imaging filters. These gratings offer high angular-spectral selectivity and the ability to multiplex multiple gratings with the desired transmittance filtering properties.

Barton, Jennifer K.

31

Seeker Optimization Algorithm for Digital IIR Filter Design

Since the error surface of digital infinite-impulse-response (IIR) filters is generally nonlinear and multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a seeker-optimization-algorithm (SOA)-based evolutionary method is proposed for digital IIR filter design. SOA is based on the concept of simulating the act of human searching in which the search direction is based

Chaohua Dai; Weirong Chen; Yunfang Zhu

2010-01-01

32

Optimization of tunable silicon compatible microring filters

Microring resonators can be used as pass-band filters for wavelength division demultiplexing in electronic-photonic integrated circuits for applications such as analog-to-digital converters (ADCs). For high quality signal ...

Amatya, Reja

2008-01-01

33

Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674

Troncoso Romero, David Ernesto

2014-01-01

34

Quarter-Wave Stepped-Impedance Resonator Filters with Quadruplet and Canonical Form Responses

In this paper, compact microstrip quarter-wave stepped-impedance resonator (SIR) bandpass filters with quadruplet and canonical form responses are proposed. The proposed quadruplet filter can be designed to have a pair of transmission zeros to achieve sharp selectivity. In addition, by applying an extra source-load coupling, two additional transmission zeros on both sides of the passband are created to further enhance the

Jhe-Ching Lu; Chi-Yang Chang

2008-01-01

35

Design of optimal correlation filters for hybrid vision systems

NASA Technical Reports Server (NTRS)

Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out the object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.

Rajan, Periasamy K.

1990-01-01
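One of the filter types listed, the phase-only filter, can be sketched as a 1-D circular correlation through the DFT; a real correlator performs these transforms optically or with FFTs, and the pattern and signal length below are illustrative.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def phase_only_correlation(scene, ref):
    # phase-only filter: keep only the phase of the reference spectrum,
    # then locate the correlation peak in the scene
    S, R = dft(scene), dft(ref)
    F = [s * (r.conjugate() / abs(r)) if abs(r) > 1e-12 else 0.0
         for s, r in zip(S, R)]
    c = idft(F)
    return max(range(len(c)), key=lambda n: abs(c[n]))
```

For a scene containing a circularly shifted copy of the reference, the correlation peak lands exactly at the shift, which is how the correlator reports object position.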

36

Optimal filtering methods to structural damage estimation under ground excitation.

This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

2013-01-01

38

Optimally smooth symmetric quadrature mirror filters for image coding

NASA Astrophysics Data System (ADS)

Symmetric quadrature mirror filters (QMFs) offer several advantages for wavelet-based image coding. Symmetry and odd-length contribute to efficient boundary handling and preservation of edge detail. Symmetric QMFs can be obtained by mildly relaxing the filter bank orthogonality conditions. We describe a computational algorithm for these filter banks which is also symmetric in the sense that the analysis and synthesis operations have identical implementations, up to a delay. The essence of a wavelet transform is its multiresolution decomposition, obtained by iterating the lowpass filter. This allows one to introduce a new design criterion, smoothness (good behavior) of the lowpass filter under iteration. This design constraint can be expressed solely in terms of the lowpass filter tap values (via the eigenvalue decomposition of a certain finite-dimensional matrix). Our innovation is to design near- orthogonal QMFs with linear-phase symmetry which are optimized for smoothness under iteration, not for stopband rejection. The new class of optimally smooth QMF filter banks yields high performance in a practical image compression system.

Heller, Peter N.; Shapiro, Jerome M.; Wells, Raymond O., Jr.

1995-04-01

39

Optimal Recursive Digital Filters for Active Bending Stabilization

NASA Technical Reports Server (NTRS)

In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.

Orr, Jeb S.

2013-01-01
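The z-plane parameterization idea can be sketched with a single second-order section: choosing a pole radius below one guarantees a stable recursive filter by construction, and normalizing the gain at z = 1 pins the DC response to unity. The pole placement values here are illustrative, not flight filter designs.

```python
import cmath
import math

def biquad_lowpass(r, theta):
    # poles at r * e^{+-j*theta}; r < 1 guarantees stability by construction
    a1, a2 = -2 * r * math.cos(theta), r * r
    b0 = 1 + a1 + a2  # normalize so that H(z=1) = 1 (unity DC gain)
    return b0, a1, a2

def gain(b0, a1, a2, w):
    # magnitude of H(z) = b0 / (1 + a1 z^-1 + a2 z^-2) at z = e^{jw}
    z = cmath.exp(1j * w)
    return abs(b0 / (1 + a1 / z + a2 / (z * z)))
```

Searching directly over (r, theta) with r restricted below one is the kind of constrained z-plane parameterization the paper describes: every candidate in the search space is automatically a stable, low-pass recursive filter.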

40

An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images.

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3-D optimized blockwise version of the nonlocal (NL)-means filter (Buades, et al., 2005). The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2-D images, but reducing the computational burden is a critical aspect of extending the method to 3-D images. To overcome this problem, we propose improvements that drastically reduce the computation time while preserving the performance of the NL-means filter. A fully automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: 1) an automatic tuning of the smoothing parameter; 2) a selection of the most relevant voxels; 3) a blockwise implementation; and 4) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb (Collins, et al., 1998). The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods, anisotropic diffusion (Perona and Malik, 1990) and total variation minimization (Rudin, et al., 1992), in terms of accuracy (measured by the peak signal-to-noise ratio) with low computation time. Finally, qualitative results on real data are presented. PMID:18390341

Coupe, P; Yger, P; Prima, S; Hellier, P; Kervrann, C; Barillot, C

2008-04-01
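The core NL-means idea, averaging driven by patch similarity rather than spatial proximity, reduces to a few lines in 1-D. The automatic parameter tuning, voxel selection, and blockwise parallelism the paper contributes are omitted, and the patch size and smoothing parameter below are illustrative.

```python
import math

def nl_means_1d(x, patch=2, h=1.0):
    # each output sample is a weighted average of ALL samples, with weights
    # given by the similarity of their surrounding patches (clamped at borders)
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            d = sum((x[min(max(i + k, 0), n - 1)] - x[min(max(j + k, 0), n - 1)]) ** 2
                    for k in range(-patch, patch + 1))
            w = math.exp(-d / (h * h))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```

On an impulsive outlier in a flat signal, the dissimilar patch around the spike receives low weight from its own neighbours and the spike is pulled down; the blockwise trick in the paper amortizes this O(n^2) cost over whole blocks of voxels.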

42

Optimal Signal Processing of Frequency-Stepped CW Radar Data

NASA Technical Reports Server (NTRS)

An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.

Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

1995-01-01
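The first step of the procedure, the overdetermined least-squares solve for echo amplitudes at candidate delays, can be sketched for two echoes; the frequencies, delays, and amplitudes below are synthetic, not the paper's measured network-analyzer data.

```python
import cmath

def solve_amplitudes(freqs, y, delays):
    # model: y(f_k) = a0 * exp(-2*pi*j*f_k*t0) + a1 * exp(-2*pi*j*f_k*t1);
    # least-squares amplitudes via the 2x2 normal equations A^H A a = A^H y
    A = [[cmath.exp(-2j * cmath.pi * f * t) for t in delays] for f in freqs]
    AhA = [[sum(A[k][i].conjugate() * A[k][j] for k in range(len(freqs)))
            for j in range(2)] for i in range(2)]
    Ahy = [sum(A[k][i].conjugate() * y[k] for k in range(len(freqs)))
           for i in range(2)]
    det = AhA[0][0] * AhA[1][1] - AhA[0][1] * AhA[1][0]
    a0 = (Ahy[0] * AhA[1][1] - AhA[0][1] * Ahy[1]) / det
    a1 = (AhA[0][0] * Ahy[1] - AhA[1][0] * Ahy[0]) / det
    return a0, a1
```

The outer nonlinear search in the paper then scans the delay pair, calling a solve like this at each candidate and keeping the delays with the smallest residual.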

44

NASA Astrophysics Data System (ADS)

The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup and dose-volume constraints were applied for all optimization methods for five head-and-neck tumor patients. Two-step plans were produced by converting the ideal fluence, with or without a smoothing filter, into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, or 12, producing a directly deliverable sequence. Moreover, plans were generated both with and without a split beam. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP), which are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) and NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over two-step IMRT planning.

Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

2009-12-01

45

AN ADAPTIVE PROJECTION ALGORITHM FOR MULTIRATE FILTER BANK OPTIMIZATION

Due to the nonquadratic nature of the cost function to be minimized, non-gradient algorithms may offer convergence to the global minimum of the cost function while at the same time avoiding potential local minima.

Regalia, Phillip A.

46

Na-Faraday rotation filtering: The optimal point

Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas; the anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

2014-01-01
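The trade-off behind the single global optimum (Faraday rotation grows with vapour density while absorption eats transmission) can be caricatured with a toy model; the functional forms and the proportionality of rotation to optical depth are my simplifying assumptions, not the paper's sodium-vapour physics.

```python
import math

def fadof_transmission(od, rot_per_od=1.0):
    # crossed polarizers pass sin^2(rotation); absorption contributes exp(-od)
    return math.sin(rot_per_od * od) ** 2 * math.exp(-od)

def optimal_od(rot_per_od=1.0, n=5000, od_max=5.0):
    # brute-force scan for the single global optimum of the toy trade-off
    return max((od_max * k / n for k in range(1, n + 1)),
               key=lambda od: fadof_transmission(od, rot_per_od))
```

For this toy model the optimum satisfies tan(od) = 2, i.e. od close to 1.107; the paper's analytical treatment plays the same game with the real dispersion and absorption profiles.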

48

Two-step intensity modulated arc therapy (2-step IMAT) with segment weight and width optimization

Background 2-step intensity modulated arc therapy (IMAT) is a simplified IMAT technique which delivers the treatment over typically two continuous gantry rotations. The aim of this work was to implement the technique into a computerized treatment planning system and to develop an approach to optimize the segment weights and widths. Methods 2-step IMAT was implemented into the Prism treatment planning system. A graphical user interface was developed to generate the plan segments automatically based on the anatomy in the beam's-eye-view. The segment weights and widths of 2-step IMAT plans were subsequently determined in Matlab using a dose-volume based optimization process. The implementation was tested on a geometric phantom with a horseshoe shaped target volume and then applied to a clinical paraspinal tumour case. Results The phantom study verified the correctness of the implementation and showed a considerable improvement over a non-modulated arc. Further improvements in the target dose uniformity after the optimization of 2-step IMAT plans were observed for both the phantom and clinical cases. For the clinical case, optimizing the segment weights and widths reduced the maximum dose from 114% of the prescribed dose to 107% and increased the minimum dose from 87% to 97%. This resulted in an improvement in the homogeneity index of the target dose for the clinical case from 1.31 to 1.11. Additionally, the high dose volume V105 was reduced from 57% to 7% while the maximum dose in the organ-at-risk was decreased by 2%. Conclusions The intuitive and automatic planning process implemented in this study increases the prospect of the practical use of 2-step IMAT. This work has shown that 2-step IMAT is a viable technique able to achieve highly conformal plans for concave target volumes with the optimization of the segment weights and widths. 
Future work will include planning comparisons of the 2-step IMAT implementation with fixed gantry intensity modulated radiotherapy (IMRT) and commercial IMAT implementations. PMID:21631957

2011-01-01
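The segment-weight optimization step can be sketched as a projected-gradient least-squares fit of delivered dose to a prescription; the dose-influence matrix and step size below are illustrative, and the paper's actual objective is dose-volume based rather than plain least squares.

```python
def optimize_weights(D, target, iters=500, lr=0.1):
    # D[i][j]: dose delivered to point j by unit weight of segment i;
    # gradient descent on sum_j (dose_j - target_j)^2 with weights clamped >= 0
    nseg, npts = len(D), len(D[0])
    w = [1.0] * nseg
    for _ in range(iters):
        dose = [sum(w[i] * D[i][j] for i in range(nseg)) for j in range(npts)]
        grad = [sum(2 * (dose[j] - target[j]) * D[i][j] for j in range(npts))
                for i in range(nseg)]
        w = [max(0.0, w[i] - lr * grad[i]) for i in range(nseg)]
    return w
```

The clamp enforces the physical constraint that segment weights (monitor units) cannot be negative; segment-width optimization adds the MLC leaf positions as further variables on top of this.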

49

Degeneracy, frequency response and filtering in IMRT optimization

NASA Astrophysics Data System (ADS)

This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.

Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus

2004-07-01

50

Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

NASA Technical Reports Server (NTRS)

The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained including documentation available, potential partner collaborations, and next steps in FY09.

Naiman, Cynthia Gutierrez

2008-01-01

51

Clever particle filters, sequential importance sampling and the optimal proposal

NASA Astrophysics Data System (ADS)

Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean., 6-9 September 2011.

Snyder, Chris

2014-05-01
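
The variance-minimizing property described above can be illustrated with a small numerical sketch. The example below is an illustrative assumption, not the paper's experiment: for a scalar linear-Gaussian model the optimal proposal is available in closed form, and its importance weights depend only on the predictive likelihood N(y; x_k, q + r), so they spread less than the bootstrap weights, which also carry the transition noise.

```python
import math
import random

def normal_pdf(x, mean, var):
    """Gaussian density N(x; mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def weight_variances(n_particles=5000, q=1.0, r=1.0, y=2.0, seed=0):
    """Compare importance-weight variance of the bootstrap proposal (sample
    from the transition prior, weight by the likelihood) against the optimal
    proposal p(x_{k+1} | x_k, y_{k+1}) for the scalar model
        x_{k+1} = x_k + w,  w ~ N(0, q);   y_{k+1} = x_{k+1} + v,  v ~ N(0, r).
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]

    # Bootstrap: propose x' ~ N(x, q), then weight by the likelihood N(y; x', r).
    w_boot = [normal_pdf(y, x + rng.gauss(0.0, math.sqrt(q)), r)
              for x in particles]
    # Optimal proposal: the weight is the predictive likelihood N(y; x, q + r);
    # the proposed state itself does not enter the weight.
    w_opt = [normal_pdf(y, x, q + r) for x in particles]

    def norm_var(weights):
        total = sum(weights)
        normed = [w / total for w in weights]
        mean = 1.0 / len(normed)
        return sum((w - mean) ** 2 for w in normed) / len(normed)

    return norm_var(w_boot), norm_var(w_opt)
```

With the default settings the optimal-proposal weights are visibly less dispersed, which is exactly the property that the degeneracy bounds discussed in the abstract are built on.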

52

FIR filter optimization for video processing on FPGAs

NASA Astrophysics Data System (ADS)

Two-dimensional finite impulse response (FIR) filters are an important component in many image and video processing systems. The processing of complex video applications in real time requires high computational power, which can be provided using field programmable gate arrays (FPGAs) due to their inherent parallelism. The most resource-intensive components in computing FIR filters are the multiplications of the folding operation. This work proposes two optimization techniques for high-speed implementations of the required multiplications with the least possible number of FPGA components. Both methods use integer linear programming formulations which can be optimally solved by standard solvers. In the first method, a formulation for the pipelined multiple constant multiplication problem is presented. In the second method, multiplication structures based on look-up tables are also taken into account. Due to the low coefficient word size in video processing filters of typically 8 to 12 bits, an optimal solution is found for most of the filters in the benchmark used. A complexity reduction of 8.5% for a Xilinx Virtex 6 FPGA could be achieved compared to state-of-the-art heuristics.

Kumm, Martin; Fanghänel, Diana; Möller, Konrad; Zipf, Peter; Meyer-Baese, Uwe

2013-12-01

53

Optimization of multiplierless two-dimensional digital filters

NASA Astrophysics Data System (ADS)

Circularly symmetric and diamond-shaped low-pass linear phase FIR filters are designed using coefficients comprising the sum or difference of two signed power-of-two (SPT) terms. A minimax error criterion is adopted in conjunction with an optimization process based on the use of genetic algorithms (GAs). The results presented are compared with those obtained using various other design methods, including simulated annealing, linear programming and simple rounding of an optimum (continuous) minimax solution. The filters designed using GAs exhibit superior performance to those designed using other methods.

Sriranganathan, S.; Bull, David R.; Redmill, David W.

1996-02-01
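
Representing each coefficient as the sum or difference of two signed power-of-two (SPT) terms, as in the abstract above, is what removes the multipliers: a coefficient multiply becomes at most two shifts and one add or subtract. The brute-force helper below is a hypothetical illustration of that representation, not the paper's genetic algorithm:

```python
def best_two_spt(value, max_shift=8):
    """Find the two-term signed power-of-two representation
    s1*2^-a + s2*2^-b (s in {-1, 0, 1}, 0 <= a, b <= max_shift)
    closest to `value`, by exhaustive search over all pairs."""
    terms = [0.0] + [s * 2.0 ** -e for s in (1, -1) for e in range(max_shift + 1)]
    best = None
    for t1 in terms:
        for t2 in terms:
            err = abs(value - (t1 + t2))
            if best is None or err < best[0]:
                best = (err, t1, t2)
    return best[1], best[2]
```

For example, 0.625 is represented exactly as 2^-1 + 2^-3, while 0.3 can only be approximated (0.3125 = 2^-2 + 2^-4 is the closest two-term value); in hardware, each retained term is a wire shift and the pair is combined by a single adder/subtractor.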

54

NASA Technical Reports Server (NTRS)

We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter length is greater than a multiple of the quarter wavelength at the center passband frequency (lambda(sub g)/4), the proposed filter length is less than (n + 1)·lambda(sub g)/8, where n is the filter order. The filter's spurious response and physical dimension are controlled by the step impedance ratio (R) between the two transmission line sections of each lambda(sub g)/4 resonator. The experimental result shows that, with R of 0.2, the out-of-band attenuation is greater than 40 dB, and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.

U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy

2005-01-01

55

Linear phase low pass FIR filter design using Improved Particle Swarm Optimization

In this paper, an optimal design of a linear phase digital low pass finite impulse response (FIR) filter using Improved Particle Swarm Optimization (IPSO) has been presented. In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem. The conventional gradient

Saptarshi Mukherjee; Rajib Kar; Durbadal Mandal; Sangeeta Mondal; S. P. Ghoshal

2011-01-01

56

A discrete particle swarm optimization technique (DPSO) for power filter design

In this paper, a novel optimization approach is developed to optimally solve the problem of power system shunt filter design based on discrete particle swarm optimization (DPSO) technique to ensure harmonic reduction and noise mitigation on the electrical utility grid. The proposed power filter design is based on the minimization of a multi objective function. The main power filter objective

Adel M. Sharaf; Adel A. A. El-Gammal

2009-01-01

57

This letter proposes an ultra wideband (UWB) bandpass filter (BPF) based on embedded stepped impedance resonators (SIRs). In this study, broad side coupled patches and high impedance microstrip lines are adopted as quasi-lumped elements for realizing the coupling between adjacent SIRs, which are used to suppress stopband harmonic response. An eight-pole UWB BPF is developed from lump-element bandpass prototype and

Zhang-Cheng Hao; Jia-Sheng Hong

2008-01-01

58

Folded Finite-Ground-Width CPW Quarter-Wave Stepped Impedance Resonator Filters

This paper proposes two new types of folded finite-ground-width CPW lambda/4 stepped impedance resonators (SIRs). The newly proposed CPW lambda/4 SIRs are much shorter than the conventional CPW lambda/4 SIR. Compared to the multifold CPW lambda/4 UIR, the proposed resonators show a much higher first spurious resonant frequency. Filters implemented with these newly proposed CPW lambda/4 SIRs depict not only

Chin-Hsuing Chen; Chi-Yang Chang

2007-01-01

59

Feature extraction is a critical step in real-time spike sorting after a spike is detected. Features should be informative and noise insensitive for high classification accuracy. This paper describes a new feature extraction method that utilizes a feature denoising filter to improve noise immunity while preserving spike information. Six features were extracted from filtered spikes, including a newly developed feature, and a separability index was applied to select optimal features. Using a set of the three highest-performing features, which includes the new feature, this method can achieve a spike classification error as low as 5% for the worst-case noise level of 0.2. The computational complexity is only 11% of the principal component analysis method and it costs only nine registers per channel. PMID:25570192

Yuning Yang; Boling, Samuel; Eftekhar, Amir; Paraskevopoulou, Sivylla E; Constandinou, Timothy G; Mason, Andrew J

2014-08-01
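
The abstract does not define the separability index used to rank the six features; a common choice for scalar features is a Fisher-style ratio of between-class to within-class scatter. The sketch below works under that assumption, and all function names are hypothetical:

```python
def fisher_score(class_a, class_b):
    """Between-class over within-class scatter for one scalar feature:
    (mean_a - mean_b)^2 / (var_a + var_b). Higher means more separable."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, v
    ma, va = mean_var(class_a)
    mb, vb = mean_var(class_b)
    return (ma - mb) ** 2 / (va + vb)

def select_features(features_a, features_b, k):
    """Rank features (dict: name -> per-class samples) by separability
    and keep the top k, mimicking the paper's feature-selection step."""
    scores = {name: fisher_score(features_a[name], features_b[name])
              for name in features_a}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A feature whose class means are far apart relative to the class spreads scores high and survives selection; noisy, overlapping features are discarded.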

60

Optimal subband Kalman filter for normal and oesophageal speech enhancement.

This paper presents a single-channel speech enhancement system using subband Kalman filtering by estimating optimal Autoregressive (AR) coefficients and variance for speech and noise, using Weighted Linear Prediction (WLP) and a Noise Weighting Function (NWF). The system is applied to normal and Oesophageal speech signals. The method is evaluated by Perceptual Evaluation of Speech Quality (PESQ) score and Signal to Noise Ratio (SNR) improvement for normal speech, and by Harmonic to Noise Ratio (HNR) for Oesophageal Speech (OES). Compared with previous systems, normal speech shows a 30% increase in PESQ score and a 4 dB SNR improvement, and OES shows a 3 dB HNR improvement. PMID:25227070

Ishaq, Rizwan; García Zapirain, Begoña

2014-01-01
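
The quoted "4 dB SNR improvement" can be made concrete: SNR in dB compares clean-signal energy with residual-error energy, and the improvement is the difference between the SNR of the enhanced signal and that of the noisy signal. A minimal sketch, illustrative only and not the paper's subband Kalman system:

```python
import math

def snr_db(clean, estimate):
    """Signal-to-noise ratio in dB of an estimate against the clean signal."""
    signal = sum(c * c for c in clean)
    error = sum((c - e) ** 2 for c, e in zip(clean, estimate))
    return 10 * math.log10(signal / error)

def snr_improvement_db(clean, noisy, enhanced):
    """A '4 dB improvement' means the enhanced signal's SNR exceeds
    the unprocessed noisy signal's SNR by 4 dB."""
    return snr_db(clean, enhanced) - snr_db(clean, noisy)
```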

61

Integration of optimized low-pass filters in band-pass filters for out-of-band improvement

We propose an original structure for the design of high performance filters with simultaneously controlled band-pass and band-reject responses. The band-reject response is controlled due to the integration of low-pass structure. Thus, the spurious resonances of the band-pass filter are rejected up to the low-pass filter ones. In this way, we have to optimize the response of the low-pass structure

Cédric Quendo; C. Person; E. Rius; M. Ney

2001-01-01

62

A novel recursive scheme to compute the global and robust optimal variable fractional delay (VFD) filters based on the Particle Swarm Optimization (PSO) is developed in this paper. If the PSO is directly used to compute an optimal VFD filter the particles with high dimension might be yielded, which could require a long convergence time. Our recursive scheme invokes only

Dongyan Sun; Jiaxiang Zhao; Xiaoming Zhao

2009-01-01

63

Quantum demolition filtering and optimal control of unstable systems.

A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given for both open-loop and feedback control schemes, corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216

Belavkin, V P

2012-11-28

64

Neuromuscular fiber segmentation through particle filtering and discrete optimization

NASA Astrophysics Data System (ADS)

We present an algorithm to segment a set of parallel, intertwined and bifurcating fibers from 3D images, targeted at the identification of neuronal fibers in very large sets of 3D confocal microscopy images. The method consists of preprocessing, local calculation of fiber probabilities, seed detection, tracking by particle filtering, global supervised seed clustering and final voxel segmentation. The preprocessing uses a novel random local probability filtering (RLPF). The fiber probabilities computation is performed by means of SVM using steerable filters and the RLPF outputs as features. The global segmentation is solved by discrete optimization. The combination of global and local approaches makes the segmentation robust, yet the individual data blocks can be processed sequentially, limiting memory consumption. The method is automatic, but efficient manual interactions are possible if needed. The method is validated on the Neuromuscular Projection Fibers dataset from the Diadem Challenge. On the first 15 blocks present, our method has a 99.4% detection rate. We also compare our segmentation results to a state-of-the-art method. On average, the performance of our method is either higher than or equivalent to that of the state-of-the-art method, but fewer user interactions are needed in our approach.

Dietenbeck, Thomas; Varray, François; Kybic, Jan; Basset, Olivier; Cachard, Christian

2014-03-01

65

Write Strategy Optimization Method with Two-Step Search for Blu-ray Disc Recording

A new write strategy (WS) optimization method with a two-step search process for Blu-ray Disc (BD) recording is developed to shorten the optimization time. This method is realized by the WS optimization system, which is constructed with an optical pickup, a disc tester, and the WS optimization algorithm. The optimization is executed according to the two-step search process along the

Nobuo Takeshita; Yusuke Kanatake; Tomo Kishigami; Koichi Ikuta

2010-01-01

66

NASA Astrophysics Data System (ADS)

This paper presents a triple-band bandpass filter for applications in GSM, WiMAX, and WLAN systems. The proposed filter comprises tri-section step-impedance and capacitively loaded step-impedance resonators, which are combined using the cross coupling technique. Additionally, tapered lines are used to connect at both ports of the filter in order to enhance matching for the tri-band resonant frequencies. The filter can operate at the resonant frequencies of 1.8 GHz, 3.7 GHz, and 5.5 GHz. At the resonant frequencies, the measured values of S11 are -17.2 dB, -33.6 dB, and -17.9 dB, while the measured values of S21 are -2.23 dB, -2.98 dB, and -3.31 dB, respectively. Moreover, the presented filter has a compact size compared with conventional open-loop cross coupling triple band bandpass filters.

Chomtong, P.; Akkaraekthalin, P.

2014-05-01

67

Design of waveguide filters by using genetically optimized frequency selective surfaces

A new optimization procedure suitable for the design of waveguide filters is presented. The filter structure consists of a frequency selective surface (FSS), placed on the transverse plane of a rectangular waveguide, so introducing a filtering behavior of the waveguide. Due to the boundary conditions imposed by the metallic waveguide walls, the FSS results to be infinite in extent, allowing

Agostino Monorchio; Giuliano Manara; Umberto Serra; Giovanni Marola; Enrico Pagana

2005-01-01

68

Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data

NASA Astrophysics Data System (ADS)

Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites enter occasionally a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, a routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. 
In addition, both signal and noise covariance matrices depend on the type of mass transport processes under investigation. For instance, processes of hydrological origin occur at short time scales, so that the input time series is typically short (1 month or less), which implies a relatively strong noise in the derived model. On the contrary, the study of a long-term ice mass depletion requires a long time series of satellite data, which leads to a reduction of noise in the mass transport model. Of course, the spatial patterns (and therefore, the signal covariance matrices) of various mass transport processes are also very different. In the presented study, we compare various strategies to build the signal and noise covariance matrices in the context of mass transport modeling. In this way, we demonstrate the benefits of an accurate construction of an optimal filter as outlined above, compared to simplified strategies. Furthermore, we consider both models based on GRACE data alone and combined GRACE/GOCE models. In this way, we shed more light on a potential synergy of the GRACE and GOCE satellite missions. This is important not only for the best possible mass transport modeling on the basis of all available data, but also for the optimal planning of future satellite gravity missions.

Ditmar, P.; Hashemi Farahani, H.; Klees, R.

2011-12-01
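
In the simplest case, where the signal and noise covariance matrices are approximated as diagonal, the Wiener filter described above reduces to scaling each coefficient of the unconstrained model by s/(s + n). The sketch below shows only this diagonal approximation; the full filter discussed in the abstract uses complete covariance matrices:

```python
def wiener_diagonal(coeffs, signal_var, noise_var):
    """Diagonal Wiener filter: each coefficient is shrunk by the factor
    s / (s + n), so coefficients whose noise variance dominates their
    signal variance are strongly suppressed, while high-SNR coefficients
    pass almost unchanged."""
    return [c * s / (s + n)
            for c, s, n in zip(coeffs, signal_var, noise_var)]
```

With a signal-to-noise ratio of 100 the coefficient is kept at about 99% of its value; with a ratio of 0.01 it is reduced by two orders of magnitude.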

69

Correlation methods are becoming increasingly attractive tools for image recognition and location. This renewed interest in correlation methods is spurred by the availability of high-speed image processors and the emergence of correlation filter designs that can optimize relevant figures of merit. In this paper, a new correlation filter design method is presented that allows one to optimally trade off among potentially

B. V. K. Vijaya Kumar; Abhijit Mahalanobis; Alex Takessian

2000-01-01

70

Particle Swarm Optimization with Quantum Infusion for the design of digital filters

In this paper, particle swarm optimization with quantum infusion (PSO-QI) has been applied for the design of digital filters. In PSO-QI, Global best (gbest) particle (in PSO star topology) obtained from particle swarm optimization is enhanced by doing a tournament with an offspring produced by quantum behaved PSO, and selecting the winner as the new gbest. Filters are designed based

Bipul Luitel; Ganesh Kumar Venayagamoorthy

2008-01-01

71

Optimal linear filters are well known as a useful technique for processing extracellular recordings of neural activity. They can be tuned to respond only to a corresponding waveform template, while minimizing the energy of all other templates, and can be used to resolve spikes that are overlapping. The derivation of optimal linear multichannel filters goes back to , but the

Roland Vollgraf; Klaus Obermayer

2006-01-01

72

FIR Filter Design via Spectral Factorization and Convex Optimization

Magnitude filter design problems involve magnitude specifications (audio, spectrum shaping, ...). Upper bounds on the magnitude are convex in the filter coefficients h, while lower bounds are not. The classical example is lowpass filter design with maximum stopband attenuation, with variables h (the filter coefficients) and the stopband attenuation, and with parameters the logarithmic passband ripple, the order n, the passband frequency and the stopband frequency. Magnitude filter design problems are therefore nonconvex

Lieven Vandenberghe; Shao-po Wu; Stephen Boyd

1997-01-01

73

On an Optimal Number of Time Steps for a Sequential Solution of an Elliptic-Hyperbolic

The hyperbolic saturation transport equation is solved over a certain period of time using a frozen Darcy velocity for the coupled system. We provide two procedures aimed at the estimation of an optimal set of time steps

74

Sparsity Optimization in Design of Multidimensional Filter Networks

sub-filters, which is the simplest network structure of those considered in this paper, ... iteration, a new basis element is selected from the remaining columns of this matrix following a greedy principle ... filtering in magnetic resonance angiography.

2014-11-22

75

Optimizing spatial filters with kernel methods for BCI applications

NASA Astrophysics Data System (ADS)

Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step in BCI technology is to find a reliable method to detect particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract discriminative patterns from non-stationary EEG signals based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is to perform a nonlinear form of CSP by the use of kernel methods that can efficiently compute the common and distinct components in high-dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high recognition rates of linear discrimination and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.

Zhang, Jiacai; Tang, Jianjun; Yao, Li

2007-11-01

76

NASA Astrophysics Data System (ADS)

This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is an algorithm-specific parameter-less algorithm. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. Unknown filter parameters are considered as a vector to be optimized by these algorithms. MATLAB programming is used for implementation of the proposed algorithms. Experimental results show that TLBO is more accurate in estimating the filter parameters than the BB-BC optimization algorithm and has a faster convergence rate when compared to the PSO algorithm. TLBO is preferred where accuracy is more essential than convergence speed.

Singh, R.; Verma, H. K.

2013-12-01
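
The "algorithm-specific parameter-less" structure of TLBO mentioned above consists of a teacher phase (move each learner toward the best solution and away from the class mean) and a learner phase (pairwise interaction between learners). The compact sketch below runs on a generic cost function; applying it to IIR parameter identification as in the paper would only change the cost function, and all names here are illustrative:

```python
import random

def tlbo(f, dim, bounds, pop=20, iters=150, seed=1):
    """Minimize f over [lo, hi]^dim with teaching-learning-based
    optimization; no algorithm-specific tuning parameters are needed."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        teacher = X[min(range(pop), key=F.__getitem__)]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            # Teacher phase: move toward the teacher, away from the mean.
            tf = rng.choice((1, 2))  # "teaching factor", randomly 1 or 2
            cand = [clip(X[i][d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
            # Learner phase: interact with a randomly chosen classmate.
            j = rng.randrange(pop)
            if j != i:
                sign = 1 if F[i] < F[j] else -1
                cand = [clip(X[i][d] + rng.random() * sign * (X[i][d] - X[j][d]))
                        for d in range(dim)]
                fc = f(cand)
                if fc < F[i]:
                    X[i], F[i] = cand, fc
    best = min(range(pop), key=F.__getitem__)
    return X[best], F[best]
```

On a simple sphere cost function the population contracts toward the optimum without any per-problem parameter tuning, which is the property the abstract contrasts with BB-BC and PSO.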

77

Triple stage and single stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention time (EBRT 100-180 s) and liquid recirculation velocity (q 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q. However, the highest q of 7.1 m/h forced a large amount of liquid through the media, causing a reduction in bed porosity and in H2S removal in S-BTF. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 gH2S/m(3)/h. In the subsequent continuous operation test, it was found that T-BTF could maintain higher H2S elimination capacity and removal efficiency at 175.6±41.6 gH2S/m(3)/h and 89.0±6.8% versus S-BTF at 159.9±42.8 gH2S/m(3)/h and 80.1±10.2%, respectively. Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031

Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu

2015-03-01

78

Optimal stability for trapezoidal-backward difference split-steps

The marginal stability of the trapezoidal method makes it dangerous to use for highly non-linear oscillations. Damping is provided by backward differences. The split-step combination (αΔt trapezoidal, (1 − α)Δt for BDF2) ...

Dharmaraja, Sohan

79

Full-Newton step polynomial-time methods for linear optimization based on locally

Interior-point methods (IPMs) are among the most effective methods for solving wide classes of linear and nonlinear optimization problems, and a large number of results have been obtained for IPMs for Linear Optimization (LO)

Roos, Kees

80

Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization

Keywords: fingerprint image generation; evolutionary algorithm; image filters; input pressure. Constructing a fingerprint database is important to evaluate the performance

Cho, Sung-Bae

81

Optimization of the rolling-circle filter for Raman background subtraction.

A procedure is proposed to optimize a high-pass filter enabling one to subtract the broadband background signals inherent in Raman spectra. A spectral approach is used to analyze the characteristics of the filter and the distortions in the processed spectra. Examples of the processing of real spectra are presented. PMID:16608572

Brandt, N N; Brovko, O O; Chikishev, A Y; Paraschuk, O D

2006-03-01

82

Performance Optimization of a Photovoltaic Generator with an Active Power Filter Application

Nomenclature: P, photovoltaic power; GPV, photovoltaic generator; h, harmonic; MPPT, Maximum Power Point Tracking. Published in the International Journal on Engineering Applications.

Paris-Sud XI, UniversitÃ© de

83

Inertial measurement unit calibration using Full Information Maximum Likelihood Optimal Filtering

The robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF) for inertial measurement unit (IMU) calibration in high-g centrifuge environments is considered. FIMLOF uses an approximate Newton's Method ...

Thompson, Gordon A. (Gordon Alexander)

2005-01-01

84

Bulk acoustic wave filters synthesis and optimization for multi-standard communication terminals.

This article presents a design methodology for bulk acoustic wave (BAW) filters. First, an overview of BAW physical principles, BAW filter synthesis, and the modified Butterworth-van Dyke model is given. Next, the design and optimization methodology is presented and applied to a mixed ladder-lattice BAW bandpass filter for the Universal Mobile Telecommunications System (UMTS) TX-band at 1.95 GHz and to ladder and lattice BAW bandpass filters for the DCS1800 TX-band at 1.75 GHz. In each case, the BAW filters are based on AlN resonators. The UMTS filter is designed with conventional molybdenum electrodes, whereas the DCS filter electrodes are made of innovative iridium. PMID:20040426

Giraud, Sylvain; Bila, Stéphane; Chatras, Matthieu; Cros, Dominique; Aubourg, Michel

2010-01-01

85

Optimal Filters for High-Speed Compressive Detection in ...

Feb 28, 2013 ... With filters and exposure times fixed, we use the best linear unbiased estimator .... The nonnegativity of A implies that if there is a number C such that all ?? .... In Figure 2, we compare simulation using OB filters for 200µs and ...

2013-02-14

86

Optimal design of FIR digital filters with monotone passband response

The application of linear programming to the design of FIR digital filters with constraints on the derivative of the frequency response is described. Numerical considerations in the implementation are discussed and a program is given with examples for the design of filters with optional monotone response in passbands. The method provides the user with an additional degree of flexibility over

K. Steiglitz

1979-01-01

87

Optimal LS IIR filter design for music analysis\\/synthesis

Addresses the design of fixed, low-order infinite impulse response (IIR) filters for modeling the perceptually significant features of the spectra of string instrument bodies. The problem is stated mathematically, and the design methodologies compared here are reviewed. The experimental results are presented. The experimental set-up, data acquisition and data preprocessing are described. The spectra of the IIR filters designed using

V. L. Stonick; Dana Massie

1992-01-01

88

Fibonacci sequence, golden section, Kalman filter and optimal control

A connection between the Kalman filter and the Fibonacci sequence is developed. More precisely it is shown that, for a scalar random walk system in which the two noise sources (process and measurement noise) have equal variance, the Kalman filter's estimate turns out to be a convex linear combination of the a priori estimate and of the measurements with coefficients

Alessio Benavoli; Luigi Chisci; Alfonso Farina

2009-01-01
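
The connection above is easy to verify: for the scalar random walk with equal noise variances Q = R and perfect initial knowledge (P_0 = 0, an assumption made here for concreteness), the Kalman gains are ratios of Fibonacci numbers, K_k = F(2k)/F(2k+1), and converge to the inverse golden section (sqrt(5) - 1)/2:

```python
from fractions import Fraction

def kalman_gains(n):
    """Iterate the scalar Kalman filter for the random walk
    x_{k+1} = x_k + w, y_k = x_k + v with Q = R = 1 and P_0 = 0,
    using exact rational arithmetic; return the gain sequence."""
    Q = R = Fraction(1)
    P = Fraction(0)          # posterior variance
    gains = []
    for _ in range(n):
        M = P + Q            # predicted (prior) variance
        K = M / (M + R)      # Kalman gain
        P = M * R / (M + R)  # updated posterior variance
        gains.append(K)
    return gains

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a
```

The gain sequence starts 1/2, 3/5, 8/13, ... and the steady-state Riccati equation M^2 - M - 1 = 0 puts the predicted variance at the golden ratio itself.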

89

Evaluation of effective energy using radiochromic film and a step-shaped aluminum filter.

Although the half-value layer (HVL) is one of the important parameters for quality assurance (QA) and quality control (QC), constant monitoring has not been performed because measurements using an ionization chamber (IC) are time-consuming and complicated. To solve these problems, a method using radiochromic film and step-shaped aluminum (Al) filters has been developed. To this end, GAFCHROMIC EBT2 dosimetry film (GAF-EBT2), which shows only slight energy dependency errors in comparison with GAFCHROMIC XR TYPE-R (GAF-R) and other radiochromic films, has been used. The measurement X-ray tube voltages were 120, 100, and 80 kV. GAF-EBT2 was scanned using a flat-bed scanner before and after exposure. To remove the non-uniformity error caused by image acquisition of the flat-bed scanner, the scanning image of the GAF-EBT2 before exposure was subtracted after exposure. HVL was evaluated using the density attenuation ratio. The effective energies obtained using HVLs of GAF-EBT2, GAF-R, and an IC dosimeter were compared. Effective energies with X-ray tube voltages of 120, 100, and 80 kV using GAF-EBT2 were 40.6, 36.0, and 32.9 keV, respectively. The difference ratios of the effective energies using GAF-EBT2 and the IC were 5.0%, 0.9%, and 2.7%, respectively. GAF-EBT2 and GAF-R proved to be capable of measuring effective energy with comparable precision. However, in HVL measurements of devices operating in the high-energy range (X-ray CT, radiotherapy machines, and so on), GAF-EBT2 was found to offer higher measurement precision than GAF-R, because it shows only a slight energy dependency. PMID:21437731

Gotanda, T; Katsuda, T; Gotanda, R; Tabuchi, A; Yamamoto, K; Kuwano, T; Yatake, H; Kashiyama, K; Yabunaka, K; Akagawa, T; Takeda, Y

2011-06-01

90

Design of optimal finite wordlength FIR digital filters using integer programming techniques

The application of a general-purpose integer-programming computer program to the design of optimal finite wordlength FIR digital filters is described. Examples of two optimal low-pass FIR finite wordlength filters are given and the results are compared with the results obtained by rounding the infinite wordlength coefficients. An analysis of the approach based on the results of more than 50 design
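For context, the rounding baseline that the optimal integer-programming designs are compared against can be sketched as follows (an illustrative windowed-sinc prototype and 8-bit quantization, not the paper's example filters):

```python
import math

def lowpass_taps(n, fc):
    """Hamming-windowed sinc lowpass prototype (cutoff fc in cycles/sample)."""
    m = (n - 1) / 2
    taps = []
    for i in range(n):
        x = i - m
        ideal = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1))
        taps.append(ideal * window)
    return taps

def quantize(taps, bits):
    """Round to signed fixed point with `bits` fractional bits."""
    scale = 2 ** bits
    return [round(t * scale) / scale for t in taps]

h = lowpass_taps(15, 0.25)
hq = quantize(h, 8)
# rounding error per coefficient is at most half an LSB = 2**-(bits+1)
worst = max(abs(a - b) for a, b in zip(h, hq))
```

The integer-programming approach searches over the integer coefficient space directly instead of accepting this per-coefficient rounding, which is why it can achieve a smaller frequency-response error at the same wordlength.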

DUSAN M. KODEK

1980-01-01

91

We propose to characterize various coding domains for the joint transform correlator. To achieve that, optimal trade- off filters have ben computed and then optimally constrained to given coding domains with an algorithm we have developed. Then, these coding domains have been evaluated in relation to the trade-offs they achieve.

Laurent Bigue; Michel Fraces; Pierre Ambs

1996-01-01

92

Optease Vena Cava Filter Optimal Indwelling Time and Retrievability

The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare change in retrieval policy during the years: group A, 60 patients with filter retrievals performed before December 31 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A, and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 days (range, 8-122). There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and in group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days, but more failures will be encountered with this approach.

Rimon, Uri, E-mail: rimonu@sheba.health.gov.il; Bensaid, Paul, E-mail: paulbensaid@hotmail.com; Golan, Gil, E-mail: gilgolan201@gmail.com; Garniek, Alexander, E-mail: garniek@gmail.com; Khaitovich, Boris, E-mail: borislena@012.net.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Diagnostic Imaging (Israel); Dotan, Zohar, E-mail: Zohar.Dotan@sheba.health.gov.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Urology (Israel); Konen, Eli, E-mail: Eli.Konen@sheba.health.gov.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Diagnostic Imaging (Israel)

2011-06-15

93

Optimal filter in the frequency-time mixed domain to extract moving object

NASA Astrophysics Data System (ADS)

There are occasions when a moving object must be extracted from an image sequence, for example in remote sensing and robot vision. The process requires highly accurate extraction and a simple implementation. In this paper, we propose a design method for an optimal filter in the frequency-time mixed domain. Frequency-selective filters for dynamic images are usually designed in the 3-D frequency domain, but their design is difficult because of the large number of parameters involved. By using the frequency-time mixed domain (MixeD), which consists of a 2-D frequency domain and a 1-D time domain, filter design becomes easier. Usually, however, the desired and noise frequency components of an image tend to concentrate near the origin of the frequency domain, so conventional frequency-selective filters have difficulty distinguishing them. We propose an optimal filter in the MixeD in the least-mean-square-error sense. First, we apply a 2-D spatial Fourier transform to the dynamic images; then, at each point in the 2-D frequency domain, the designed FIR filter is applied to the 1-D time signal. In designing the optimal filter, we use the following information to determine its characteristics: (1) the number of finite frames of input images; (2) the velocity vector of the desired signal; (3) the power spectrum of the noise signal. Signals constructed from this information enter the evaluation function, which determines the filter coefficients. After filtering, a 2-D inverse Fourier transform is applied to obtain the extracted image.
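Because the 2-D Fourier transform is linear, applying a temporal FIR per frequency bin (the MixeD pipeline) is equivalent to applying the same FIR per pixel; the toy sketch below checks that equivalence with naive DFTs on tiny frames (an illustration of the pipeline, not the optimal design):

```python
import cmath

def dft2(f):
    """Naive 2-D DFT of a small image (list of rows)."""
    m, n = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / m + v * y / n))
                 for x in range(m) for y in range(n))
             for v in range(n)] for u in range(m)]

def idft2(F):
    m, n = len(F), len(F[0])
    return [[sum(F[u][v] * cmath.exp(2j * cmath.pi * (u * x / m + v * y / n))
                 for u in range(m) for v in range(n)).real / (m * n)
             for y in range(n)] for x in range(m)]

# three 2x2 frames of a "moving" bright pixel, and an illustrative temporal FIR
frames = [[[9, 0], [0, 0]], [[0, 9], [0, 0]], [[0, 0], [9, 0]]]
taps = [0.5, 0.3, 0.2]

# MixeD: 2-D DFT each frame, FIR along time at each frequency bin, inverse DFT
spectra = [dft2(f) for f in frames]
mixed = [[sum(taps[k] * spectra[len(frames) - 1 - k][u][v] for k in range(len(taps)))
          for v in range(2)] for u in range(2)]
out = idft2(mixed)
```

The optimal design in the paper chooses different FIR coefficients at each spatial-frequency point; the fixed `taps` above are only a placeholder to show where that per-bin filtering happens.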

Shinmura, Hideyuki; Hiraoka, Kazuhiro; Hamada, Nozomu

2000-12-01

94

Design of Optimal Decimation and Interpolation Filters

A method is presented for the design of optimal decimation and interpolation filters that can be utilized in a TEMG type system with a decimation filter and an interpolation filter. Simulation results are presented to demonstrate

Lu, Wu-Sheng

95

NASA Astrophysics Data System (ADS)

A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.

Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

2013-07-01

96

GSVD-based optimal filtering for single and multimicrophone speech enhancement

A generalized singular value decomposition (GSVD) based algorithm is proposed for enhancing multimicrophone speech signals degraded by additive colored noise. This GSVD-based multimicrophone algorithm can be considered to be an extension of the single-microphone signal subspace algorithms for enhancing noisy speech signals and amounts to a specific optimal filtering problem when the desired response signal cannot be observed. The optimal

Simon Doclo; Marc Moonen

2002-01-01

97

Write Strategy Optimization Method with Two-Step Search for Blu-ray Disc Recording

NASA Astrophysics Data System (ADS)

A new write strategy (WS) optimization method with a two-step search process for Blu-ray Disc (BD) recording is developed to shorten the optimization time. The method is realized by a WS optimization system consisting of an optical pickup, a disc tester, and the WS optimization algorithm. The optimization is executed as a two-step search along a mathematical axis that is experimentally derived from the sample WS parameters. Experimentally, the optimization time is reduced by nearly two-thirds compared with the conventional method performed by experts. All jitter values of the playback signal derived from the recorded marks are smaller than the 7% target value, and the effectiveness of the new method is thus experimentally confirmed.

Takeshita, Nobuo; Kanatake, Yusuke; Kishigami, Tomo; Ikuta, Koichi

2010-08-01

98

This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

Sun, W Y [Lawrence Berkeley Lab., CA (United States)

1993-04-01

99

An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (≤ 32 cm(-1)) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm(-1). Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

Liu, Jui-Nung; Schulmerich, Matthew V; Bhargava, Rohit; Cunningham, Brian T

2011-11-21


101

DETECTING CANDIDATE COSMIC BUBBLE COLLISIONS WITH OPTIMAL FILTERS

…follow-up analysis. The standard ΛCDM concordance cosmological model is now well supported. [Figure 1, in µK: panels (a) and (b) show the radial profile and spherical plot, respectively, of a bubble collision signature; panel (c) shows the matched filter.]

McEwen, Jason

102

NASA Astrophysics Data System (ADS)

Facial recognition is a difficult task due to variations in pose and facial expressions, as well as presence of noise and clutter in captured face images. In this work, we address facial recognition by means of composite correlation filters designed with multi-objective combinatorial optimization. Given a large set of available face images having variations in pose, gesticulations, and global illumination, a proposed algorithm synthesizes composite correlation filters by optimization of several performance criteria. The resultant filters are able to reliably detect and correctly classify face images of different subjects even when they are corrupted with additive noise and nonhomogeneous illumination. Computer simulation results obtained with the proposed approach are presented and discussed in terms of efficiency in face detection and reliability of facial classification. These results are also compared with those obtained with existing composite filters.

Cuevas, Andres; Diaz-Ramirez, Victor H.; Kober, Vitaly; Trujillo, Leonardo

2014-09-01

103

A non-linear optimal predictive control of a shunt active power filter

In this paper a nonlinear multiple-input multiple-output (MIMO) predictive control using optimal control approach is applied to control the currents of a three-phase three-wire voltage source inverter used as a shunt active power filter (AF). The nonlinear active filter state-space representation model is elaborated in the synchronous d-q frame rotating at the mains fundamental frequency. This state-space model is seen

Nassar Mendalek; Farhat Fnaiech; Kamal Al-Haddad; L.-A. Dessaint

2002-01-01

104

Filter design via inner-outer factorization: Comments on "Optimal deconvolution filter"

…the previously complicated step of finding the causal bracket {·}+ can now be automated. In [5], Chen and Peng suggest… As noted above, evaluation of causal brackets {·}+ corresponds to the solution of a Diophantine equation.

105

Design, optimization and fabrication of an optical mode filter for integrated optics.

We present the design, optimization, fabrication and characterization of an optical mode filter, which attenuates the snaking behavior of light caused by a lateral misalignment of the input optical fiber relative to an optical circuit. The mode filter is realized as a bottleneck section inserted in an optical waveguide in front of a branching element and is designed with Bézier curves. Its effect, which depends on the optical state of polarization, is experimentally demonstrated by investigating the equilibrium of an optical splitter, which is greatly improved, although only in TM mode. The measured optical losses induced by the filter are 0.28 dB. PMID:19399117

Magnin, Vincent; Zegaoui, Malek; Harari, Joseph; François, Marc; Decoster, Didier

2009-04-27

106

Optimal Weights Mixed Filter for Removing Mixture of Gaussian and Impulse Noises

Based on the characteristics of Gaussian noise, we modify the Rank-Ordered Absolute Differences (ROAD) statistic into the Rank-Ordered Absolute Differences for mixtures of Gaussian and impulse noise (ROADG), which detects impulse noise more effectively when it is mixed with Gaussian noise. By suitably combining the ROADG with the Optimal Weights Filter (OWF), we obtain a new method for dealing with mixed noise, called the Optimal Weights Mixed Filter (OWMF). Simulation results show that the method effectively removes the mixed noise.
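The original ROAD statistic that ROADG modifies can be sketched directly: for each pixel, sum the m smallest absolute differences to its eight neighbours, so impulses score high while flat regions score near zero. A minimal illustration (ours, not the authors' code):

```python
def road(img, x, y, m=4):
    """Sum of the m smallest absolute differences to the 8-neighbourhood."""
    diffs = sorted(abs(img[x][y] - img[x + dx][y + dy])
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
    return sum(diffs[:m])

# flat 5x5 patch with a single impulse at the centre
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
print(road(img, 2, 2), road(img, 1, 1))  # 760 0
```

The impulse pixel scores 4 * |200 - 10| = 760, while its neighbour scores 0 because the impulse is excluded by taking only the four smallest differences.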

Jin, Qiyu; Liu, Quansheng

2012-01-01

107

Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to above problems, and can be a useful tool in earthquake engineering.

Safak, Erdal

1989-01-01

108

Implementation of high step-up solar power optimizer for DC micro grid application

This paper proposes a novel high step-up solar power optimizer (SPO) that efficiently harvests maximum energy from a photovoltaic (PV) panel and delivers it to a DC micro-grid. It combines coupled-inductor and switched-capacitor technologies to achieve a high step-up voltage gain. The leakage-inductance energy of the coupled inductor can be recycled to reduce the voltage stress and power losses. Therefore, low voltage

Shih-Ming Chen; Ke-Ren Hu; Tsorng-Juu Liang; Yi-Hsun Hsieh; Lung-Sheng Yang

2012-01-01

109

In this paper we modify the original primal-dual interior-point filter method proposed in (18) for the solution of nonlinear programming problems. We introduce two new optimality filter entries that are based on the objective function, and thus better suited for the purposes of minimization, and we propose conditions for using inexact Hessians. We show that the global convergence properties of the method remain

RENATA SILVA; MICHAEL ULBRICH; STEFAN ULBRICH; N. VICENTE

110

Using the innovation analysis method in the time domain, based on the autoregressive moving average (ARMA) innovation model, this paper presents a unified white-noise estimation theory that includes both input and measurement white-noise estimators, together with a new steady-state optimal state estimation theory. Non-recursive optimal state estimators are given, whose recursive version yields a steady-state Kalman filter, where

Zi-Li Deng; Huan-Shui Zhang; Shu-Jun Liu; Lu Zhou

1996-01-01

111

Comparison of optimal and local search methods for designing finite wordlength FIR digital filters

This paper presents a comparison between an optimal (branch-and-bound) algorithm and a suboptimal (local search) algorithm for the design of finite wordlength finite-impulse-response (FIR) digital filters. Experimental results are described for 11 examples of length 15 to 35. It is concluded that when computer resources are not available for the optimal method, it is still worth applying the local search
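A local-search pass of the kind compared here can be sketched as follows: start from the rounded integer coefficients and repeatedly try ±1-LSB moves on each coefficient, accepting any move that lowers the peak error on a frequency grid (illustrative 9-tap prototype and error grid, not the paper's 11 examples):

```python
import cmath, math

SCALE = 2 ** 6          # 6 fractional bits
# frequency grid in units of pi: passband target 1.0, stopband target 0.0
GRID = [(w, 1.0) for w in (0.0, 0.05, 0.10, 0.15, 0.20)] + \
       [(w, 0.0) for w in (0.35, 0.45, 0.60, 0.80, 1.00)]

def peak_error(ints):
    """Max deviation of the quantized filter from the ideal response on the grid."""
    err = 0.0
    for w, d in GRID:
        h_w = abs(sum(c / SCALE * cmath.exp(-1j * math.pi * w * n)
                      for n, c in enumerate(ints)))
        err = max(err, abs(h_w - d))
    return err

# infinite-wordlength prototype: 9-tap truncated sinc lowpass, cutoff 0.25*pi
fc = 0.125
h = [2 * fc if n == 4 else math.sin(2 * math.pi * fc * (n - 4)) / (math.pi * (n - 4))
     for n in range(9)]

rounded = [round(t * SCALE) for t in h]
best, ints = peak_error(rounded), rounded[:]
improved = True
while improved:                      # local search: try +/-1 LSB on each coefficient
    improved = False
    for i in range(len(ints)):
        for step in (-1, 1):
            trial = ints[:]
            trial[i] += step
            e = peak_error(trial)
            if e < best:
                ints, best, improved = trial, e, True
```

The branch-and-bound method instead explores the integer space exhaustively with pruning, guaranteeing the global optimum at far higher computational cost; the search above can only reach a local minimum of the grid error.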

D. Kodek; K. Steiglitz

1981-01-01

112

Dual-mode stepped-impedance ring resonator for bandpass filter applications

It is well known that two orthogonal resonant modes exist within a one-wavelength ring resonator. In this paper, we focus on a ring resonator possessing an impedance step as a form of perturbation. A convenient analyzing method for obtaining the resonance characteristics of this resonator structure is presented. Furthermore, generation of attenuation poles obtained by the dual-mode ring resonator is

Michiaki Matsuo; Hiroyuki Yabuki; Mitsuo Makimoto

2001-01-01

113

NASA Technical Reports Server (NTRS)

Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm that allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification is that the location of the pilot's vestibular system must be taken into account, rather than only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
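Filtering at the pilot's head rather than at the platform centroid amounts to transforming the rigid-body acceleration to the head location before cueing. A minimal sketch of that kinematic step, with hypothetical values:

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def accel_at_point(a_c, alpha, omega, r):
    """Rigid-body acceleration at offset r from the centroid:
    a = a_c + alpha x r + omega x (omega x r)."""
    tang = cross(alpha, r)                 # tangential term
    cent = cross(omega, cross(omega, r))   # centripetal term
    return [a_c[i] + tang[i] + cent[i] for i in range(3)]

# pure yaw rotation at 1 rad/s, head 1 m forward of the centroid:
# the head feels a 1 m/s^2 centripetal acceleration toward the centre
print(accel_at_point([0, 0, 0], [0, 0, 0], [0, 0, 1], [1, 0, 0]))  # [-1, 0, 0]
```

This acceleration, not the centroid acceleration, is what the vestibular model in the cueing algorithm should see.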

Zaychik, Kirill B.; Cardullo, Frank M.

2012-01-01

114

Choquet Integrals and OWA Criteria as a Natural (and Optimal) Next Step After Linear

In areas ranging from multi-criteria decision making to multi-agent decision making, several criteria must be combined. The simplest way to combine these criteria is to use linear aggregation. In many practical situations, linear
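An OWA (ordered weighted averaging) criterion, one of the aggregations named in the title, applies its weights to the sorted argument values, so min, max, and the arithmetic mean are all special cases. A minimal sketch:

```python
def owa(weights, values):
    """Ordered weighted average: weights applied to values sorted in descending order."""
    assert abs(sum(weights) - 1) < 1e-12 and len(weights) == len(values)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

x = [3.0, 1.0, 2.0]
print(owa([1, 0, 0], x))           # 3.0 (max)
print(owa([0, 0, 1], x))           # 1.0 (min)
print(owa([1/3, 1/3, 1/3], x))     # ~2.0 (mean)
```

Unlike a fixed linear aggregation, the weight a value receives depends on its rank, not on which criterion produced it; the Choquet integral generalizes this further by weighting arbitrary coalitions of criteria.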

Ward, Karen


116

In a recent paper a novel approach was presented for the restoration of canonical signed-digit (CSD) numbers to their correct format after the application of crossover and mutation operations in genetic algorithms. This paper is concerned with the development of a new technique for the optimization of FIR digital filters over the CSD coefficient space based on genetic algorithms. This

A. T. G. Fuller; B. Nowrouzian; F. Ashrafzadeh

1998-01-01

117

Approximate String Membership Checking: A Multiple Filter, Optimization-Based Approach

We consider the approximate string membership checking (ASMC) problem of extracting all the strings or substrings in a document that approximately match some string in a given dictionary. To solve
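A minimal ASMC sketch, assuming a Levenshtein distance threshold and using a cheap length filter before the expensive dynamic-programming check (hypothetical dictionary; the paper's multiple-filter optimization is not reproduced here):

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def approx_members(tokens, dictionary, k=1):
    """Tokens within edit distance k of some dictionary string."""
    hits = []
    for t in tokens:
        for d in dictionary:
            # length filter: |len(t) - len(d)| > k rules out a match outright
            if abs(len(t) - len(d)) <= k and edit_distance(t, d) <= k:
                hits.append(t)
                break
    return hits

print(approx_members(["filtre", "optimal", "xyz"], {"filter", "optimally"}, k=2))
# ['filtre', 'optimal']
```

The point of a multiple-filter approach is exactly this structure: cheap filters prune most candidates so the quadratic distance computation runs on few of them, and the choice and ordering of filters is itself an optimization problem.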

Barman, Siddharth

118

Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin

…the error associated with histological parameters characterizing normal skin tissue. These parameters can be recovered from digital images of the skin using a physics-based model of skin coloration. The relationship

Claridge, Ela

119

Efficient electromagnetic optimization of microwave filters and multiplexers using rational models

A method is presented for the efficient optimization of microwave filters and multiplexers designed from an ideal prototype. The method is based on the estimation of a rational function adjusted to a reduced number of samples of the microwave device response obtained either through electromagnetic analysis or measurements. From this rational function, a circuital network having the previously known topology

Alejandro García-Lampérez; Sergio Llorente-Romano; Magdalena Salazar-Palma; Tapan K. Sarkar

2004-01-01

120

Optimization of self-acting step thrust bearings for load capacity and stiffness.

NASA Technical Reports Server (NTRS)

A linearized analysis of a finite-width rectangular step thrust bearing is presented. The dimensionless load capacity and stiffness are expressed in terms of a Fourier cosine series and were found to be functions of the dimensionless bearing number, the pad length-to-width ratio, the film thickness ratio, the step location parameter, and the feed groove parameter. The equations obtained in the analysis were verified, and the assumptions imposed were substantiated by comparing the results with an existing exact solution for the infinite-width bearing. A digital computer program was developed that determines the optimal bearing configuration for maximum load capacity or stiffness. Simple design curves are presented, with results shown for both compressible and incompressible lubrication. Through a parameter transformation, the results are directly usable in designing optimal step-sector thrust bearings.

Hamrock, B. J.

1972-01-01

121

Optimized one-step preparation of a bioactive natural product, guaiazulene-2,9-dione

NASA Astrophysics Data System (ADS)

We previously isolated a natural product, guaiazulene-2,9-dione, showing strong antibacterial activity against Vibrio anguillarum, from a gorgonian Muriceides collaris collected in the South China Sea. In this work, guaiazulene-2,9-dione was quantitatively synthesized with an optimized one-step bromine oxidation method using guaiazulene as the raw material. The key reaction conditions, including reaction time and temperature, drop rate of bromine, concentration of the aqueous THF solution, respective molar ratios of guaiazulene to bromine and acetic acid, and concentration of guaiazulene in the aqueous THF solution, were investigated individually at five levels each for optimization. Combined with a verification test showing the absolute yield of each optimization step, the final optimal condition was determined: when a solution of 0.025 mmol mL-1 guaiazulene in 80% aqueous THF was treated with four volumes of bromine at a drop rate of 0.1 mL min-1 and four volumes of acetic acid at -5°C for three hours, the yield of guaiazulene-2,9-dione was 23.72%. This is the first report of an optimized one-step synthesis providing a convenient method for the large-scale preparation of guaiazulene-2,9-dione.

Cheng, Canling; Li, Pinglin; Wang, Wei; Shi, Xuefeng; Zhang, Gang; Zhu, Hongyan; Wu, Rongcui; Tang, Xuli; Li, Guoqiang

2014-12-01

122

Fabrication-Tolerant Microstrip Quarter-Wave Stepped-Impedance Resonator Filter

The etching error causes serious frequency drift of a microstrip stepped-impedance resonator (SIR). This paper proposes a novel microstrip quarter-wave SIR structure, which is insensitive to fabrication tolerances. Inserting several ground strips in the low-impedance section of the resonator keeps the impedance ratio almost constant in spite of inaccurate fabrication. As an additional benefit, the resonator size would be miniaturized

Cheng-Hsien Liang; Chin-Hsiung Chen; Chi-Yang Chang

2009-01-01

123

Design and optimization of stepped austempered ductile iron using characterization techniques

Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. Austenitization and conventional austempering time was selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and high carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering in contrast to conventional austempering, on which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused important reinforcement effect. • Carbide precipitation was absent during stepped treatment.

Hernández-Rivera, J.L., E-mail: jose.hernandez@cimav.edu.mx [Centro de Investigación en Materiales Avanzados-Laboratorio Nacional de Nanotecnología, Miguel de Cervantes 120, Z.C. 31109, Chihuahua (Mexico); Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J. [Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, Sierra Leona 550, Lomas 2a. sección, Z.C. 78210, San Luis Potosí (Mexico)

2013-09-15

124

Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

NASA Technical Reports Server (NTRS)

Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
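The roughly-halved covariance follows from the information-weighted combination of the forward and backward filters; a scalar sketch (illustrative numbers, not EUVE data):

```python
def combine(xf, pf, xb, pb):
    """Optimally weighted average of independent forward/backward estimates.
    Information (inverse covariance) adds; the estimate is covariance-weighted."""
    p = 1.0 / (1.0 / pf + 1.0 / pb)
    x = p * (xf / pf + xb / pb)
    return x, p

# equal-quality filters: the smoothed covariance is half the filter covariance
x, p = combine(1.0, 2.0, 3.0, 2.0)
print(x, p)  # 2.0 1.0
```

When the forward and backward variances are equal, the combined estimate is the midpoint and the variance is exactly halved, which is the "roughly halves the error covariance" statement above in its simplest form.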

Sedlak, J.

1994-01-01

125

Optimal filter framework for automated, instantaneous detection of lesions in retinal images.

Automated detection of lesions in retinal images is a crucial step towards efficient early detection, or screening, of large at-risk populations. In particular, the detection of microaneurysms, usually the first sign of diabetic retinopathy (DR), and the detection of drusen, the hallmark of age-related macular degeneration (AMD), are of primary importance. In spite of substantial progress made, detection algorithms still produce 1) false positives-target lesions are mixed up with other normal or abnormal structures in the eye, and 2) false negatives-the large variability in the appearance of the lesions causes a subset of these target lesions to be missed. We propose a general framework for detecting and characterizing target lesions almost instantaneously. This framework relies on a feature space automatically derived from a set of reference image samples representing target lesions, including atypical target lesions, and those eye structures that are similar looking but are not target lesions. The reference image samples are obtained either from an expert- or a data-driven approach. Factor analysis is used to derive the filters generating this feature space from reference samples. Previously unseen image samples are then classified in this feature space. We tested this approach by training it to detect microaneurysms. On a set of images from 2739 patients including 67 with referable DR, DR detection area under the receiver-operating characteristic curve (AUC) was comparable (AUC=0.927) to our previously published red lesion detection algorithm (AUC=0.929). We also tested the approach on the detection of AMD, by training it to differentiate drusen from Stargardt's disease lesions, and achieved an AUC=0.850 on a set of 300 manually detected drusen and 300 manually detected flecks. The entire image processing sequence takes less than a second on a standard PC compared to minutes in our previous approach, allowing instantaneous detection. 
Free-response receiver-operating characteristic analysis showed the superiority of this approach over a framework where false positives and the atypical lesions are not explicitly modeled. A greater performance was achieved by the expert-driven approach for DR detection, where the designer had sound expert knowledge. However, for both problems, a comparable performance was obtained for both expert- and data-driven approaches. This indicates that annotation of a limited number of lesions suffices for building a detection system for any type of lesion in retinal images, if no expert-knowledge is available. We are studying whether the optimal filter framework also generalizes to the detection of any structure in other domains. PMID:21292586

Quellec, Gwénolé; Russell, Stephen R; Abramoff, Michael D

2011-02-01

126

Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design

NASA Astrophysics Data System (ADS)

This article presents a control strategy for a three-phase grid interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics in the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. The startup transients can also be controlled by the implementation of this proposed control strategy; in addition, the optimized LCL filter has lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the Total Harmonic Distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment, and the results are analyzed.

Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa

2013-09-01

127

Design Optimization of Vena Cava Filters: An application to dual filtration devices

Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

Singer, M A; Wang, S L; Diachin, D P

2009-12-03

128

The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates’ weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450

Donner, René; Menze, Bjoern H.; Bischof, Horst; Langs, Georg

2013-01-01

129

Preparation and optimization of the laser thin film filter

NASA Astrophysics Data System (ADS)

A co-colored thin film device for a laser-induced damage threshold test system is presented in this paper, enabling the laser-induced damage threshold tester to operate in the 532 nm and 1064 nm bands. Using TFC simulation software, a film system with high reflection, high transmittance and resistance to laser damage is designed and optimized. The film is deposited using a thermal evaporation technique, the optical properties of the coating and its laser-induced damage performance are tested, and the reflectance, transmittance and damage threshold are measured. The results show that the measured parameters (reflectance R >= 98% @ 532 nm, transmittance T >= 98% @ 1064 nm, laser-induced damage threshold LIDT >= 4.5 J/cm2) meet the design requirements, which lays the foundation for a multifunctional laser-induced damage threshold tester.

Su, Jun-hong; Wang, Wei; Xu, Jun-qi; Cheng, Yao-jin; Wang, Tao

2014-08-01

130

Optimization of bandpass optical filters based on TiO2 nanolayers

NASA Astrophysics Data System (ADS)

The design and realization of high-quality bandpass optical filters are often very difficult tasks due to the strong correlation of the optical index of dielectric thin films to their final thickness, as observed in many industrial deposition processes. We report on the optimization of complex optical filters in the visible and NIR spectral ranges as realized by ion beam-assisted electron beam deposition of silica and titanium oxide multilayers. We show that this process always leads to amorphous films prior to thermal annealing. On the contrary, the optical dispersion of TiO2 nanolayers is highly dependent on their thickness, while this dependence vanishes for layers thicker than 100 nm. We demonstrate that accounting for this nonlinear dependence of the optical index is both very important and necessary in order to obtain high-quality optical filters.

Démarest, Nathalie; Deubel, Damien; Keromnès, Jean-Claude; Vaudry, Claude; Grasset, Fabien; Lefort, Ronan; Guilloux-Viry, Maryline

2015-01-01

131

Optimal Design of CSD Coefficient FIR Filters Subject to Number of Nonzero Digits

NASA Astrophysics Data System (ADS)

In a hardware implementation of FIR (Finite Impulse Response) digital filters, it is desirable to reduce the total number of nonzero digits used to represent the filter coefficients. The design of FIR filters with CSD (Canonic Signed Digit) representation, which is an efficient representation for reducing the number of multiplier units, is often formulated as a 0-1 combinatorial problem. In such a problem, some difficult constraints prevent a straightforward linearization. Although many kinds of heuristic approaches have been applied to solve the problem, the solutions obtained in this manner cannot guarantee optimality. In this paper, we formulate the design problem as a 0-1 mixed integer linear programming problem and solve it using the branch and bound technique, a powerful method for solving integer programming problems. Several design examples demonstrate the efficient performance of the proposed method.
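The CSD representation referred to above encodes each coefficient with digits in {-1, 0, +1} such that no two nonzero digits are adjacent, minimizing the nonzero-digit count. A minimal sketch of the standard encoding (this is the representation itself, not the paper's 0-1 MILP design formulation):

```python
def to_csd(value):
    """Convert a positive integer coefficient to canonic signed digit
    (CSD) form: digits in {-1, 0, +1}, least significant first, with
    no two adjacent nonzero digits."""
    digits = []
    v = value
    while v != 0:
        if v & 1:
            d = 2 - (v & 3)   # remainder 1 mod 4 -> +1, remainder 3 -> -1
            digits.append(d)
            v -= d            # cancel the emitted digit
        else:
            digits.append(0)
        v //= 2
    return digits

def nonzero_digits(digits):
    """Cost metric: each nonzero CSD digit costs one adder/subtractor."""
    return sum(1 for d in digits if d != 0)

# 119 = 0b1110111 has six nonzero bits in plain binary,
# but its CSD form (1 0 0 0 -1 0 0 -1, MSB first) has only three.
csd = to_csd(119)
```

This illustrates why CSD coefficients reduce multiplier hardware: the number of shift-add terms equals the number of nonzero digits, which CSD provably minimizes among signed-digit representations.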

Ozaki, Yuichi; Suyama, Kenji

132

An adaptive-step primal-dual interior point algorithm for linear optimization

A common feature shared by most practical algorithms in interior point methods is the use of Mehrotra’s predictor–corrector algorithm in [S. Mehrotra, On the implementation of a (primal-dual) interior point method, SIAM Journal on Optimization 2 (1992) 575–601.] where the predictor step is never performed but it is used only to calculate an adaptive update, and thus instead of a

Min Kyung Kim; Yong-Hoon Lee; Gyeong-Mi Cho

2009-01-01

133

An optimized method of harvesting vibrational energy with a piezoelectric element using a step-down DC-DC converter is presented. In this configuration, the converter regulates the power flow from the piezoelectric element to the desired electronic load. Analysis of the converter in discontinuous current conduction mode results in an expression for the duty cycle-power relationship. Using parameters of the mechanical system,

Geffrey K. Ottman; Heath F. Hofmann; George A. Lesieutre

2003-01-01

134

Optimization of single-step tapering amplitude and energy detuning for high-gain FELs

NASA Astrophysics Data System (ADS)

We put forward a method to optimize the single-step tapering amplitude of undulator strength and initial energy tuning of electron beam to maximize the saturation power of high gain free-electron lasers (FELs), based on the physics of longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the linac coherent light source and the Tesla test facility.

Li, He-Ting; Jia, Qi-Ka

2015-01-01

135

Background Malaria remains a major cause of morbidity and mortality worldwide. Flow cytometry-based assays that take advantage of fluorescent protein (FP)-expressing malaria parasites have proven to be valuable tools for quantification and sorting of specific subpopulations of parasite-infected red blood cells. However, identification of rare subpopulations of parasites using green fluorescent protein (GFP) labelling is complicated by autofluorescence (AF) of red blood cells and low signal from transgenic parasites. It has been suggested that cell sorting yield could be improved by using filters that precisely match the emission spectrum of GFP. Methods Detection of transgenic Plasmodium falciparum parasites expressing either tdTomato or GFP was performed using a flow cytometer with interchangeable optical filters. Parasitaemia was evaluated using different optical filters and, after optimization of optics, the GFP-expressing parasites were sorted and analysed by microscopy after cytospin preparation and by imaging cytometry. Results A new approach to evaluate filter performance in flow cytometry using a two-dimensional dot plot was developed. By selecting optical filters with a narrow bandpass (BP) and a maximum of filter emission close to the GFP emission maximum in the FL1 channel (510/20, 512/20 and 517/20; dichroics 502LP and 466LP), AF was markedly decreased and the signal-to-background ratio improved dramatically. Sorting of GFP-expressing parasite populations in infected red blood cells at 90 or 95% purity with these filters resulted in a 50-150% increased yield when compared to the standard filter set-up. The purity of the sorted population was confirmed using imaging cytometry and microscopy of cytospin preparations of sorted red blood cells infected with transgenic malaria parasites. 
Discussion Filter optimization is particularly important for applications where the FP signal and the percentage of positive events are relatively low, such as analysis of parasite-infected samples with the intention of gene-expression profiling and analysis. The approach outlined here results in a substantially improved yield of GFP-expressing parasites, and requires less sorting time in comparison to standard methods. It is anticipated that this protocol will be useful for a wide range of applications involving rare events. PMID:22950515

2012-01-01

136

Optimization of a Permanent Step Mold Design for Mg Alloy Castings

NASA Astrophysics Data System (ADS)

The design of a permanent Step mold for the evaluation of the mechanical properties of light alloys has been reviewed. An optimized Step die with a different runner and gating systems is proposed to minimize the amount of casting defects. Numerical simulations have been performed to study the filling and solidification behavior of an AM60B alloy to predict the turbulence of the melt and the microshrinkage formation. The results reveal how a correct design of the trap in the runners prevents the backwave of molten metal, which could eventually reverse out and enter the die cavity. The tapered runner in the optimized die configuration gently leads the molten metal to the ingate, avoiding turbulence and producing a balanced die cavity filling. The connection between the runner system and the die cavity by means of a fan ingate produces a laminar filling in contrast with a finger-type ingate. Solidification defects such as shrinkage-induced microporosity, numerically predicted through a dimensionless version of the Niyama criterion, are considerably reduced in the optimized permanent Step mold.

Timelli, Giulio; Capuzzi, Stefano; Bonollo, Franco

2015-02-01

137

A rapid dried-filter paper plasma-spot analytical method was developed to quantify organic acids, amino acids, and glycines simultaneously in a two-step derivatization procedure with good sensitivity and specificity. The new method involves a two-step trimethylsilyl (TMS) - trifluoroacyl (TFA) derivatization procedure using GC-MS/selective ion monitoring (GC-MS/SIM). The dried-filter paper plasma was fortified with an internal standard (tropate) as well

Hye-Ran Yoon

2007-01-01

138

Choosing the Optimal Clipping Ratio for Clipping and Filtering PAR-Reduction Scheme in OFDM

Clipping and Filtering on the Oversampled signal samples (CFO) is a simple and effective peak-to-average power ratio (PAR) reduction method for OFDM signals. However, the PAR-reduction performance and the bit error ratio (BER) performance of CFO conflict with each other. An analysis framework is proposed to select the optimum clipping ratio (CR) which optimizes the consumed power-to-noise ratio
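The clipping step can be sketched as follows. The parameters here (64 QPSK subcarriers, 4x oversampling via a zero-padded IFFT, CR defined as clipping-threshold power over mean power in dB) are illustrative assumptions, and the subsequent filtering stage of CFO is omitted:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, cr_db):
    """Amplitude-clip x at the threshold implied by the clipping ratio:
    samples above the threshold keep their phase but lose magnitude."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2) * 10 ** (cr_db / 10))
    mag = np.abs(x)
    scale = np.where(mag > a_max, a_max / mag, 1.0)
    return x * scale

rng = np.random.default_rng(0)
# One OFDM symbol: 64 QPSK subcarriers, 4x oversampled (zero-padded IFFT)
data = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
spectrum = np.concatenate([data[:32], np.zeros(192), data[32:]])
x = np.fft.ifft(spectrum) * np.sqrt(256)
y = clip(x, cr_db=3.0)
```

Lowering `cr_db` reduces the PAR further but distorts more samples, which is exactly the PAR/BER trade-off the abstract describes.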

Hua Yu; Gang Wei

2007-01-01

139

Polynomial systems approach to continuous-time weighted optimal linear filtering and prediction

The solution of the optimal weighted minimum-variance estimation problem is considered using a polynomial matrix description for the continuous-time linear system description, which allows for the possible presence of transport delays on the measurements. The filter or predictor is given by the solution of two diophantine equations and is equivalent (in the delay-free case) to the state equation form of

M. J. Grimble

1998-01-01

140

Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

NASA Technical Reports Server (NTRS)

Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.

Downie, John D.

1992-01-01

141

Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

NASA Technical Reports Server (NTRS)

Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.

Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

1999-01-01

142

Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

NASA Technical Reports Server (NTRS)

Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.

Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

1998-01-01

143

Objectives Quantifying testicular homogenization resistant spermatid heads (HRSH) is a powerful indicator of spermatogenesis. These counts have traditionally been performed manually using a hemocytometer, but this method can be time consuming and biased. We aimed to develop a protocol to reduce debris for the application of automated counting, which would allow for efficient and unbiased quantification of rat HRSH. Findings We developed a filter-lysis protocol that effectively removes debris from rat testicular homogenates. After filtering and lysing the homogenates, we found no statistical differences between manual (classic and filter-lysis) and automated (filter-lysis) counts using one-way ANOVA with Bonferroni’s multiple comparison test. In addition, Pearson’s correlation coefficients were calculated to compare the counting methods and there was a strong correlation between the classic manual counts and the filter-lysis manual (r = 0.85, p = 0.002) and the filter-lysis automated (r = 0.89, p = 0.0005) counts. We also tested the utility of the automated method in a low dose exposure model known to decrease HRSH. Adult Fischer 344 rats exposed to 0.33% 2,5-hexanedione (HD) in the drinking water for 12 weeks demonstrated decreased body (p = 0.02) and testes (p = 0.002) weights. In addition, there was a significant reduction in the number of HRSH per testis (p = 0.002) when compared to control. Conclusions A filter-lysis protocol was optimized to purify rat testicular homogenates for automated HRSH counts. Automated counting systems yield unbiased data and can be applied to detect changes in the testis after low dose toxicant exposure. PMID:22240558

Pacheco, Sara E.; Anderson, Linnea M.; Boekelheide, Kim

2013-01-01

144

Noise reduction in biological step signals: application to saccadic EOG.

A weighted filter is described for noise reduction in nonrecurrent step signals, where adaptive filtering cannot be applied. An optimal correction of a conventional finite impulse response (FIR) filter is achieved by using a priori knowledge of the noise variance and a continuous estimation of the error signal's power. The weighted filter provides an optimal compromise between noise filtering and distortionless tracking. The prior knowledge required is that of the noise power and the lowest frequency in the noise spectrum. Application of the weighted filter to the saccadic electro-oculogram (EOG) results in better estimations of saccade duration and velocity. PMID:2287177
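The compromise the abstract describes, between noise suppression and distortionless tracking of the step edge, can be illustrated with a plain moving-average FIR filter on a synthetic noisy step. This is a generic sketch of the trade-off, not the authors' weighted filter:

```python
import numpy as np

# Synthetic saccade-like step: baseline, abrupt jump at sample 100, noise.
rng = np.random.default_rng(1)
n = 200
clean = np.where(np.arange(n) < 100, 0.0, 1.0)
noisy = clean + 0.1 * rng.standard_normal(n)

def fir_smooth(x, taps):
    """Moving-average FIR filter: longer windows suppress more noise
    but smear the step transition (distort the tracked edge)."""
    h = np.ones(taps) / taps
    return np.convolve(x, h, mode="same")

short = fir_smooth(noisy, 5)    # follows the edge closely, noisier
long_ = fir_smooth(noisy, 31)   # much smoother, but blurs the transition
```

A weighted filter in the spirit of the paper would shift between these two regimes sample by sample, weighting toward heavy smoothing on the flat segments and toward light smoothing near the detected edge.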

Bankman, I N; Thakor, N V

1990-11-01

145

The use of double base number system (DBNS) multiplier coefficients reduces the complexity and power consumption in the hardware implementation of FIR digital filters. The use of genetic algorithms for optimization of the constituent DBNS multiplier coefficients can further reduce the complexity of the digital filter. This paper presents a novel genetic algorithm based on correlative roulette selection (CRS) for

Sai Mohan Kilambi; Behrouz Nowrouzian

2006-01-01

146

Optimized particle-mesh Ewald/multiple-time step integration for molecular dynamics simulations

NASA Astrophysics Data System (ADS)

We develop an efficient multiple time step (MTS) force splitting scheme for biological applications in the AMBER program in the context of the particle-mesh Ewald (PME) algorithm. Our method applies a symmetric Trotter factorization of the Liouville operator based on the position-Verlet scheme to Newtonian and Langevin dynamics. Following a brief review of the MTS and PME algorithms, we discuss performance speedup and the force balancing involved to maximize accuracy, maintain long-time stability, and accelerate computational times. Compared to prior MTS efforts in the context of the AMBER program, advances are possible by optimizing PME parameters for MTS applications and by using the position-Verlet, rather than velocity-Verlet, scheme for the inner loop. Moreover, ideas from the Langevin/MTS algorithm LN are applied to Newtonian formulations here. The algorithm's performance is optimized and tested on water, solvated DNA, and solvated protein systems. We find CPU speedup ratios of over 3 for Newtonian formulations when compared to a 1 fs single-step Verlet algorithm using outer time steps of 6 fs in a three-class splitting scheme; accurate conservation of energies is demonstrated over simulations of length several hundred ps. With modest Langevin forces, we obtain stable trajectories for outer time steps up to 12 fs and corresponding speedup ratios approaching 5. We end by suggesting that modified Ewald formulations, using tailored alternatives to the Gaussian screening functions for the Coulombic terms, may allow larger time steps and thus further speedups for both Newtonian and Langevin protocols; such developments are reported separately.

Batcho, Paul F.; Case, David A.; Schlick, Tamar

2001-09-01

147

In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (εp), stopband error (εs), transition band error (εt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647

Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

2015-01-01

148

This work aimed to inform the design of ceramic pot filters to be manufactured by the organization Pure Home Water (PHW) in Northern Ghana, and to model the flow through an innovative paraboloid-shaped ceramic pot filter. ...

Miller, Travis Reed

2010-01-01

149

Signal to noise ratio based filter optimization in triple energy window scatter correction.

Triple energy window (TEW) scatter correction estimates the contribution of scattered photons to the acquisition data by acquiring additional data through two narrow energy windows placed adjacent to the main (photopeak) energy window. The contribution is estimated by linear interpolation and then subtracted. Noise amplification is reduced by filtering both the photopeak scintigram and the scatter estimate. We have studied the filter settings of each filter using a physical phantom filled with a 201Tl-solution resulting in count densities comparable to clinical studies. The performance of order-8 Butterworth filters at different cut-off frequencies (CoFs) was compared based on signal to noise ratios (SNRs). The highest SNRs were obtained when the noisy scatter information was strongly filtered with the CoF less than or equal to 0.07 cycles/pixel (cpp). The best CoF for the filter of the photopeak image is object size dependent; smaller objects require a higher CoF. For objects with a size near the SPECT spatial resolution (approximately 15 mm) the optimal CoF is equal to 0.18 cpp. For larger objects (31.8 mm) the highest SNR was obtained with a CoF equal to 0.13 cpp. A CoF equal to 0.16 cpp is a good compromise for all objects with a diameter equal to the spatial resolution or larger. These results depend on the initial signal to noise ratio of the acquisition data and thus on the count density. PMID:10984241
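The TEW estimate itself is a trapezoidal interpolation: counts in the two narrow sub-windows, normalized by their width, are treated as samples of the scatter spectrum on either side of the photopeak and averaged across the main window width. A sketch with illustrative window widths (the study's own acquisition settings are not assumed):

```python
def tew_scatter_estimate(c_lower, c_upper, w_sub, w_main):
    """Linear-interpolation (trapezoid) scatter estimate for the photopeak
    window: average of the two sub-window count densities, scaled by the
    main window width."""
    return (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0

def tew_corrected(c_main, c_lower, c_upper, w_sub, w_main):
    """Scatter-corrected photopeak counts, clamped at zero since
    subtraction of a noisy estimate can go negative."""
    s = tew_scatter_estimate(c_lower, c_upper, w_sub, w_main)
    return max(c_main - s, 0.0)

# e.g. 1000 photopeak counts, 40 and 20 counts in 3 keV sub-windows,
# 20 keV main window: scatter estimate 200, corrected counts 800
corrected = tew_corrected(1000, 40, 20, 3.0, 20.0)
```

The noise amplification the abstract mentions comes from subtracting this estimate pixel by pixel, which is why both the photopeak image and the scatter estimate are low-pass filtered before subtraction.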

Blokland, K J; Winn, R D; Pauwels, E K

2000-08-01

150

Optimal hydrograph separation filter to evaluate transport routines of hydrological models

NASA Astrophysics Data System (ADS)

Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is an attempt to minimize the subjective influence that a user of the filter technique exerts on the results by the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process-based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances which were selected. The OHS parameters were then used to demonstrate how to develop a benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than only using traditional measures like r2 or the Nash-Sutcliffe efficiency.
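Recursive digital filters for HS commonly follow the one-parameter Lyne-Hollick form; a minimal sketch, assuming that form and the conventional parameter value 0.925 (the study's OHS instead fits such separation parameters to measured stream hydrochemistry):

```python
def baseflow_filter(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick form):
    quickflow f tracks rapid changes in streamflow q, and baseflow
    is the remainder, constrained so 0 <= baseflow <= q."""
    f = 0.0
    prev_q = q[0]
    baseflow = []
    for qi in q:
        f = alpha * f + (1 + alpha) / 2.0 * (qi - prev_q)
        f = min(max(f, 0.0), qi)   # keep the separation physically plausible
        baseflow.append(qi - f)
        prev_q = qi
    return baseflow

# A storm hydrograph: low flow, sharp peak, slow recession
q = [1.0, 1.0, 5.0, 10.0, 6.0, 3.0, 2.0, 1.5, 1.2, 1.1]
base = baseflow_filter(q)
```

The subjectivity the study targets sits in `alpha` (and in how many filter passes are applied): different choices shift the rapid/slow split, which is why fitting the parameters against hydrochemical tracers is attractive.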

Rimmer, Alon; Hartmann, Andreas

2014-05-01

151

A Pattern Search Filter Method for Nonlinear Programming without Derivatives

This paper formulates and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that either improves the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, penalty constants or Lagrange

Charles Audet; J. E. Dennis

2004-01-01

152

doesn't allow optimal placement of Nafion® and Teflon® within the catalyst layer leading to coverage of active catalyst sites by Teflon®. By means of a two-step process the formation of the catalyst ink was separated into two parts. In the first step, a...

Friedmann, Roland

2009-03-05

153

In 2000 the implementation of quality by design (QbD) was introduced by the Food and Drug Administration (FDA) and described in the ICH Q8, Q9 and Q10 guidelines. Since that time, systematic optimization strategies for the purification of biopharmaceuticals have gained an increasingly important role in industrial process development. In this investigation, the optimization strategy was carried out by adopting design of experiments (DoE) in small-scale experiments. A combination method comprising a desalting and a multimodal ion exchange step was used for the experimental runs via the chromatographic system ÄKTA™ avant. The multimodal resin Capto™ adhere was investigated as an alternative to conventional ion exchange and hydrophobic interaction resins for the intermediate purification of the potential malaria vaccine D1M1. The ligands used in multimodal chromatography interact with the target molecule in different ways. The multimodal functionality includes the binding of proteins regardless of the ionic strength of the loading material. The target protein binds at specific salt conditions and can be eluted by a step gradient decreasing the pH value and reducing the ionic strength. It is possible to achieve maximized purity and recovery of the product because degradation products and other contaminants do not bind at the specific salt concentrations at which the product still binds to the ligands. PMID:25271026

Paul, Jessica; Jensen, Sonja; Dukart, Arthur; Cornelissen, Gesine

2014-10-31

154

Optimized model of oriented-line-target detection using vertical and horizontal filters

NASA Astrophysics Data System (ADS)

A line-element target differing sufficiently in orientation from a background of line elements can be visually detected easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from (1) two classes of anisotropic filters, (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required to provide an accurate description of psychophysical data on orientation increment thresholds.
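Simulated annealing of the kind used for this parameter fitting can be sketched generically. In the minimiser below the objective, step size, and geometric cooling schedule are illustrative assumptions, not the authors' actual settings: the algorithm perturbs the parameter vector at random and occasionally accepts worse solutions so it can escape local minima.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.97,
                        iters=2000, seed=1):
    """Minimise cost(x) by random perturbation, accepting worse moves
    with Boltzmann probability exp(-increase / temperature)."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        # always accept improvements; accept deteriorations probabilistically
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t = max(t * cooling, 1e-9)  # geometric cooling, floored
    return best, fbest
```

In the model-fitting context, `cost` would measure the misfit between predicted and psychophysically measured orientation increment thresholds.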

Westland, Stephen; Foster, David H.

1995-08-01

155

NASA Astrophysics Data System (ADS)

A frequency domain implementation of the Optimal Trade-off Maximum Average Correlation Height (OT-MACH) filter has been optimized to classify target vehicles acquired from a Forward Looking Infra Red (FLIR) sensor. The clutter noise does not have a white spectrum and models employing the power spectral density of the background clutter require a predefined threshold. A method of automatically adjusting the noise model in the filter by using the input image statistical information has been introduced. Parameter surfaces for the remaining OT-MACH variables are calculated in order to determine optimal operating conditions for the view independent recognition of vehicles in highly cluttered FLIR imagery.

Alkandri, Ahmad; Gardezi, Akber; Birch, Philip; Young, Rupert; Chatwin, Chris

2011-04-01

156

This paper presents an overview of the development of a graphical software environment called Papillon DSP OptiStation for the design and constrained min-max optimization of multi-rate FIR and IIR digital filters. The optimization engine is required to handle simultaneously multiple objective functions and multiple arbitrary equality and inequality constraints. Moreover, it is required to handle not only infinite-precision optimization, but

Behrouz Nowrouzian; A. T. G. Fuller; F. Ashrafzadeh

1998-01-01

157

We consider design optimization of passively mode-locked two-section semiconductor lasers that incorporate intracavity grating spectral filters. Our goal is to develop a method for finding the optimal wavelength location for the filter in order to maximize the region of stable mode-locking as a function of drive current and reverse bias in the absorber section. In order to account for material dispersion in the two sections of the laser, we use analytic approximations for the gain and absorption as a function of carrier density and frequency. Fits to measured gain and absorption curves then provide inputs for numerical simulations based on a large signal accurate delay-differential model of the mode-locked laser. We show how a unique set of model parameters for each value of the drive current and reverse bias voltage can be selected based on the variation of the net gain along branches of steady-state solutions of the model. We demonstrate the validity of this approach by showing qualitative agreement b...

O'Callaghan, Finbarr; O'Brien, Stephen

2014-01-01

158

NASA Astrophysics Data System (ADS)

A bi-objective optimization problem with Lipschitz objective functions is considered. An algorithm is developed adapting a univariate one-step optimal algorithm to multidimensional problems. The univariate algorithm considered is a worst-case optimal algorithm for Lipschitz functions. The multidimensional algorithm is based on the branch-and-bound approach and trisection of hyper-rectangles which cover the feasible region. The univariate algorithm is used to compute the Lipschitz bounds for the Pareto front. Some numerical examples are included.
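The Lipschitz lower bounds at the heart of such branch-and-bound algorithms are easy to illustrate in one dimension. The sketch below is a simplified Piyavskii-Shubert-style bound, not the paper's multidimensional algorithm: given samples of a function and a Lipschitz constant L, it evaluates the tightest lower bound those samples permit at any query point.

```python
def lipschitz_lower_bound(xs, fs, L, x):
    """Tightest lower bound on f(x) implied by samples (xs, fs) and the
    Lipschitz condition |f(a) - f(b)| <= L * |a - b|."""
    return max(fi - L * abs(x - xi) for xi, fi in zip(xs, fs))
```

A branch-and-bound method discards a hyper-rectangle when bounds of this kind show that it cannot contain points improving the current Pareto front approximation.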

Žilinskas, Antanas; Žilinskas, Julius

2015-04-01

159

Optimal hydrograph separation filter to evaluate transport routines of hydrological models

NASA Astrophysics Data System (ADS)

Hydrograph separation (HS) using recursive digital filter approaches focuses on distinguishing between rapidly occurring discharge components, like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures that perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to benchmark the performance of process-based hydro-geochemical (HG) models. The new HG routine can be used to quantify the degree of knowledge that the stream flow time series itself contributes to the HG analysis, using the newly developed benchmark geochemistry efficiency (BGE). Results of the OHS show that the two HS fractions (“rapid” and “slow”) differ according to the HG substances which were selected. The BFImax parameter (long-term ratio of baseflow to total streamflow) ranged from 0.26 to 0.94 for SO4-2 and total suspended solids, TSS, respectively. Then, predictions of SO4-2 transport from a process-based hydrological model were benchmarked with the proposed HG routine, in order to evaluate the significance of the HG routines in the process-based model. This comparison provides a valuable quality test that would not be obvious when using only traditional measures like r2 or the NSE (Nash-Sutcliffe efficiency). The process-based model resulted in r2 = 0.65 and NSE = 0.65, while the benchmark routine results were slightly lower with r2 = 0.61 and NSE = 0.58. However, the comparison between the two models showed a clear advantage for the process-based model with BGE = 0.15.
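The NSE values quoted above can be reproduced with a few lines. The sketch below implements the standard Nash-Sutcliffe efficiency; the BGE itself is the authors' own metric, whose formula is not given in this abstract, so it is not reproduced here.

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - SSE / variance about the observed mean.

    1.0 is a perfect fit; 0.0 means the model is no better than
    simply predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var
```

A benchmark routine of the kind described above supplies the reference score against which a process-based model's NSE (here 0.65 vs 0.58) is judged.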

Rimmer, Alon; Hartmann, Andreas

2014-06-01

160

Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The cWIA ability to establish a mechanistic link between coronary haemodynamics measurements and the underlying pathophysiology has been widely demonstrated. Moreover, the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by its strong dependence on the practitioners, mainly ascribable to the sensitivity of the cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. Therefore, we propose to apply an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the newly proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results with ≤10% error for all the metrics through all the levels of noise tested. Therefore, the newly proposed method makes cWIA fully automatic and independent of the practitioners, opening the possibility of multi-centre trials. PMID:25571129

Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

2014-08-01

161

Analysis of the rate-limiting step of an anaerobic biotrickling filter removing ...

Biological treatment of waste gases is rapidly ... of gaseous pollutants in biotrickling filters involves a series of complex physico-chemical and biological ... of their relevance to the overall treatment performance remains sketchy.

Sudeep C. Popat

162

Optimization of pre- and post-filters in the presence of near and far-end crosstalk

Full-duplex data communications are considered over a linear, time-invariant, multi-input/multi-output channel. For both the continuous- and discrete-time cases, optimal multi-input/multi-output transmitter and receiver filters are derived using the minimum mean-square error (MSE) criterion, with a power constraint on the transmitted signal, in the presence of both near- and far-end crosstalk. The discrete-time problem is solved for two different filter models:

Pedro Crespo; Michael L. Honig; Kenneth Steiglitz

1989-01-01

163

Optimization of a One-Step Heat-Inducible In Vivo Mini DNA Vector Production System

While safer than their viral counterparts, conventional circular covalently closed (CCC) plasmid DNA vectors offer a limited safety profile. They often result in the transfer of unwanted prokaryotic sequences, antibiotic resistance genes, and bacterial origins of replication that may lead to unwanted immunostimulatory responses. Furthermore, such vectors may impart the potential for chromosomal integration, thus potentiating oncogenesis. Linear covalently closed (LCC), bacterial sequence free DNA vectors have shown promising clinical improvements in vitro and in vivo. However, the generation of such minivectors has been limited by in vitro enzymatic reactions hindering their downstream application in clinical trials. We previously characterized an in vivo temperature-inducible expression system, governed by the phage λ pL promoter and regulated by the thermolabile λ CI[Ts]857 repressor, to produce recombinant protelomerase enzymes in E. coli. In this expression system, induction of recombinant protelomerase was achieved by increasing culture temperature above the 37°C threshold temperature. Overexpression of protelomerase led to enzymatic reactions acting on genetically engineered multi-target sites called “Super Sequences” that serve to convert conventional CCC plasmid DNA into LCC DNA minivectors. Temperature up-shift, however, can result in intracellular stress responses and may alter plasmid replication rates, both of which may be detrimental to LCC minivector production. We sought to optimize our one-step in vivo DNA minivector production system under various induction schedules in combination with genetic modifications influencing plasmid replication, processing rates, and cellular heat stress responses. We assessed different culture growth techniques, growth media compositions, heat induction scheduling and temperature, induction duration, post-induction temperature, and E. coli genetic background to improve the productivity and scalability of our system, achieving an overall LCC DNA minivector production efficiency of ~90%. We optimized a robust technology conferring rapid, scalable, one-step in vivo production of LCC DNA minivectors with potential application to gene transfer-mediated therapeutics. PMID:24586704

Wettig, Shawn; Slavcev, Roderick A.

2014-01-01

164

Optimization of a one-step heat-inducible in vivo mini DNA vector production system.

While safer than their viral counterparts, conventional circular covalently closed (CCC) plasmid DNA vectors offer a limited safety profile. They often result in the transfer of unwanted prokaryotic sequences, antibiotic resistance genes, and bacterial origins of replication that may lead to unwanted immunostimulatory responses. Furthermore, such vectors may impart the potential for chromosomal integration, thus potentiating oncogenesis. Linear covalently closed (LCC), bacterial sequence free DNA vectors have shown promising clinical improvements in vitro and in vivo. However, the generation of such minivectors has been limited by in vitro enzymatic reactions hindering their downstream application in clinical trials. We previously characterized an in vivo temperature-inducible expression system, governed by the phage λ pL promoter and regulated by the thermolabile λ CI[Ts]857 repressor, to produce recombinant protelomerase enzymes in E. coli. In this expression system, induction of recombinant protelomerase was achieved by increasing culture temperature above the 37°C threshold temperature. Overexpression of protelomerase led to enzymatic reactions acting on genetically engineered multi-target sites called "Super Sequences" that serve to convert conventional CCC plasmid DNA into LCC DNA minivectors. Temperature up-shift, however, can result in intracellular stress responses and may alter plasmid replication rates, both of which may be detrimental to LCC minivector production. We sought to optimize our one-step in vivo DNA minivector production system under various induction schedules in combination with genetic modifications influencing plasmid replication, processing rates, and cellular heat stress responses. We assessed different culture growth techniques, growth media compositions, heat induction scheduling and temperature, induction duration, post-induction temperature, and E. coli genetic background to improve the productivity and scalability of our system, achieving an overall LCC DNA minivector production efficiency of ~90%. We optimized a robust technology conferring rapid, scalable, one-step in vivo production of LCC DNA minivectors with potential application to gene transfer-mediated therapeutics. PMID:24586704

Nafissi, Nafiseh; Sum, Chi Hong; Wettig, Shawn; Slavcev, Roderick A

2014-01-01

165

The ARTcrystal process is a new approach for the production of drug nanocrystals. It is a combination of a special pre-treatment step with subsequent high pressure homogenization (HPH) at low pressures. In the pre-treatment step the particle size is already reduced to the nanometer range by use of the newly developed ART MICCRA rotor-stator system. In this study, the running parameters for the ART MICCRA system are systematically studied, i.e. temperature, stirring speed, flow rate, foaming effects, size of starting material, and valve position from 0° to 45°. The antioxidant rutin was used as model drug. Applying optimized parameters, the pre-milling already yielded a nanosuspension with a photon correlation spectroscopy (PCS) diameter of about 650 nm. On lab scale, production time was 5 min for 1 L of nanosuspension (5% rutin content), i.e. the capacity of the setup is also suitable for medium industrial scale production. Compared to other nanocrystal production methods (bead milling, HPH, etc.), similar sizes are achievable, but the process is more cost-effective, faster, and easily scalable, thus being an interesting novel process for nanocrystal production on lab and industrial scale. PMID:24556175

Scholz, Patrik; Arntjen, Anja; Müller, Rainer H; Keck, Cornelia M

2014-04-25

166

This paper presents a symmetric-type microstrip triple-band bandstop filter incorporating a tri-section meandered-line stepped impedance resonator (SIR). The length of each section of the meandered line is 0.16, 0.15, and 0.83 times the guided wavelength (λg), so that the filter features three stop bands at 2.59 GHz, 6.88 GHz, and 10.67 GHz, respectively. Two symmetric SIRs are employed with a microstrip transmission line to obtain wide bandwidths of 1.12, 1.34, and 0.89 GHz at the corresponding stop bands. Furthermore, an equivalent circuit model of the proposed filter is developed, and the model matches the electromagnetic simulations well. The return losses of the fabricated filter are measured to be -29.90 dB, -28.29 dB, and -26.66 dB while the insertion losses are 0.40 dB, 0.90 dB, and 1.10 dB at the respective stop bands. A drastic reduction in the size of the filter was achieved by using a simplified architecture based on a meandered-line SIR. PMID:24319367

Dhakal, Rajendra; Kim, Nam-Young

2013-01-01

167

This paper presents a symmetric-type microstrip triple-band bandstop filter incorporating a tri-section meandered-line stepped impedance resonator (SIR). The length of each section of the meandered line is 0.16, 0.15, and 0.83 times the guided wavelength (λg), so that the filter features three stop bands at 2.59 GHz, 6.88 GHz, and 10.67 GHz, respectively. Two symmetric SIRs are employed with a microstrip transmission line to obtain wide bandwidths of 1.12, 1.34, and 0.89 GHz at the corresponding stop bands. Furthermore, an equivalent circuit model of the proposed filter is developed, and the model matches the electromagnetic simulations well. The return losses of the fabricated filter are measured to be -29.90 dB, -28.29 dB, and -26.66 dB while the insertion losses are 0.40 dB, 0.90 dB, and 1.10 dB at the respective stop bands. A drastic reduction in the size of the filter was achieved by using a simplified architecture based on a meandered-line SIR. PMID:24319367

Kim, Nam-Young

2013-01-01

168

NASA Astrophysics Data System (ADS)

A general sequential Monte Carlo method, particularly a general particle filter, has recently attracted much attention in prognostics because it can estimate on-line the posterior probability density functions of the states used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform an optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extract bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics, and be applied more widely to any other situation in which optimal wavelet filtering is required.
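The particle representation described above can be sketched with a single sequential-importance-resampling step. Here the particles, the likelihood, and the use of stratified resampling are illustrative assumptions; in the paper's setting each particle would be a (center frequency, bandwidth) pair for the wavelet filter, and the likelihood a fault-feature metric computed from the filtered signal.

```python
import random

def sir_step(particles, weights, likelihood, rng):
    """One sequential importance resampling (SIR) step: reweight the
    particles by the likelihood, then stratified-resample back to
    equal weights."""
    w = [wi * likelihood(p) for p, wi in zip(particles, weights)]
    total = sum(w)
    w = [wi / total for wi in w]
    cum, c = [], 0.0
    for wi in w:               # cumulative weights for resampling
        c += wi
        cum.append(c)
    n = len(particles)
    resampled, j = [], 0
    for i in range(n):
        u = (i + rng.random()) / n  # one draw per stratum [i/n, (i+1)/n)
        while j < n - 1 and cum[j] < u:
            j += 1
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n
```

After a few such steps the particle cloud concentrates where the posterior density of the wavelet parameters is highest, from which the approximately optimal center frequency and bandwidth can be read off.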

Wang, Dong; Sun, Shilong; Tse, Peter W.

2015-02-01

169

NASA Astrophysics Data System (ADS)

An improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving the low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate the unsteady flow-fields. For the unsteady code validation, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions.
To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as an optimizer and the penalty method was introduced. Each individual's objective function was computed simultaneously by using a 32 processor distributed memory computer. One optimization took about four days.

Lee, Eun Seok

2000-10-01

170

NSDL National Science Digital Library

Students learn how CCD cameras use color filters to create astronomical images in this Moveable Museum unit. The four-page PDF guide includes suggested general background readings for educators, activity notes, and step-by-step directions. Students look at black-and-white photos to understand gray scale and construct simple red and green cellophane filters and observe magazine images through them.

171

Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

NASA Technical Reports Server (NTRS)

An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

Simon, Donald L.; Garg, Sanjay

2011-01-01

172

A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis. PMID:23834855

Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

2013-09-01

173

The availability of microcomputer-based portable devices facilitates high-volume multichannel biosignal acquisition and the analysis of their instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers or artificially synthesised series were passed through 100, 80, 60, 40, and 20 Hz Bessel and Butterworth filters of order 2, 4, 6, and 8, digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical model of the impulse and step responses of the filters. The investigated filters have as diverse a signal delay as 2-46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. Its magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. As a conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low-pass filtering are recommended for biosignal acquisition intended for inter-signal phase shift analysis. PMID:25514627
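The dependence of step-response delay on corner frequency is easy to demonstrate in the simplest case. The sketch below uses an assumed first-order low-pass (not the study's 2nd-8th-order Bessel/Butterworth designs), digitized by the bilinear transform as in the study, and measures the time for the unit-step response to reach 50%.

```python
import math

def first_order_lowpass(fc, fs):
    """First-order analog low-pass H(s) = wc/(s + wc), digitized with the
    bilinear transform; returns feedforward (b) and feedback (a) coefficients."""
    k = math.tan(math.pi * fc / fs)
    b0 = k / (1.0 + k)
    return [b0, b0], [1.0, (k - 1.0) / (1.0 + k)]

def step_delay_ms(fc, fs, n_max=10000):
    """Milliseconds for the filter's unit-step response to first reach 0.5."""
    b, a = first_order_lowpass(fc, fs)
    x_prev = y_prev = 0.0
    for i in range(n_max):
        y = b[0] * 1.0 + b[1] * x_prev - a[1] * y_prev  # direct-form difference equation
        x_prev, y_prev = 1.0, y
        if y >= 0.5:
            return 1000.0 * i / fs
    return float("inf")
```

Lower corner frequencies give longer delays, which is one reason the study recommends identical (or very similar) anti-aliasing filters across all channels when phase relations between signals matter.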

Keresnyei, Róbert; Megyeri, Péter; Zidarics, Zoltán; Hejjel, László

2015-01-01

174

Process schemes for single-step syngas-to-dimethyl ether (DME) were developed in two stages: (1) the performance of the syngas-to-DME reactor was optimized with respect to the feed gas composition and (2) the optimal reactor feed gas system was integrated with synthesis gas generators. It was shown that the reactor performance is very sensitive to the H2:CO ratio in the feed gas.

X. D. Peng; A. W. Wang; B. A. Toseland; P. J. A. Tijm

1999-01-01

175

Continuous-Time Filter Design Optimized for Reduced Die Area

Charles Myers; Brandon ...

IEEE Transactions on Circuits and Systems II: Express Briefs, Vol. 51, No. 3, March 2004. A method for distributing capacitor and resistor area to optimally reduce die area in a given continuous-time filter design ...

Moon, Un-Ku

176

NASA Astrophysics Data System (ADS)

Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.

Hendrix, Charles D.; Vijaya Kumar, B. V. K.

1994-06-01

177

It is well known that canonical signed digit (CSD) multiplier coefficients reduce the complexity and power consumption requirements in the hardware implementation of FIR digital filters. Optimization of the constituent CSD multiplier coefficients using genetic algorithms can further reduce this complexity by constantly evolving from generation to generation based on the minimization of an objective fitness function modeled on the
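Canonical signed-digit recoding itself is simple to sketch. The function below is an illustrative textbook recoding, independent of the paper's genetic-algorithm optimization: it converts a non-negative integer coefficient into CSD digits in {-1, 0, +1} with no two adjacent non-zero digits, which is what allows a constant multiplier to be built from few adders/subtractors.

```python
def to_csd(value):
    """Canonical signed-digit representation of a non-negative integer,
    least-significant digit first; each digit is -1, 0, or +1."""
    digits = []
    x = value
    while x:
        if x & 1:
            d = 2 - (x & 3)  # +1 when x % 4 == 1, -1 when x % 4 == 3
            x -= d           # x is now divisible by 4, forcing the next digit to 0
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits
```

A genetic algorithm over such representations would mutate and recombine digit patterns while an objective function counts the non-zero digits (adder cost) and evaluates the resulting frequency response.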

Sai Mohan Kilambi; Behrouz Nowrouzian

2006-01-01

178

Wiring design based on Global Energy Requirement criteria: a first step towards optimization of DC distribution voltage

International Conference on Renewable Energy and Eco-Design in Electrical Engineering. Institut National de l'Energie Solaire (CEA-INES), 50 avenue du Lac Léman, 73377 Le Bourget-du-Lac.

Paris-Sud XI, UniversitÃ© de

179

NASA Astrophysics Data System (ADS)

A hybrid data assimilation scheme designed for operational assimilation of satellite sea surface temperatures (SST) into an ocean model has been developed and validated against in-situ observations. The scheme consists of an optimal interpolation (OI) part and a greatly simplified Kalman filter (KF) part. The OI is performed only in the longitudinal and latitudinal directions. A climatological field is used as a background field for the interpolation. It is constructed by fitting daily averages of satellite SST to the annual mean, annual, and semiannual harmonics in a 20 km by 20 km grid. The background error covariance is approximated by a spatially varying two-dimensional exponential covariance model. The parameters of the covariance model are fitted to the deviations of the satellite data from the background field using data from a full year. The simplified KF uses ocean model forecasts as a background field. It is based on the assumption that it is possible to neglect horizontal SST covariances in the filter and that the typical time scale for vertical mixing in the mixed layer is much shorter than the average time between observations. We therefore assume that the error variance in a column of water is evenly spread out throughout the mixed layer. The result of these simplifications is a computationally very efficient KF. A one year validation of the scheme is performed for year 2001 using an operational eddy resolving ocean model covering the North Sea and the Baltic Sea. It is found that assimilation of sea surface temperature data reduces the model root mean square error from 1.13 °C to 0.70 °C. The hybrid scheme is found to reduce the root mean square error slightly more than the simplified KF without OI to 0.66 °C. The inclusion of spatially varying satellite error variances does not improve the performance of the scheme significantly.
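The OI part of such a scheme reduces, for a single observation, to a covariance-weighted spreading of the innovation. The sketch below assumes the exponential background covariance model mentioned above; the length scale and variances are placeholder values, not those fitted in the study.

```python
import math

def oi_increment(x, x_obs, innovation, length=100.0, bg_var=1.0, obs_var=0.25):
    """Single-observation optimal-interpolation analysis increment at grid
    point x: background covariance with the observation point, divided by
    the total (background + observation) variance, times the innovation."""
    cov = bg_var * math.exp(-abs(x - x_obs) / length)
    return cov / (bg_var + obs_var) * innovation
```

With several observations the scalar division becomes the inverse of the (observation-observation background covariance + observation error covariance) matrix, but the structure of the analysis is the same.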

Larsen, J.; Høyer, J. L.; She, J.

2007-03-01

180

Gaussian Filters for Nonlinear Filtering Problems

In this paper we develop and analyze real-time and accurate filters for nonlinear filtering problems based on the Gaussian distributions. We present the systematic formulation of Gaussian filters and develop efficient and accurate numerical integration of the optimal filter. We also discuss the mixed Gaussian filters in which the conditional probability density is approximated by the sum of Gaussian distributions.
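
The core numerical step in such Gaussian filters is computing moments of a nonlinear function under a Gaussian density. A sketch using Gauss-Hermite quadrature (one possible accurate integration rule; the paper does not prescribe this exact routine):

```python
import numpy as np

def gh_moments(f, mean, var, n=10):
    """Mean and variance of f(x) for x ~ N(mean, var) via Gauss-Hermite
    quadrature (probabilists' convention)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    x = mean + np.sqrt(var) * nodes     # scale nodes to the Gaussian
    w = weights / weights.sum()         # normalize to a probability measure
    fx = f(x)
    m = np.sum(w * fx)                  # approximate E[f(x)]
    v = np.sum(w * (fx - m) ** 2)       # approximate Var[f(x)]
    return m, v

# Moments of sin(x) for x ~ N(0, 0.1); exact variance is (1 - e^{-0.2})/2
m, v = gh_moments(np.sin, 0.0, 0.1)
```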

Kazufumi Ito; Kaiqi Xiong

1999-01-01

181

In this paper, two novel microstrip quarter-wave SIR structures have been proposed. Inserting signal and ground strips in the microstrip line provides another method to lower the characteristic impedance. This approach applies to the low-impedance section of the SIR to decrease the impedance ratio significantly. We have designed and fabricated four four-pole cross-coupled filters with these two structures. A pair

Cheng-Hsien Liang; Wei-Shin Chang; Chi-Yang Chang

2008-01-01

182

Analog FIR Filter Used for Range-Optimal Pulsed Radar Applications

The matched filter is one of the most critical blocks in radar applications. Depending on the measured range and relative velocity of a target, a different matched-filter bandwidth is needed to maximize the radar signal-to-noise ratio (SNR...

Su, Eric Chen

2014-08-13

183

Optimization of Partition-Based Weighted Sum Filters and Their Application to Image Denoising

Partition-based Weighted Sum (P-WS) filtering is an effective method for processing nonstationary signals, especially those with regularly occurring structures, such as images. P-WS filters were originally formulated as Hard-partition Weighted Sum (HP-WS) filters and were successfully applied to image denoising. This formulation relied on intuitive arguments to generate the filter class. Here we present a statistical analysis that justifies the

Min Shao; Kenneth E. Barner

2006-01-01

184

Optimality of the maximum average correlation height filter for detection of targets in noise

A statistical analysis is provided for the properties of the recently developed maximum average correlation height (MACH) filter (Mahalanobis et al. 1994). It is shown that the MACH filter can be interpreted as an optimum filter for the detection of targets in additive noise. A rationale is given for using a popular peak-to-sidelobe ratio metric to characterize the output of
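
A simplified frequency-domain construction of a MACH-style filter can be sketched as follows. This keeps only the average-similarity term (no added noise covariance), so it is a sketch of the idea rather than the full filter analyzed in the paper:

```python
import numpy as np

def mach_filter(training_images):
    """Simplified MACH-style filter: emphasize spectral components that are
    consistently strong across training exemplars."""
    X = np.fft.fft2(training_images, axes=(-2, -1))   # exemplar spectra
    M = X.mean(axis=0)                                # average spectrum
    S = np.mean(np.abs(X - M) ** 2, axis=0)           # average similarity measure
    return M / (S + 1e-12)                            # small epsilon avoids div by zero

rng = np.random.default_rng(0)
target = rng.random((8, 8))
train = np.stack([target + 0.01 * rng.standard_normal((8, 8)) for _ in range(5)])
H = mach_filter(train)
# Circular correlation of the filter with the target: peak at zero shift
corr = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(target)))
```

The tall, sharp correlation peak relative to the sidelobes is what the peak-to-sidelobe ratio metric mentioned above characterizes.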

Abhijit Mahalanobis; B. V. K. Vijaya Kumar

1997-01-01

185

Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters

Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for FFF beams applicable to current treatment planning systems are needed. In this work, a multisource model for the FFF beam and the optimization of the involved model parameters were designed. Methods: The model is based on a previous three-source model proposed by Yang [“A three-source model for the calculation of head scatter factors,” Med. Phys. 29, 2024–2033 (2002)]. An off-axis ratio (OAR) of photon fluence was introduced to the primary source term to generate cone-shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line-search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40×40 cm2 field size with the same optimization technique, but a new method to acquire gradient terms for OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3×3 to 40×40 cm2 field sizes at 6 and 10 MV from a TrueBeam™ STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements from a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% for the 6 and 10 MV beams, except for some low-dose regions for larger field sizes. A slight overestimation of 1%–4% was seen in the lower penumbra region near the field edge for the large field sizes. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreement between measured and calculated dose distributions.
The model is easily applicable to any other linear accelerator using FFF beams, as the required data include only the measured PDD, dose profiles, and output factors for various field sizes, which are easily acquired during the conventional beam commissioning process. PMID:21626926
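
The derivative-free line search used above to fit model parameters can be sketched with a golden-section search; the quadratic objective here is a stand-in, not the paper's head-scatter fit:

```python
import numpy as np

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal 1-D objective f on [a, b] by golden-section search,
    a derivative-free line search."""
    g = (np.sqrt(5.0) - 1.0) / 2.0          # inverse golden ratio
    c, d = b - g * (b - a), a + g * (b - a)  # interior probe points
    while abs(b - a) > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

# Stand-in objective with known minimizer at x = 2.5
x_min = golden_section(lambda x: (x - 2.5) ** 2 + 1.0, 0.0, 10.0)
```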

Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing, Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk

2011-01-01

186

Optimization by decomposition: A step from hierarchic to non-hierarchic systems

NASA Technical Reports Server (NTRS)

A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

Sobieszczanski-Sobieski, Jaroslaw

1988-01-01

187

FULL NESTEROV-TODD STEP INTERIOR-POINT METHODS FOR SYMMETRIC OPTIMIZATION

Jordan algebras were shown more than a decade ago to be an indispensable tool in the unified study of interior-point methods. Using them, we generalize the infeasible interior-point method for linear optimization of Roos (SIAM J. Optim., 16(4):1110-1136 (electronic), 2006) to symmetric optimization. This unifies the analysis for linear, second-order cone, and semidefinite optimization.

G. GU; M. ZANGIABADI; C. ROOS

188

Industrial-scale filter dryers, equipped with one or more microwave input ports, have been modelled with the aim of detecting existing criticalities, proposing possible solutions, and optimizing the overall system efficiency and treatment homogeneity. Three different loading conditions have been simulated, namely the empty applicator and the applicator partially loaded by a high-loss and by a low-loss load whose dielectric properties correspond to those measured on real products. Modeling results allowed for the implementation of improvements to the original design, such as the insertion of a waveguide transition and a properly designed pressure window, modification of the microwave inlet's position and orientation, alteration of the nozzles' geometry and distribution, and changing of the cleaning metallic torus dimensions and position. Experimental testing on representative loads, as well as in production sites, confirmed the validity of the implemented improvements, thus showing how numerical simulation can assist the designer in removing critical features and improving equipment performance when moving from conventional heating to hybrid microwave-assisted processing. PMID:18350999

Leonelli, Cristina; Veronesi, Paolo; Grisoni, Fabio

2007-01-01

189

An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

NASA Technical Reports Server (NTRS)

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
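
The SVD step at the heart of the technique, reducing many health-parameter directions to a low-dimensional tuning vector that best captures the influence matrix in a least-squares sense, can be sketched as follows (the matrix sizes and names are illustrative, not the engine model's):

```python
import numpy as np

def tuning_vector(G, k):
    """Rank-k reduction of a hypothetical influence matrix G (outputs x
    health parameters): the leading right singular vectors span the
    tuning subspace, small enough for a Kalman filter to estimate."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    V_k = Vt[:k].T                      # basis for the k tuning parameters
    G_k = (U[:, :k] * s[:k]) @ Vt[:k]   # best rank-k approximation of G
    return V_k, G_k

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 10))        # 4 outputs, 10 health parameters
V_k, G_k = tuning_vector(G, 2)
err_full = np.linalg.norm(G - G_k)      # Eckart-Young optimal residual
```

By the Eckart-Young theorem the residual equals the root-sum-square of the discarded singular values, which is the "as closely as possible in a least squares sense" property the abstract describes.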

Litt, Jonathan S.

2005-01-01

190

An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

NASA Technical Reports Server (NTRS)

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

Litt, Jonathan S.

2007-01-01

192

STEP 8. The wet well stores filtered water before it is pumped into the air-stripping

- spects pump seals. STEP 1. Of the five in-service drinking-water wells, wells 4, 6 and 7 provide high of the importance and need to protect drinking-water sources, the report's purpose is to inform drinking-water by the staff of BNL's Water Treatment Facility (WTF) of the Energy & Utilities Division. Producing BNL

Ohta, Shigemi

193

We have developed a hydrodynamically levitated centrifugal blood pump with a semi-open impeller for mechanical circulatory assist. The impeller is levitated by original hydrodynamic bearings without any complicated control or sensors. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of this study is to investigate the geometric configuration of the hydrodynamic step bearing that minimizes hemolysis by expanding the bearing gap. First, we performed a numerical analysis of the step bearing based on the Reynolds equation, and measured the actual hydrodynamic force of the step bearing. Second, a bearing gap measurement test and a hemolysis test were performed on blood pumps whose step lengths were 0%, 33%, and 67% of the vane length, respectively. In the numerical analysis, the hydrodynamic force was largest when the step length was around 70% of the vane length. In the evaluation tests, the blood pump with the 67% step obtained the largest bearing gap and improved hemolysis compared to those with the 0% and 33% steps. We confirmed that the numerical analysis of the step bearing worked effectively, and that the 67% step was a suitable configuration for minimizing hemolysis, because it realized the largest bearing gap. PMID:22254562
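
The Reynolds-equation analysis of a step bearing can be illustrated with the classical 1-D Rayleigh step: continuity of flow across the step gives the step pressure, and sweeping the step fraction locates the load maximum. Geometry and viscosity values below are illustrative stand-ins, not the pump's; with a 2:1 film ratio the optimum lands near the ~70% step length the abstract reports:

```python
import numpy as np

def step_bearing_load(f, B=1.0, h1=2e-5, h2=1e-5, mu=3.5e-3, U=1.0):
    """Load per unit width of a 1-D Rayleigh step bearing.

    f: step (deep-film) fraction of the total length B; h1/h2 deep and
    shallow film thicknesses; mu viscosity; U sliding speed."""
    B1, B2 = f * B, (1.0 - f) * B
    # Flow continuity across the step -> peak pressure at the step
    p_step = 6.0 * mu * U * (h1 - h2) / (h1**3 / B1 + h2**3 / B2)
    return 0.5 * p_step * B             # triangular pressure profile

fracs = np.linspace(0.05, 0.95, 181)
loads = np.array([step_bearing_load(f) for f in fracs])
best = fracs[np.argmax(loads)]          # load-maximizing step fraction
```

For h1/h2 = 2 the analytic optimum is f = 1/(1 + (h2/h1)^{3/2}) which is about 0.74, consistent with the numerical finding of around 70%.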

Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

2011-01-01

194

This paper is concerned with the development of a new modified branch-and-bound technique for the constrained min-max optimization of multi-rate digital filter transfer functions over the canonical signed-digit (CSD) coefficient space. The proposed optimization technique employs a nonsmooth minimization approach for the underlying continuous optimization problems. The resulting optimization technique can handle the simultaneous optimization of the magnitude and group-delay

Behrouz Nowrouzian; A. T. G. Fuller; F. Ashrafzdeh

1997-01-01

195

Optimization by decomposition: A step from hierarchic to non-hierarchic systems

NASA Technical Reports Server (NTRS)

A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

Sobieszczanski-Sobieski, Jaroslaw

1989-01-01

196

Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

NASA Astrophysics Data System (ADS)

In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used to implement digital pulse compression (DPC) to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, applied either to a single-stage mismatched filter or to a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGA; this often becomes a design challenge for system-on-chip (SoC) requirements. The number of multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using cluster centroids as tap weights greatly reduces the logic used in the FPGA by reducing the number of distinct weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs, producing different clusterings; sometimes a shorter filter with fewer multipliers may even provide a better PSR.
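
The tap-weight clustering idea can be sketched with a small 1-D k-means: after clustering, the filter needs only k distinct multipliers, one per centroid. The Barker-code example and noise level below are illustrative, not from the paper:

```python
import numpy as np

def cluster_taps(weights, k, iters=50, seed=0):
    """Replace FIR tap weights by k cluster centroids (1-D k-means), so the
    hardware needs only k distinct multipliers."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(weights, size=k, replace=False)  # initial centroids
    for _ in range(iters):
        # Assign each tap to its nearest centroid, then update centroids
        labels = np.argmin(np.abs(weights[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = weights[labels == j].mean()
    return centroids[labels]            # quantized tap weights

# Illustrative: a perturbed 13-bit Barker-code matched filter, 4 multipliers
barker = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
taps = barker[::-1] + 0.05 * np.random.default_rng(2).standard_normal(13)
q = cluster_taps(taps, 4)
```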

Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

197

A vectorial implementation of dynamic optimal power flow (DOPF) including wind farms is presented. The vectorization of DOPF was established by arranging the control variables and state variables according to variable type and time interval. The asynchronous generators in the wind farms were modeled in a Q-V formulation. A step-controlled primal-dual interior point framework (SCIPM) with upper and lower inequality constraints

Zhijun Qin; Yude Yang; Jiekang Wu

2008-01-01

198

Dual-energy contrast agent-enhanced mammography is a technique for demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues. PMID:18072488
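
The clutter-cancelling principle behind dual-energy subtraction can be sketched with a toy two-material model: a weighted log subtraction of the two energy images cancels the tissue term and leaves only the iodine signal. The attenuation coefficients and thicknesses below are illustrative, not the paper's measured spectra:

```python
import numpy as np

def dual_energy_subtract(low, high, w):
    """Weighted log subtraction of low/high-energy images; w is chosen so the
    background (tissue) contrast cancels."""
    return np.log(high) - w * np.log(low)

# Toy 2-material model: transmission = exp(-(mu_t * t_tissue + mu_i * t_iodine))
mu_t_lo, mu_t_hi = 0.8, 0.4    # tissue attenuation at low/high energy (illustrative)
mu_i_lo, mu_i_hi = 2.0, 1.6    # iodine attenuation at low/high energy (illustrative)
t_tissue = np.array([[1.0, 2.0], [1.0, 2.0]])   # tissue thickness varies by column
t_iodine = np.array([[0.0, 0.0], [0.5, 0.5]])   # iodine present only in bottom row
low = np.exp(-(mu_t_lo * t_tissue + mu_i_lo * t_iodine))
high = np.exp(-(mu_t_hi * t_tissue + mu_i_hi * t_iodine))
w = mu_t_hi / mu_t_lo          # cancels the tissue term exactly in this model
de = dual_energy_subtract(low, high, w)
# Top row (no iodine) is uniform zero despite varying tissue thickness;
# bottom row carries the pure iodine signal.
```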

Saito, Masatoshi

2007-11-01

199

The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. The hemodynamics of steady-state flow were examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

Wang, S L; Singer, M A

2009-07-13

200

Influence of CO2 observations on the optimized CO2 flux in an ensemble Kalman filter

NASA Astrophysics Data System (ADS)

In this study, the effect of CO2 observations on an analysis of surface CO2 flux was calculated using an influence matrix in the CarbonTracker, which is an inverse modeling system for estimating surface CO2 flux based on an ensemble Kalman filter. The influence matrix represents a sensitivity of the analysis to observations. The experimental period was from January 2000 to December 2009. The diagonal element of the influence matrix (i.e., analysis sensitivity) is globally 4.8% on average, which implies that the analysis extracts 4.8% of the information from the observations and 95.2% from the background each assimilation cycle. Because the surface CO2 flux in each week is optimized by 5 weeks of observations, the cumulative impact over 5 weeks is 19.1%, much greater than 4.8%. The analysis sensitivity is inversely proportional to the number of observations used in the assimilation, which is distinctly apparent in continuous observation categories with a sufficient number of observations. The time series of the globally averaged analysis sensitivities shows seasonal variations, with greater sensitivities in summer and lower sensitivities in winter, which is attributed to the surface CO2 flux uncertainty. The time-averaged analysis sensitivities in the Northern Hemisphere are greater than those in the tropics and the Southern Hemisphere. The trace of the influence matrix (i.e., information content) is a measure of the total information extracted from the observations. The information content indicates an imbalance between the observation coverage in North America and that in other regions. Approximately half of the total observational information is provided by continuous observations, mainly from North America, which indicates that continuous observations are the most informative and that comprehensive coverage of additional observations in other regions is necessary to estimate the surface CO2 flux in these areas as accurately as in North America.
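
The analysis sensitivity used in this study is the diagonal of the influence matrix S = HK, where K is the Kalman gain; each diagonal entry is the fraction of information the analysis draws from that observation rather than from the background. A minimal linear-Gaussian sketch (matrix sizes are illustrative, not CarbonTracker's):

```python
import numpy as np

def analysis_sensitivity(H, P, R):
    """Diagonal of the influence matrix S = H K for a linear Kalman analysis.

    H: observation operator; P: background error covariance;
    R: observation error covariance. diag(S) in (0, 1): the share of
    information each observation contributes to its own analysed value."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    S = H @ K                                      # influence matrix
    return np.diag(S)

# 3 observations of a 5-variable state (illustrative sizes)
rng = np.random.default_rng(3)
H = rng.standard_normal((3, 5))
P = np.eye(5)          # background error covariance
R = np.eye(3)          # observation error covariance
s = analysis_sensitivity(H, P, R)
```

The complement, 1 - diag(S), is the share drawn from the background, matching the 4.8% / 95.2% split quoted above; the trace of S is the total information content.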

Kim, J.; Kim, H. M.; Cho, C.-H.

2014-12-01

201

NASA Astrophysics Data System (ADS)

Quasicrystalline solids were first observed in nature in the 1980s. Their lattice geometry is devoid of translational symmetry; however, it possesses long-range order as well as certain orders of rotational symmetry forbidden by translational symmetry. Mathematically, such lattices are related to aperiodic tilings. Since their discovery there has been great interest in utilizing aperiodic geometries for a wide variety of electromagnetic (EM) and optical applications. The first thrust of this dissertation addresses applications of quasicrystalline geometries for wideband antenna arrays and plasmonic nano-spherical arrays. The first application considered is the design of suitable antenna arrays for micro-UAV (unmanned aerial vehicle) swarms, based on perturbation of certain types of aperiodic tilings. For safety reasons, and to avoid possible collisions between micro-UAVs, it is desirable to keep the minimum separation distance between the elements at several wavelengths. As a result, typical periodic planar arrays are not suitable, since for periodic arrays increasing the minimum element spacing beyond one wavelength will lead to the appearance of grating lobes in the radiation pattern. It will be shown that, using this method, antenna arrays with very wide bandwidths and low sidelobe levels can be designed. It will also be shown that, in conjunction with a phase compensation method, these arrays show a large degree of robustness to positional noise. Next, aperiodic aggregates of gold nano-spheres are studied. Since traditional unit cell approaches cannot be used for aperiodic geometries, we start by developing new analytical tools for aperiodic arrays. A modified version of generalized Mie theory (GMT) is developed which defines scattering coefficients for aperiodic spherical arrays. Next, two specific properties of quasicrystalline gold nano-spherical arrays are considered.
The optical response of these arrays can be explained in terms of the grating response of the array (photonic resonance) and the plasmonic response of the spheres (plasmonic resonance). In particular, the couplings between the photonic and plasmonic modes are studied. In periodic arrays this coupling leads to the formation of a so-called photonic-plasmonic hybrid mode. The formation of hybrid modes is studied in quasicrystalline arrays. Quasicrystalline structures in essence possess several periodicities, which in some cases can lead to the formation of multiple hybrid modes with wider bandwidths. It is also demonstrated that the performance of these arrays can be further enhanced by employing a perturbation method. The second property considered is local field enhancement in quasicrystalline arrays of gold nano-spheres. It will be shown that, despite a considerably smaller filling factor, quasicrystalline arrays generate larger local field enhancements, which can be further increased by optimally placing perturbing spheres within the prototiles that comprise the aperiodic arrays. The second thrust of research in this dissertation focuses on designing all-dielectric filters and metamaterial coatings for the optical range. At higher frequencies metals tend to have high loss and thus are not suitable for many applications, so dielectrics are used at optical frequencies. In particular we focus on designing two types of structures. First, a near-perfect optical mirror is designed. The design is based on optimizing a subwavelength periodic dielectric grating to obtain appropriate effective parameters that satisfy the desired perfect-mirror condition. Second, a broadband anti-reflective all-dielectric grating with a wide field of view is designed.
The second design is based on a new computationally efficient genetic algorithm (GA) optimization method, which shapes the sidewalls of the grating based on optimizing the roots of polynomial functions.

Namin, Frank Farhad A.

202

P7.1 STUDY ON THE OPTIMAL SCANNING STRATEGIES OF PHASED-ARRAY RADAR THROUGH ENSEMBLE KALMAN FILTER
Engineering, University of Oklahoma
1. Introduction
The phased-array radar (PAR) of the National Weather

Xue, Ming

203

Optimization models in metabolic engineering and systems biology typically focus on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods has become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles.
We demonstrate the usefulness of our approach by means of a case study that optimizes the ethanol production in the fermentation of Saccharomyces cerevisiae. PMID:23028457
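
The epsilon-constraint idea above, optimizing one objective while the others are held above moving thresholds, can be sketched on a toy finite candidate set; in the paper's setting the inner problem would be a nonlinear GMA optimization rather than this brute-force search:

```python
import numpy as np

def epsilon_constraint(f1, f2, candidates, epsilons):
    """Pareto points for (max f1, max f2): maximize f1 subject to f2 >= eps,
    sweeping eps to trace the trade-off front."""
    pareto = []
    for eps in epsilons:
        feas = [x for x in candidates if f2(x) >= eps]  # epsilon constraint on f2
        if feas:
            best = max(feas, key=f1)                    # optimize f1 over survivors
            pareto.append((f1(best), f2(best)))
    return pareto

# Toy trade-off: candidates on a quarter circle, objectives are the coordinates
theta = np.linspace(0, np.pi / 2, 91)
cands = list(zip(np.cos(theta), np.sin(theta)))
f1 = lambda p: p[0]
f2 = lambda p: p[1]
front = epsilon_constraint(f1, f2, cands, epsilons=[0.0, 0.25, 0.5, 0.75, 0.95])
```

Each sweep value of eps yields one Pareto-optimal alternative, which is exactly the set a Pareto filter would then narrow down.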

Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano

2012-01-01

204

Linear variable filter optimization for emergency response chemical detection and discrimination

Linear variable filter design and fabrication for LWIR is now commercially available for use in the development of remote sensing systems. The linear variable filter is attached directly to the cold shield of the focal plane array. The resulting compact spectrometer assemblies are completely contained in the Dewar system. This approach eliminates many of the wavelength calibration problems associated with

Sylvia S. Shen; Paul E. Lewis

2010-01-01

205

Fast Tracking of Power Quality Disturbance Signals Using an Optimized Unscented Filter

This paper presents a hybrid approach for tracking the amplitude, phase, frequency, and harmonic content of power quality disturbance signals occurring in power networks using an unscented Kalman filter (UKF) and swarm intelligence. The UKF is a novel extension of the well-known extended Kalman filter (EKF) using an unscented transformation to overcome the difficulties of linearization and derivative calculations of
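
The unscented transformation that lets the UKF avoid the EKF's Jacobians can be sketched generically: deterministic sigma points are propagated through the nonlinearity and re-combined into a mean and covariance. This is a textbook-style sketch, not the paper's tuned power-quality filter:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through nonlinear f using 2n+1
    sigma points; no derivatives of f are required."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)          # sigma-point spread
    sigma = np.vstack([mean, mean + L.T, mean - L.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))    # symmetric weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])                # propagate each point
    m = w @ y                                          # transformed mean
    P = (w[:, None] * (y - m)).T @ (y - m)             # transformed covariance
    return m, P

# For f(x) = x^2 with x ~ N(0, 1), the transform recovers E[x^2] = 1 exactly
m, P = unscented_transform(lambda x: x**2, np.array([0.0]), np.array([[1.0]]), kappa=1.0)
```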

J. B. V. Reddy; Pradipta K. Dash; R. Samantaray; Alexander K. Moharana

2009-01-01

206

NASA Astrophysics Data System (ADS)

We propose a novel notch-filtering scheme for bit-rate transparent all-optical NRZ-to-PRZ format conversion. The scheme is based on a two-degree-of-freedom optimally designed fiber Bragg grating. It is shown that a notch filter optimized for any specific operating bit rate can be used to realize high-Q-factor format conversion over a wide bit rate range without requiring any tuning.

Cao, Hui; Shu, Xuewen; Atai, Javid; Zuo, Jun; Xiong, Bangyun; Shen, Fangcheng; Liu, Xin; Cheng, Jianqun

2015-02-01

207

NASA Astrophysics Data System (ADS)

The dynamic economic dispatch (DED) problem is an optimization problem whose objective is to determine the optimal combination of power outputs for all generating units over a certain period of time, minimizing the total fuel cost while satisfying dynamic operational constraints and load demand in each interval. Recently, the social foraging behavior of Escherichia coli bacteria has been explored to develop a novel algorithm for distributed optimization and control. The Bacterial Foraging Optimization Algorithm (BFOA) is currently gaining popularity in the research community for its effectiveness in solving certain difficult real-world optimization problems. This article presents a hybrid approach involving Particle Swarm Optimization (PSO) and BFO algorithms with varying chemotactic step size for solving the DED problem of generating units considering valve-point effects. The proposed hybrid algorithm has been extensively compared with methods reported in the literature. The new method is shown to be statistically significantly better on two test systems consisting of five and ten generating units.
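
The chemotactic move at the core of BFO, tumble to a random direction, then swim while the cost improves, can be sketched as follows. This shows only the chemotaxis step on a stand-in objective; the paper's hybrid additionally mixes in PSO velocity updates, omitted here:

```python
import numpy as np

def chemotaxis(cost, x, step, n_steps=50, swim_len=4, seed=0):
    """Bacterial chemotaxis: repeated tumble (random unit direction) and
    swim (keep stepping while the cost keeps improving)."""
    rng = np.random.default_rng(seed)
    best = cost(x)
    for _ in range(n_steps):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)              # tumble: random unit direction
        for _ in range(swim_len):           # swim while improving
            trial = x + step * d
            c = cost(trial)
            if c < best:
                x, best = trial, c
            else:
                break
    return x, best

# Stand-in objective (sphere function), starting cost 13.0
x0 = np.array([3.0, -2.0])
x_best, c_best = chemotaxis(lambda v: np.sum(v**2), x0, step=0.1)
```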

Praveena, P.; Vaisakh, K.; Rama Mohana Rao, S.

208

Minimizing Total Communication Distance of a Time-Step Optimal Broadcast in Mesh Networks

the minimum TCD among all the possible TCDs from the same source node. An optimal TCD algorithm is the one that generates a minimum TCD among TCDs for all the possible source nodes, not just for a given source node

Wu, Jie

209

Time-Step Optimal Broadcasting in 3-D Meshes with Minimum Total Communication Distance

that generates a minimum TCD among all the possible TCDs from the same source node. An optimal TCD algorithm is the one that generates a minimum TCD among TCDs for all the possible source nodes, not just for a given

Wu, Jie

210

Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images with the ACR-approved CT phantom; the radiation dose CT phantom was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e. Tungsten, Tantalum, Titanium, Nickel and Copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: Beam-hardening filters shape the energy spectrum, which reduces the dose by 27%. No noticeable changes in image low-contrast detectability were observed. Conclusion: The effective dose depends strongly on CTDIvol, which in turn depends strongly on the beam-hardening filters. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in CT pediatric shunt scanning protocols without compromising the diagnostic value of image quality.

Gill, K; Aldoohan, S; Collier, J

2014-06-01

211

Numerical simulation and optimization of multi-step batch membrane processes

A simple numerical technique is presented for batch membrane filtration design. The underlying model accounts for variable solute rejection coefficients, and it has a modular structure which makes it easy to describe a batch process involving different arrangements of the three typical basic steps: pre-concentration, dilution mode and final concentration. The experimental design required to set up the model is discussed.

Z. Kovács; M. Discacciati; W. Samhaber

2008-01-01

212

NASA Astrophysics Data System (ADS)

A numerical method for simulating turbulent flow in the flow passage of a centrifugal pump working stage, using periodicity conditions, has been formulated. The proposed method allows the characteristic indices of one pump stage to be calculated at a lower computational cost. The calculated pump characteristics have been compared with pilot data.

Boldyrev, S. V.; Boldyrev, A. V.

2014-12-01

213

We describe two enhancements of the planar bilayer recording method which enable low-noise recordings of single-channel currents activated by voltage steps in planar bilayers formed on apertures in partitions separating two open chambers. First, we have refined a simple and effective procedure for making small bilayer apertures (25-80 µm diam) in plastic cups. These apertures combine the favorable properties of very thin edges, good mechanical strength, and low stray capacitance. In addition to enabling formation of small, low-capacitance bilayers, this aperture design also minimizes the access resistance to the bilayer, thereby improving the low-noise performance. Second, we have used a patch-clamp headstage modified to provide logic-controlled switching between a high-gain (50 GΩ) feedback resistor for high-resolution recording and a low-gain (50 MΩ) feedback resistor for rapid charging of the bilayer capacitance. The gain is switched from high to low before a voltage step and then back to high gain 25 microseconds after the step. With digital subtraction of the residual currents produced by the gain switching and electrostrictive changes in bilayer capacitance, we can achieve a steady current baseline within 1 ms after the voltage step. These enhancements broaden the range of experimental applications for the planar bilayer method by combining the high resolution previously attained only with small bilayers formed on pipette tips with the flexibility of experimental design possible with planar bilayers in open chambers. We illustrate application of these methods with recordings of the voltage-step activation of a voltage-gated potassium channel. PMID:1698470

Wonderlin, W F; Finkel, A; French, R J

1990-01-01

214

Constrained segment shapes in direct-aperture optimization for step-and-shoot IMRT

Previous studies have shown that, by optimizing segment shapes and weights directly, without explicitly optimizing fluence profiles, effective IMRT plans can be generated with fewer segments. This study proposes a method of direct-aperture optimization with aperture shape constraints, which is designed to provide segmental IMRT plans using a minimum of simple, regular segments. The method uses a cubic function to create smoothly curving multileaf collimator shapes. Constraints on segment dimension and equivalent square are applied, and each segment can be constrained to lie within the previous one, for easy generation of fluence profiles with a single maximum. To simply optimize the segment shapes and reject any shapes which violate the constraints is too inefficient, so an innovative method of feedback optimization is used to ensure in advance that viable aperture shapes are generated. The algorithm is demonstrated using a simple cylindrical phantom consisting of a hemi-annular planning target volume and a central cylindrical organ-at-risk. A simple IMRT rectum case is presented, where segments are used to replace a wedge. More complex cases of prostate and seminal vesicles and prostate and pelvic nodes are also shown. The algorithm produces effective plans in each case with three to five segments per beam. For the simple plans, the constraint that each segment should be contained within the previous one adds additional simplicity to the plan, for a small reduction in plan quality. This study confirms that direct-aperture optimization gives efficient solutions to the segmental IMRT inverse problem and provides a method for generating simple apertures. By using such a method, the workload of IMRT verification may be reduced and simplified, as verification of fluence profiles from individual beams may be eliminated.

Bedford, James L.; Webb, Steve [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT (United Kingdom)

2006-04-15

215

Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans for scheduling power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is inherently nonlinear; achieving an accurate solution to such a complex problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

Jihong, Qu

2014-01-01

216

Optimization of 3D laser scanning speed by use of combined variable step

NASA Astrophysics Data System (ADS)

The slow operation of a 3D TVS caused by a constant small scanning step is resolved in the presented research by applying a combined scanning step for the fast search of n obstacles in unknown surroundings. Such a problem is of keynote importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. So, the related technical task of scanning with variable speed and precise digital mapping only for selected spatial sectors is under consideration. A wide range of simulations in MATLAB 7.12.0 of several variants of hypothetical scenes with a variable number n of obstacles in each scene (including variation of shapes and sizes), scanned with an incremented angle value (0.6° up to 15°), is provided. The aim of the simulation was to detect which angular values of the interval still permit getting the maximal information about obstacles without undesired time losses. Three such local maxima were obtained in the simulations and then refined by application of a neural network formalism (Levenberg-Marquardt algorithm). The obtained results were in turn applied to a MET (Micro-Electro-mechanical Transmission) design for practical realization of variable combined step scanning on an experimental prototype of our previously known laser scanner.

Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.

2014-03-01

217

Process schemes for single-step syngas-to-dimethyl ether (DME) were developed in two stages: (1) the performance of the syngas-to-DME reactor was optimized with respect to the feed gas composition and (2) the optimal reactor feed gas system was integrated with synthesis gas generators. It was shown that the reactor performance is very sensitive to the H₂:CO ratio in the feed gas. The optimal DME productivity and best material utilization were obtained with a feed gas containing 50% hydrogen and 50% carbon monoxide. In the second phase the syngas generation units considered were the CO₂-methane reformer, steam-methane reformer, methane partial oxidation, and coal gasifier. The integration adjusts the H₂:CO ratio in natural gas-derived syngas to fit the optimal DME reactor operation and minimizes CO₂ emissions and material loss. The technical feasibility of these schemes was demonstrated by simulations using realistic reactor models, kinetics, and thermodynamics under commercially relevant conditions.

Peng, X.D.; Wang, A.W.; Toseland, B.A.; Tijm, P.J.A.

1999-11-01

218

Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively. PMID:23296038
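The two-step idea, a global search supplying the initial guess for a local conjugate-gradient refinement, can be sketched on a toy multimodal function. In this sketch, plain random sampling stands in for the genetic algorithm and SciPy's CG minimizer plays the second step; the Rastrigin function and all settings are illustrative, not the paper's DOT forward model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def grad(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

# Step 1: crude population-based global search (stand-in for the GA stage).
pop = rng.uniform(-5.12, 5.12, size=(500, 2))
x0 = min(pop, key=rastrigin)

# Step 2: conjugate-gradient refinement from the global stage's best candidate.
# Started blind, CG would stall in a poor local basin, which mirrors the
# abstract's point about the CG method alone.
res = minimize(rastrigin, x0, jac=grad, method="CG")
```

The refinement can only improve on its starting point and drives the gradient toward zero, so the quality of the final answer hinges on the global stage placing `x0` in a good basin.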

Tavakoli, Behnoosh; Zhu, Quing

2013-01-01

219

Metrics for comparing plasma mass filters

NASA Astrophysics Data System (ADS)

High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

Fetterman, Abraham J.; Fisch, Nathaniel J.

2011-10-01

220

Metrics For Comparing Plasma Mass Filters

High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

Abraham J. Fetterman and Nathaniel J. Fisch

2012-08-15

221

Metrics for comparing plasma mass filters

High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

Fetterman, Abraham J.; Fisch, Nathaniel J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08540 (United States)

2011-10-15

222

This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
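For reference, a brute-force CPU version of the kind of 3D bilateral filter being ported to the GPU might look as follows. This is a minimal NumPy sketch with illustrative default parameters, not the report's optimized kernel; each output voxel is a weighted average of its neighborhood, where spatial closeness and intensity similarity each contribute a Gaussian weight, which is what makes the smoothing edge-preserving:

```python
import numpy as np

def bilateral_3d(vol, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Brute-force 3D bilateral filter: Gaussian in space times Gaussian in intensity."""
    pad = np.pad(vol, radius, mode="edge")
    out = np.empty_like(vol)
    offsets = [(i, j, k)
               for i in range(-radius, radius + 1)
               for j in range(-radius, radius + 1)
               for k in range(-radius, radius + 1)]
    # The spatial weights depend only on the offset, so precompute them once.
    spatial = np.array([np.exp(-(i*i + j*j + k*k) / (2 * sigma_s**2))
                        for i, j, k in offsets])
    Z, Y, X = vol.shape
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                centre = vol[z, y, x]
                nbr = np.array([pad[z + radius + i, y + radius + j, x + radius + k]
                                for i, j, k in offsets])
                # Range weight: neighbors with very different intensity barely count,
                # so sharp boundaries survive the smoothing.
                w = spatial * np.exp(-(nbr - centre) ** 2 / (2 * sigma_r**2))
                out[z, y, x] = np.sum(w * nbr) / np.sum(w)
    return out
```

The triple voxel loop is what the report maps onto GPU thread blocks; the per-voxel arithmetic is identical.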

Bethel, E. Wes

2012-01-06

223

Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

NASA Technical Reports Server (NTRS)

Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with length of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads.
Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem to statically schedule the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
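The thread-decomposition view can be sketched in a few lines: compute each retained output as its own short convolution "thread", and check it against the naive full-FIR-then-downsample filter. This NumPy sketch covers only the arithmetic equivalence; the innovation's actual contribution, the static multiplier schedule on an FPGA, is not modeled here:

```python
import numpy as np

def naive_decimate(x, h, M):
    """Full FIR then downsample: computes M times more outputs than are kept."""
    return np.convolve(x, h)[::M]

def thread_decimate(x, h, M):
    """Thread-decomposed decimator: one short convolution per retained output,
    so no work is discarded (after the TD-MRFIR idea in the abstract)."""
    N, L = len(h), len(x)
    n_out = (L + N - 1 + M - 1) // M          # ceil((L + N - 1) / M) retained outputs
    y = np.zeros(n_out)
    for m in range(n_out):                    # thread m computes y[m] = sum_j h[j] x[mM - j]
        t = m * M
        j0, j1 = max(0, t - L + 1), min(N - 1, t)
        for j in range(j0, j1 + 1):
            y[m] += h[j] * x[t - j]
    return y
```

Each loop iteration over `m` corresponds to one thread in the Thread Decomposition Diagram; scheduling those inner multiply-accumulates onto a fixed set of hardware multipliers is the static scheduling problem the innovation solves.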

Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

2011-01-01

224

Independent component analysis (ICA) aims at decomposing an observed random vector into statistically independent variables. Deflation-based implementations, such as the popular one-unit FastICA algorithm and its variants, extract the independent components one after another. A novel method for deflationary ICA, referred to as RobustICA, is put forward in this paper. This simple technique consists of performing exact line search optimization
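A one-unit deflationary scheme of the kind RobustICA competes with can be sketched as follows. This is the standard FastICA-style fixed-point iteration with Gram-Schmidt deflation, not RobustICA's exact line search; the nonlinearity, iteration budget and tolerance are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def whiten(X):
    """Symmetric (zero-phase) whitening of the observed mixtures."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    return (E / np.sqrt(d)) @ E.T @ Xc

def fastica_deflation(X, n_components):
    """One-unit FastICA with deflationary orthogonalization (tanh nonlinearity)."""
    Z = whiten(X)
    n = Z.shape[0]
    W = np.zeros((n_components, n))
    for c in range(n_components):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(200):
            wx = w @ Z
            g = np.tanh(wx)
            w_new = (Z * g).mean(axis=1) - (1 - g**2).mean() * w
            w_new -= W[:c].T @ (W[:c] @ w_new)   # deflate: stay orthogonal to found rows
            w_new /= np.linalg.norm(w_new)
            done = abs(abs(w_new @ w) - 1) < 1e-10
            w = w_new
            if done:
                break
        W[c] = w
    return W @ Z
```

Extracting components one after another in this way is exactly the deflationary setting the abstract refers to; RobustICA replaces the fixed-point update with an exact line search of the kurtosis contrast.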

Vicente Zarzoso; Pierre Comon

2010-01-01

225

NASA Astrophysics Data System (ADS)

The key outcome of this work is to propose and validate a fast and robust correlation scheme for face recognition applications. The robustness of this fast correlator is ensured by an adapted pre-processing step for the target image allowing us to minimize the impact of its (possibly noisy and varying) amplitude spectrum information. A segmented composite filter is optimized, at the very outset of its fabrication, by weighting each reference with a specific coefficient which is proportional to its occurrence probability. A hierarchical classification procedure (called a two-level decision tree learning approach) is also used in order to speed up the recognition procedure. Experimental results validating our approach are obtained with a prototype based on a GPU implementation of the all-numerical correlator using the NVIDIA GeForce 8400GS processor and test samples from the Pointing Head Pose Image Database (PHPID); e.g., true recognition rates larger than 85% with a run time lower than 120 ms have been obtained using fixed images from the PHPID, and true recognition rates larger than 77% using a real video sequence at 2 frames per second when the database contains 100 persons. Besides, it has been shown experimentally that the use of a more recent GPU processor such as the NVIDIA Quadro FX 770M can perform recognition at 4 frames per second with the same database size.

Ouerhani, Y.; Jridi, M.; Alfalou, A.; Brosseau, C.

2013-02-01

226

In this study, the methods for extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate buffer saline, Tris-HCl and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH and imidazole concentration in elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5M NaCl as compared to Tris-HCl and phosphate buffer saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7 and the use of 300mM imidazole as elution buffer, which gave the overall yield of 80.3% and purity of 97.5%. IMAC with nickel-NTA was successfully used as a single step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715

He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B

2015-08-15

227

NASA Technical Reports Server (NTRS)

Three algorithms are developed for designing finite impulse response digital filters to be used for pulse shaping and channel equalization. The first is the Minimax algorithm which uses linear programming to design a frequency-sampling filter with a pulse shape that approximates the specification in a minimax sense. Design examples are included which accurately approximate a specified impulse response with a maximum error of 0.03 using only six resonators. The second algorithm is an extension of the Minimax algorithm to design preset equalizers for channels with known impulse responses. Both transversal and frequency-sampling equalizer structures are designed to produce a minimax approximation of a specified channel output waveform. Examples of these designs are compared as to the accuracy of the approximation, the resultant intersymbol interference (ISI), and the required transmitted energy. While the transversal designs are slightly more accurate, the frequency-sampling designs using six resonators have smaller ISI and energy values.
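The minimax-by-linear-programming idea behind the first algorithm can be sketched with SciPy: introduce a bound t on the approximation error and minimize t subject to the response error at every grid frequency lying within ±t. The example below designs a generic Type-I linear-phase lowpass with made-up band edges and order, not the paper's frequency-sampling structure:

```python
import numpy as np
from scipy.optimize import linprog

# Type-I linear-phase FIR: amplitude A(w) = sum_k a_k cos(k w); minimax fit by LP.
K = 10                                             # half-order (a 21-tap filter)
w = np.linspace(0, np.pi, 200)
keep = (w <= 0.4 * np.pi) | (w >= 0.6 * np.pi)     # exclude the transition band
wg = w[keep]
D = (wg <= 0.4 * np.pi).astype(float)              # desired: 1 in passband, 0 in stopband
C = np.cos(np.outer(wg, np.arange(K + 1)))         # C[i, k] = cos(k * w_i)

# Variables z = [a_0 .. a_K, t]; minimize t subject to |C a - D| <= t, i.e.
#   C a - t <= D   and   -C a - t <= -D.
n = K + 2
cvec = np.zeros(n)
cvec[-1] = 1.0
A_ub = np.vstack([np.hstack([C, -np.ones((len(wg), 1))]),
                  np.hstack([-C, -np.ones((len(wg), 1))])])
b_ub = np.concatenate([D, -D])
res = linprog(cvec, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n, method="highs")
a, t = res.x[:-1], res.x[-1]                        # t is the achieved minimax error
```

The symmetric impulse response follows as h[K] = a_0 and h[K ± k] = a_k / 2; the same LP template accommodates the paper's pulse-shape and equalizer specifications by changing C and D.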

Houts, R. C.; Vaughn, G. L.

1974-01-01

228

Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

NASA Technical Reports Server (NTRS)

A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
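The steady-state error covariance that such a tuner-selection routine evaluates comes from the discrete Riccati recursion. For a scalar toy system (all numbers illustrative, unrelated to the engine model) the recursion can be iterated directly to its fixed point:

```python
import numpy as np

# Scalar system x_{k+1} = a x_k + w,  y_k = h x_k + v, with process noise
# variance q and measurement noise variance r. Iterating the Riccati
# recursion gives the steady-state prediction covariance P, the theoretical
# mean-squared estimation error at the steady-state operating condition.
a, h, q, r = 0.95, 1.0, 0.1, 0.5

P = 1.0
for _ in range(500):
    K = P * h / (h * P * h + r)     # Kalman gain
    P_post = (1 - K * h) * P        # covariance after the measurement update
    P = a * P_post * a + q          # covariance after the time update
```

At the fixed point P satisfies the scalar algebraic Riccati equation P² + (r − a²r − q)P − qr = 0; in the multivariable engine case, the tuner search varies the model tuning parameters and re-solves this equation to minimize the predicted error in the parameters of interest.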

Simon, Donald L.; Garg, Sanjay

2010-01-01

229

Optimized design of four-zone phase pupil filter for nanoscale phase transition optical lithography

NASA Astrophysics Data System (ADS)

The present paper describes a method for decreasing the pit size in optical lithography by combining a four-zone annular binary phase filter with a phase transition material. The binary phase filter was designed by vector diffraction theory for linearly polarized light focused by a high-numerical-aperture objective lens (NA = 0.95); the figures of merit produced by this filter are as follows: compared with the diffraction-limited spot, the Strehl ratio S is 0.254, the spot size in the short-axis direction is reduced to 77.3%, and the depth of focus is elongated to 317% for the super-resolved spot. Then a phase transition material is placed in the focal plane of the objective lens; according to the threshold effect of the material, the groove linewidth and pit size can decrease to about 0.2λ, which is about 90 nm at the wavelength of 405 nm. Therefore, nanoscale phase transition optical lithography is realized, and the capacity and density of optical memory devices can be increased to 2-3 times that of Blu-ray discs.

Zha, Yikun; Wei, Jingsong; Gan, Fuxi

230

A one-step, recovery-enrichment broth, optimized Penn State University (oPSU) broth, was developed to consistently detect low levels of injured and uninjured Listeria monocytogenes cells in ready-to-eat foods. The oPSU broth contains special selective agents that inhibit growth of background flora without inhibiting recovery of injured Listeria cells. After recovery in the anaerobic section of oPSU broth, Listeria cells migrated to the surface, forming a black zone. This migration separated viable from nonviable cells and the food matrix, thereby reducing inhibitors that prevent detection by molecular methods. The high Listeria-to-background ratio in the black zone resulted in consistent detection of low levels of L. monocytogenes in pasteurized foods by both cultural and molecular methods, and greatly reduced both false-negative and false-positive results. oPSU broth does not require transfer to a secondary enrichment broth, making it less laborious and less subject to external contamination than 2-step enrichment protocols. Addition of 150mM D-serine prevented germination of Bacillus spores, but not the growth of vegetative cells. Replacement of D-serine with 12 mg/L acriflavin inhibited growth of vegetative cells of Bacillus spp. without inhibiting recovery of injured Listeria cells. oPSU broth may allow consistent detection of low levels of injured and uninjured cells of L. monocytogenes in pasteurized foods containing various background microflora. PMID:11990038

Knabel, Stephen J

2002-01-01

231

Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The implementation of the developed framework is used with a hypothetical project and tested for its robustness in updating the assumed initial project dynamics model and yielding the optimal control...

Bondugula, Srikant

2010-07-14

232

NASA Astrophysics Data System (ADS)

There is little information in scientific literature regarding the modifications induced by check dam systems in flow regimes within restored gully reaches, despite it being a crucial issue for the design of gully restoration measures. Here, we develop a conceptual model to classify flow regimes in straight rectangular channels for initial and dam-filling conditions as well as a method of estimating efficiency in order to provide design guidelines. The model integrates several previous mathematical approaches for assessing the main processes involved (hydraulic jump, impact flow, gradually varied flows). Ten main classifications of flow regimes were identified, producing similar results when compared with the IBER model. An interval for optimal energy dissipation (ODI) was observed when the steepness factor c was plotted against the design number (DN, ratio between the height and the product of slope and critical depth). The ODI was characterized by maximum energy dissipation and total influence conditions. Our findings support the hypothesis of a maximum flow resistance principle valid for a range of spacing rather than for a unique configuration. A value of c = 1 and DN ~ 100 was found to economically meet the ODI conditions throughout the different sedimentation stages of the structure. When our model was applied using the same parameters to the range typical of step-pool systems, the predicted results fell within a similar region to that observed in field experiments. The conceptual model helps to explain the spacing frequency distribution as well as the often-cited trend to lower c for increasing slopes in step-pool systems. This reinforces the hypothesis of a close link between stable configurations of step-pool units and man-made interventions through check dams.

Castillo, C.; Pérez, R.; Gómez, J. A.

2014-05-01

233

Topology optimization of dielectric substrates for filters and antennas using SIMP

Summary: In this paper a novel design procedure based on the integration of full-wave Finite Element Analysis (FEA) and a topology design method employing Sequential Linear Programming (SLP) is introduced. The employed design method is the Solid Isotropic Material with Penalization (SIMP) technique formulated as a general non-linear optimization problem. SLP is used to solve the optimization problem with the

G. Kiziltas; N. Kikuchi; J. L. Volakis; J. Halloran

2004-01-01

234

Background In current practice, patients with chronic pancreatitis undergo surgical intervention in a late stage of the disease, when conservative treatment and endoscopic interventions have failed. Recent evidence suggests that surgical intervention early on in the disease benefits patients in terms of better pain control and preservation of pancreatic function. Therefore, we designed a randomized controlled trial to evaluate the benefits, risks and costs of early surgical intervention compared to the current stepwise practice for chronic pancreatitis. Methods/design The ESCAPE trial is a randomized controlled, parallel, superiority multicenter trial. Patients with chronic pancreatitis, a dilated pancreatic duct (≥ 5 mm) and moderate pain and/or frequent flare-ups will be registered and followed monthly as potential candidates for the trial. When a registered patient meets the randomization criteria (i.e. need for opioid analgesics) the patient will be randomized to either early surgical intervention (group A) or optimal current step-up practice (group B). An expert panel of chronic pancreatitis specialists will oversee the assessment of eligibility and ensure that allocation to either treatment arm is possible. Patients in group A will undergo pancreaticojejunostomy or a Frey-procedure in case of an enlarged pancreatic head (≥ 4 cm). Patients in group B will undergo a step-up practice of optimal medical treatment, if needed followed by endoscopic interventions, and if needed followed by surgery, according to predefined criteria. Primary outcome is pain assessed with the Izbicki pain score during a follow-up of 18 months. Secondary outcomes include complications, mortality, total direct and indirect costs, quality of life, pancreatic insufficiency, alternative pain scales, length of hospital admission, number of interventions and pancreatitis flare-ups.
For the sample size calculation we defined a minimal clinically relevant difference in the primary endpoint as a difference of at least 15 points on the Izbicki pain score during follow-up. To detect this difference a total of 88 patients will be randomized (alpha 0.05, power 90%, drop-out 10%). Discussion The ESCAPE trial will investigate whether early surgery in chronic pancreatitis is beneficial in terms of pain relief, pancreatic function and quality of life, compared with current step-up practice. Trial registration ISRCTN: ISRCTN45877994 PMID:23506415
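The stated power calculation (difference of 15 points, alpha 0.05, power 90%, 10% drop-out) can be reproduced with the standard normal-approximation formula for comparing two means. Note that the pain-score standard deviation used below (21 points) is an assumption for illustration only; the abstract does not report it.

```python
import math

def two_group_sample_size(delta, sd, dropout=0.10):
    """Per-group n for a two-sample comparison of means (normal approximation,
    two-sided alpha = 0.05, power = 0.90), total inflated for drop-out."""
    z_a = 1.959964  # z for two-sided alpha = 0.05
    z_b = 1.281552  # z for power = 0.90
    n_group = math.ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)
    n_total = math.ceil(2 * n_group / (1 - dropout))
    return n_group, n_total

# 15-point Izbicki-score difference; sd = 21 is an assumed value.
n_group, n_total = two_group_sample_size(delta=15, sd=21)
```

With this assumed standard deviation the formula yields a total in the same range as the 88 patients reported in the trial design.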

2013-01-01

235

Optimization of Input and Output Filters in Matrix Converter Drive System

NASA Astrophysics Data System (ADS)

This paper presents an AC-AC power converter integrated with techniques that provide environmental harmony. The voltage source PWM inverter has been established as the major motor drive equipment. However, it is associated with issues pertaining to PWM switching as well as issues related to the input harmonics caused by capacitor input type rectification. Hence, there is a need for a converter that addresses these problems and provides an environmentally harmonious solution. The matrix converter has a topology that inherently exhibits sinusoidal input current waveforms and less stressful output voltage waveforms. Combining the matrix converter with certain filter topologies is shown to provide an environmentally harmonious solution.

Yamada, Kenji; Higuchi, Tsuyoshi; Hara, Hidenori; Yamamoto, Eiji; Kume, Tsuneo; Swamy, Mahesh M.

236

NASA Technical Reports Server (NTRS)

Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

1991-01-01

237

NASA Astrophysics Data System (ADS)

We present a long-range high spatial resolution optical frequency-domain reflectometry (OFDR) based on an optimized deskew filter method. In the proposed method, the frequency tuning nonlinear phase obtained from an auxiliary interferometer is used to compensate the nonlinear phase of the beating signals generated from a main OFDR interferometer using a deskew filter. The method can be applied to the entire spatial domain of the OFDR signals at once with high computational efficiency. In addition, we apply higher orders of Taylor expansion and cepstrum analysis to improve the estimation accuracy of the nonlinear phase. We experimentally achieve a measurement range of 80 km and spatial resolutions of 20 cm and 80 cm at distances of 10 km and 80 km, respectively, an approximately 187-fold enhancement compared with the same OFDR trace without nonlinearity compensation. The improved performance of the OFDR, with its high spatial resolution, long measurement range and short processing time, will lead to practical applications in real-time monitoring and measurement of optical fiber communication and sensing systems.

Ding, Zhenyang; Du, Yang; Liu, Tiegen; Yao, X. Steve; Feng, Bowen; Liu, Kun; Jiang, Junfeng

2014-11-01

238

Automatic estimation of optimal autoregressive filters for the analysis of volcanic seismic activity

NASA Astrophysics Data System (ADS)

Long-period (LP) events observed on volcanoes provide important information for volcano monitoring and for studying the physical processes in magmatic and hydrothermal systems. Of all the methods used to analyse this kind of seismicity, autoregressive (AR) modelling is particularly valuable, as it produces precise estimations of the frequencies and quality factors of the spectral peaks that are generated by resonance effects at seismic sources and, via deconvolution of the observed record, it allows the excitation function of the resonator to be determined. However, with AR modelling methods it is difficult to determine the order of the AR filter that will yield the best model of the signal. This note presents an algorithm to overcome this problem, together with some examples of applications. The approach described uses the kurtosis (fourth order cumulant) of the deconvolved signal to provide an objective criterion for selecting the filter order. This approach allows the partial automation of the AR analysis and thus provides interesting possibilities for improving volcano monitoring methods.
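The pipeline the note describes — fit an AR filter, deconvolve the record to recover the excitation, and select the filter order by the kurtosis (fourth-order cumulant) of the deconvolved signal — can be sketched with plain Yule-Walker estimation. The function names and the simple kurtosis-maximizing selection rule below are illustrative, not the author's exact algorithm.

```python
import numpy as np

def yule_walker(x, p):
    """AR(p) predictor coefficients from biased autocorrelation estimates."""
    x = x - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def ar_residual(x, a):
    """One-step prediction error: the record deconvolved by the AR model."""
    p = len(a)
    X = np.column_stack([x[p - 1 - k: len(x) - 1 - k] for k in range(p)])
    return x[p:] - X @ a

def excess_kurtosis(e):
    e = e - e.mean()
    m2, m4 = np.mean(e ** 2), np.mean(e ** 4)
    return m4 / m2 ** 2 - 3.0

def select_order(x, max_p=10):
    """Pick the order whose residual is most non-Gaussian (highest kurtosis),
    i.e. closest to an impulsive excitation function."""
    ks = [excess_kurtosis(ar_residual(x, yule_walker(x, p)))
          for p in range(1, max_p + 1)]
    return 1 + int(np.argmax(ks))
```

For a resonant LP-type signal, the residual at the correct order collapses toward the sparse excitation, which is what the kurtosis criterion rewards.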

Lesage, P.

2008-04-01

239

A compact composite low-pass filter, designed by the image parameter method and semilumped component approach, will be described and results for cutoff frequency ranging from C- to V-band will be presented. This composite design combines four filter sections and the presence of a strong attenuation pole near the cutoff frequency provides an extremely sharp attenuation response, while ensuring good matching

Stephane Pinel; Ramanan Bairavasubramanian; Joy Laskar; John Papapolymerou

2005-01-01

240

The analysis of different wavelets, including novel wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. This way we are able to determine with what type of filters the wavelet works better in compression of such images. Key properties: frequency response, approximation order, projection cosine, and Riesz bounds were determined and compared for the classic wavelet W9/7 used in standard JPEG2000, Daubechies8, Symlet8, as well as for the complex Kravchenko-Rvachev wavelets ψ(t) based on the atomic functions up(t), fup2(t), and eup(t). The comparison results show significantly better performance of the novel wavelets, which is justified by experiments and in the study of key properties. PMID:21431590

Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres

2011-01-01

241

Shuttle filter study. Volume 1: Characterization and optimization of filtration devices

NASA Technical Reports Server (NTRS)

A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plan for space shuttle fluid systems.

1974-01-01

242

A central composite design (CCD) combined with response surface methodology (RSM) was employed for maximizing bioleaching yields of metals (Al, Mo, Ni, and V) from as-received spent refinery catalyst using Acidithiobacillus thiooxidans. Three independent variables, namely initial pH, sulfur concentration, and pulp density were investigated. The pH was found to be the most influential parameter with leaching yields of metals varying inversely with pH. Analysis of variance (ANOVA) of the quadratic model indicated that the predicted values were in good agreement with experimental data. Under optimized conditions of 1.0% pulp density, 1.5% sulfur and pH 1.5, about 93% Ni, 44% Al, 34% Mo, and 94% V was leached from the spent refinery catalyst. Among all the metals, V had the highest maximum rate of leaching (Vmax) according to the Michaelis-Menten equation. The results of the study suggested that two-step bioleaching is efficient in leaching of metals from spent refinery catalyst. Moreover, the process can be conducted with as-received spent refinery catalyst, thus making the process cost effective for large-scale applications. PMID:25320861
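The RSM step above amounts to fitting a full quadratic model over the design points and solving for the stationary point of the fitted surface. A minimal two-factor sketch (function names are hypothetical, and the coefficients in the usage example are invented for illustration):

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones(len(y)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [b0, b1, b2, b11, b22, b12]

def stationary_point(coef):
    """Solve grad = 0 for the fitted surface (candidate optimum of the response)."""
    b0, b1, b2, b11, b22, b12 = coef
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(H, [-b1, -b2])
```

In an actual CCD analysis the factors (pH, sulfur, pulp density) would first be coded to the usual -1/0/+1 levels plus axial and center points, and the fit extended to three factors.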

Srichandan, Haragobinda; Pathak, Ashish; Kim, Dong Jin; Lee, Seoung-Won

2014-01-01

243

Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors. PMID:11515939
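A minimal sketch of the estimation half of this approach: a linear Kalman filter that models friction torque as a random-walk state and infers it from motion measurements. The paper uses a nonlinear observer driven by shaft position; here, speed measurements and all gains, noise levels, and parameter values are simplifying assumptions to keep the example short.

```python
import numpy as np

def friction_kf(omega_meas, u, J, dt, q_tau=1e-5, r=1e-6):
    """Linear KF for state [omega, tau_f] with dynamics
    omega' = (u - tau_f)/J and tau_f modeled as a random walk."""
    F = np.array([[1.0, -dt / J], [0.0, 1.0]])
    B = np.array([dt / J, 0.0])
    H = np.array([[1.0, 0.0]])          # only omega is measured
    Q = np.diag([1e-10, q_tau])         # process noise (illustrative values)
    R = np.array([[r]])                 # measurement noise
    x = np.zeros(2)
    P = np.eye(2)
    tau_hist = []
    for z, uk in zip(omega_meas, u):
        # predict
        x = F @ x + B * uk
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        tau_hist.append(x[1])
    return np.array(tau_hist)
```

The resulting friction-torque estimate is the signal on which statistical or hypothesis-test diagnostics would then be applied.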

Ray, L R; Townsend, J R; Ramasubramanian, A

2001-01-01

244

To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
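Optimal cut-off selection of this kind is typically done by sweeping candidate thresholds along the ROC curve; one common rule (used here for illustration — the study also weighs cost per case, which this sketch omits) is to maximize Youden's J = sensitivity + specificity - 1.

```python
import numpy as np

def youden_cutoff(cases, controls):
    """Pick the threshold maximizing sensitivity + specificity - 1
    over all observed values; 'cases' test positive at value >= threshold."""
    thresholds = np.unique(np.concatenate([cases, controls]))
    best_t, best_j = None, -1.0
    for t in thresholds:
        sens = np.mean(cases >= t)
        spec = np.mean(controls < t)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Applied to FPG values (mmol/l) of diagnosed versus non-diagnosed participants, this yields a screening cut-off; the study's two-step strategy then confirms screen-positives with an oral glucose tolerance test.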

Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

2015-01-01

245

To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585

Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

2015-01-01

246

Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

2010-01-01

247

Purpose: This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating it to the acquisition of electron density information, which is essential for treatment planning in radiotherapy. Methods: For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. Results: The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, ''Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method,'' Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. Conclusions: The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.

Saito, Masatoshi [Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Niigata University, Niigata 951-8518 (Japan)

2010-08-15

248

Nonlinear Attitude Filtering Methods

NASA Technical Reports Server (NTRS)

This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
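Of the sigma-point filters surveyed above, the common building block is the generation of a deterministic point set whose weighted sample mean and covariance reproduce the state distribution. A sketch of the scaled sigma-point construction (parameter values are the conventional defaults, not specific to the survey):

```python
import numpy as np

def sigma_points(mu, P, alpha=0.1, beta=2.0, kappa=0.0):
    """Scaled sigma points and mean/covariance weights for N(mu, P).
    Assumes n + lambda > 0 so the Cholesky factor exists."""
    n = len(mu)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # columns are the spread vectors
    pts = np.vstack([mu, mu + S.T, mu - S.T])    # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + 1 - alpha ** 2 + beta
    return pts, wm, wc
```

Propagating these points through the nonlinear attitude dynamics and measurement models, then recombining with the weights, replaces the Jacobian linearization of the extended Kalman filter.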

Markley, F. Landis; Crassidis, John L.; Cheng, Yang

2005-01-01

249

Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures.

Sipkin, S.A.

1987-01-01

250

NASA Astrophysics Data System (ADS)

The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling wave formulation of lumped electrical elements, is successfully implemented in the study of dynamic responses of symmetrically laminated composite plates based on the first order shear deformation theory. The philosophy applied for the first time in this laminate mechanics relies on integration of certain principles involving modeling and simulation, circuit theory, and MD digital signal processing to provide a great variety of outstanding features. In particular, the conservation of passivity gives rise to a nonlinear programming problem (NLP) governing the numerical stability of an MD discrete system. Adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly achieving solution spaces of NLP models, numerical stability of the MDWDF network is ensured at all times by satisfaction of the Courant-Friedrichs-Lewy stability criterion with the least restriction. In particular, the optimum of the NLP has led to the optimality of the network in terms of effectively and accurately predicting the desired fundamental frequency, and thus gives an insight into the robustness of the network by looking at the distribution of system energies. To further explore the application of the optimum network, more numerical examples are engaged in efforts to achieve a qualitative understanding of the behavior of the laminar system. These are carried out by investigating various effects based on different stacking sequences, stiffness and span-to-thickness ratios, mode shapes and boundary conditions. Results are scrupulously validated by cross referencing with early published works, which show that the present method is in excellent agreement with other numerical and analytical methods.

Tseng, Chien-Hsun

2015-02-01

251

Filter and method of fabricating

A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.

Janney, Mark A.

2006-02-14

252

NASA Technical Reports Server (NTRS)

This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.

Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

2013-01-01

253

NASA Astrophysics Data System (ADS)

An adaptive GA scheme is adopted for the optimal morphological filter design problem. The adaptive crossover and mutation rates, which let the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into 3 chains which are then decoded respectively into one structuring element with a size smaller than 5 by 5 and two concatenated morphological filter operators. The fitness function in the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure is used to replace the simple roulette wheel program in order to accelerate the convergence. The final convergence of our algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences, the obtained MSE values are smaller than those using corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements take approximately the same shapes and orientations as those of the image textons.

Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

1996-03-01

254

The extraction of biopharmaceutical proteins from intact leaves involves the release of abundant particulate contaminants that must be removed economically from the process stream before chromatography, for example, using disposable filters that comply with good manufacturing practice. We therefore scaled down an existing 200-kg process for the purification of two target proteins from tobacco leaves (the monoclonal antibody 2G12 and the fluorescent protein DsRed, as monitored by surface plasmon resonance spectroscopy and fluorescence imaging, respectively) and screened different materials on the 2-kg scale to reduce the number of depth filtration steps from three to one. We assessed filter cost and capacity, filtrate turbidity, and protein recovery when the filter materials were challenged with extracts from different tobacco varieties and related species grown in soil or rockwool. PDF4 was consistently the most suitable depth filter because it was the least expensive, it did not interact significantly with the target proteins, and it had the greatest overall capacity. The filter capacity was generally reduced when plants were grown in rockwool, but this substrate has a low bioburden, thus improving process safety. Our data concerning the clarification of plant extracts will help in the design of more cost-effective downstream processes and accelerate their development. PMID:24323869

Buyel, Johannes F; Fischer, Rainer

2014-03-01

255

Solar blind ultraviolet communication systems can provide short to medium range non line-of-sight and line-of-sight links which are covert and insensitive to meteorological conditions. These unique properties endow solar blind ultraviolet communication systems with a growing range of applications. Optical filters are key components of these solar blind ultraviolet communication systems. Although filters can be designed in different forms, thin-film interference narrow-band filters

Guanliang Peng; Jiankun Yang; Honghui Jia; Shengli Chang; Juncai Yang

2007-01-01

256

A computer spreadsheet application has been developed for the optimization of step-gradient elution conditions as applied in coupled-column RPLC for online clean-up and separation in the analysis of pesticide residues. The procedure is based on the experimentally determined retention behaviour of the analytes as a function of mobile phase composition. Retention and peak volumes of the analytes eluting under isocratic

S. M. Gort; E. A. Hogendoorn; E. Dijkman; P. van Zoonen; R. Hoogerbrugge

1996-01-01

257

NASA Astrophysics Data System (ADS)

The use of polarized protons as neutron spin filter is an attractive alternative to the well established neutron polarization techniques, as the large, spin-dependent neutron scattering cross-section for protons is useful up to the sub-MeV region. Employing optically excited triplet states for the dynamic nuclear polarization (DNP) of the protons relieves the stringent requirements of classical DNP schemes, i.e. low temperatures and strong magnetic fields, making technically simpler systems with open geometries possible. Using triplet DNP a record polarization of 71% has been achieved in a pentacene doped naphthalene single crystal at a field of 0.36 T using a simple helium flow cryostat for cooling. Furthermore, by placing the polarized crystal in a neutron optics focus and de-focus scheme, the actual sample cross-section could be increased by a factor 35 corresponding to an effective spin filter cross-section of 18×18 mm².

Eichhorn, T. R.; Niketic, N.; van den Brandt, B.; Filges, U.; Panzner, T.; Rantsiou, E.; Wenckebach, W. Th.; Hautle, P.

2014-08-01

258

NASA Technical Reports Server (NTRS)

A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
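FIR design by linear programming, as in the paper, casts the frequency-response constraints as linear inequalities in the filter coefficients. A sketch for a linear-phase lowpass (amplitude A(w) = c0 + sum_k c_k cos(kw)): minimize the peak stopband gain t subject to a passband ripple bound. The band edges, grid density, and ripple bound are illustrative choices, not the paper's specifications.

```python
import numpy as np
from scipy.optimize import linprog

def lp_lowpass(n_cos=11, wp=0.2 * np.pi, ws=0.35 * np.pi,
               pass_rip=0.05, grid=64):
    """LP design of a linear-phase FIR lowpass amplitude response."""
    def rows(freqs):
        return np.array([[np.cos(k * w) for k in range(n_cos)] for w in freqs])

    Ap = rows(np.linspace(0.0, wp, grid))   # passband grid
    As = rows(np.linspace(ws, np.pi, grid)) # stopband grid
    # variables: [c_0 .. c_{n_cos-1}, t]; objective: minimize t
    c_obj = np.zeros(n_cos + 1)
    c_obj[-1] = 1.0
    A_ub = np.vstack([
        np.hstack([As, -np.ones((grid, 1))]),   #  A(w) <= t   (stopband)
        np.hstack([-As, -np.ones((grid, 1))]),  # -A(w) <= t
        np.hstack([Ap, np.zeros((grid, 1))]),   #  A(w) <= 1 + rip (passband)
        np.hstack([-Ap, np.zeros((grid, 1))]),  #  A(w) >= 1 - rip
    ])
    b_ub = np.concatenate([
        np.zeros(grid), np.zeros(grid),
        np.full(grid, 1 + pass_rip), np.full(grid, -(1 - pass_rip)),
    ])
    # coefficients unbounded, t nonnegative (linprog defaults are x >= 0)
    bounds = [(None, None)] * n_cos + [(0, None)]
    return linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
```

The symmetric impulse response is recovered from the cosine coefficients as h = [c_{n-1}/2, ..., c_1/2, c_0, c_1/2, ..., c_{n-1}/2]; equalizer design replaces the ideal response with the cascade of the channel's known response.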

Houts, R. C.; Burlage, D. W.

1972-01-01

259

Development of Golden Section Search Driven Particle Swarm Optimization and its Application

The particle swarm optimization (PSO), although it has been widely used in various fields, has a step-size problem, which deteriorates optimization performance. This problem is resolved using the golden section search (GSS) and the steepest descent method. We also design a filter that will improve optimization performance of the proposed algorithm. The effectiveness of the proposed algorithm, including for which
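The golden section search used to resolve the step-size problem is a standard derivative-free line search over a bracketing interval; a minimal sketch (in the paper it would be applied along the particle update or steepest-descent direction, which this fragment does not model):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimizer of a unimodal function f on [a, b] via golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the golden ratio while reusing one interior evaluation, which is what makes it an attractive inner loop for tuning a PSO step size.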

S. Oh; Y. Hori

2006-01-01

260

A filter family designed for use in quadrature mirror filter banks

This paper discusses a family of filters that have been designed for Quadrature Mirror Filter (QMF) Banks. These filters provide a significant improvement over conventional optimal equiripple and window designs when used in QMF banks. The performance criteria for these filters differ from those usually used for filter design in a way which makes the usual filter design techniques difficult

J. D. Johnston

1980-01-01

261

NASA Astrophysics Data System (ADS)

We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. 
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for "flip-flop" conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition.
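The core force-extrapolation idea can be sketched as below. This is a toy reduction of ASFE: the non-Eckart rotational transformation and best-pair selection described above are omitted, and the linear "force field" W, the dimensions, and the random data are illustrative assumptions.

```python
import numpy as np

# Expand the current conformation in a basis of stored past conformations by
# least squares, then apply the same coefficients to the stored (expensive)
# solvation forces computed at outer time steps.
rng = np.random.default_rng(0)
n_atoms, n_stored = 10, 8
X = rng.normal(size=(n_stored, 3 * n_atoms))   # coordinates at outer steps
W = rng.normal(size=(3 * n_atoms, 3 * n_atoms))
F = X @ W                                      # forces computed at outer steps

def extrapolate_force(x_now, X, F):
    """Fit x_now ~= sum_i c_i X_i in least squares; return sum_i c_i F_i."""
    c, *_ = np.linalg.lstsq(X.T, x_now, rcond=None)
    return c @ F

# For a force model linear in the coordinates, the extrapolation is exact
# whenever the current conformation lies in the span of the stored ones.
x_now = 0.3 * X[0] - 0.5 * X[3] + 0.2 * X[7]
f_approx = extrapolate_force(x_now, X, F)
```

Real solvation forces are nonlinear in the coordinates, which is why the method converges 3D-RISM-KH exactly at outer steps and only extrapolates in between.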

Omelyan, Igor; Kovalenko, Andriy

2013-12-01

262

We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. 
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for “flip-flop” conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition.

Omelyan, Igor, E-mail: omelyan@ualberta.ca, E-mail: omelyan@icmp.lviv.ua [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta T6G 2M9 (Canada); Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8 (Canada); Institute for Condensed Matter Physics, National Academy of Sciences of Ukraine, 1 Svientsitskii Street, Lviv 79011 (Ukraine)]; Kovalenko, Andriy, E-mail: andriy.kovalenko@nrc-cnrc.gc.ca [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta T6G 2M9 (Canada); Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8 (Canada)]

2013-12-28

263

NASA Technical Reports Server (NTRS)

Relationships between observers, Kalman filters and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of the discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on the plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimate of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.
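The steady-state (constant-gain) Kalman filter mentioned above can be obtained by iterating the discrete Riccati recursion to a fixed point. The plant, noise covariances, and dimensions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Iterate the discrete Riccati recursion until the gain stops changing.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discrete plant dynamics
C = np.array([[1.0, 0.0]])          # measurement matrix
Q = 0.01 * np.eye(2)                # process noise covariance
R = np.array([[0.1]])               # measurement noise covariance

P = np.eye(2)
for _ in range(1000):               # Riccati iteration to steady state
    P_pred = A @ P @ A.T + Q        # time update
    S = C @ P_pred @ C.T + R        # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    P = (np.eye(2) - K @ C) @ P_pred      # measurement update

# At the fixed point, one more recursion step leaves the gain unchanged.
P_pred = A @ P @ A.T + Q
K_next = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
converged = bool(np.allclose(K, K_next, atol=1e-9))
```

The limiting K is the constant gain that a steady-state dynamic compensator of the kind discussed above would embed.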

Broussard, John R.

1987-01-01

264

In this paper, the long-term isolation characteristics of two typical filter-cake systems in a gas or water environment are investigated. The test models were designed to measure the sealing capability of a premium cement and filter-cake system used to prevent hydraulic communication at a permeable-nonpermeable boundary. The test models represented the area of a sandstone/shale layer in an actual well. In a real well, sandstone is a water- or gas-bearing formation, and sealing the annulus at the shale formation would prevent hydraulic communication to an upper productive zone. To simulate these conditions, the test models remained in a gas or water environment at either 80 or 150 F for periods of 3, 4, 30, and 90 days before the hydraulic isolation measurements were conducted. Models without filter cake, consisting of 100% cement, were tested for zonal isolation with the filter-cake models to provide reference points. These results show how critical filter-cake removal is to the long-term sealing of the cemented annulus. Results indicate that complete removal of the filter cake provides the greatest resistance to fluid communication in most of the cases studied.

Griffith, J.E.; Osisanya, S.

1995-12-31

265

NASA Astrophysics Data System (ADS)

A high-efficiency CdS/CdTe solar cell with a step-doped absorber layer, an optimized back surface field layer, and a long carrier lifetime in the absorption layer was designed. First, the CdS/CdTe reference cell was simulated and compared with previous experimental data. In order to obtain the highest efficiency, the thickness and step doping of the absorber and back surface field layers were optimized. In addition, the effect of carrier lifetime variation in the CdTe layer on the conversion efficiency of the CdTe cell was investigated. Compared with the reference cell, the efficiency enhancement of the proposed structure was 4.44%. Under global AM 1.5 conditions, the optimized cell structure had an open-circuit voltage of 0.987 V, a short-circuit current density of 27.9 mA/cm^2 and a fill factor of 82.4%, corresponding to a total-area conversion efficiency of 22.76%.

Khosroabadi, S.; Keshmiri, S. H.; Marjani, S.

2014-12-01

266

ERIC Educational Resources Information Center

The Optimal Aging Program (OAP) at the University of Arizona, College of Medicine is a longitudinal mentoring program that pairs students with older adults who are considered to be aging "successfully." This credit-bearing elective was initially established in 2001 through a grant from the John A. Hartford Foundation, and aims to expand the…

Sikora, Stephanie

2006-01-01

267

The deep drawing process consists of producing parts with complex shapes, such as different kinds of boxes, cups, etc., from a metal sheet. These parts are obtained through one or several stamping steps. The tool setup motion of the stamping process is difficult to obtain. It requires practice and special knowledge of the process. The design is long and difficult to

Yann Ledoux; Eric Pairel; Robert Arrieux

268

NASA Technical Reports Server (NTRS)

A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant gain Kalman filter to process guidance information from the microwave landing system, and acceleration from body mounted accelerometer data. The filter outputs navigation data and wind velocity estimates which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.

Halyo, N.

1976-01-01

269

Modelling of diffraction grating based optical filters for fluorescence detection of biomolecules

The detection of biomolecules based on fluorescence measurements is a powerful diagnostic tool for the acquisition of genetic, proteomic and cellular information. One key performance limiting factor remains the integrated optical filter, which is designed to reject strong excitation light while transmitting weak emission (fluorescent) light to the photodetector. Conventional filters have several disadvantages. For instance, absorbing filters, like those made from amorphous silicon carbide, exhibit low rejection ratios, especially in the case of small Stokes' shift fluorophores (e.g. green fluorescent protein GFP with λexc = 480 nm and λem = 510 nm), whereas interference filters comprising many layers require complex fabrication. This paper describes an alternative solution based on dielectric diffraction gratings. These filters are not only highly efficient but require a smaller number of manufacturing steps. Using FEM-based optical modelling as a design optimization tool, three filtering concepts are explored: (i) a diffraction grating fabricated on the surface of an absorbing filter, (ii) a diffraction grating embedded in a host material with a low refractive index, and (iii) a combination of an embedded grating and an absorbing filter. Both concepts involving an embedded grating show high rejection ratios (over 100,000) for the case of GFP, but also high sensitivity to manufacturing errors and variations in the incident angle of the excitation light. Despite this, simulations show that a 60 times improvement in the rejection ratio relative to a conventional flat absorbing filter can be obtained using an optimized embedded diffraction grating fabricated on top of an absorbing filter. PMID:25071964

Kovačič, M.; Krč, J.; Lipovšek, B.; Topič, M.

2014-01-01

270

Finite element analysis and optimal design of the mudsill and bracket of large-scale bag filter

By using the finite element analysis software ANSYS, a model of the mudsill and bracket of a large-scale bag filter was established, and the stress distribution and deformation of the structure were analyzed. Then, under the prerequisite that the mudsill and bracket structure possesses sufficient strength and stiffness, and taking the mass of the structure to be the lightest

Wang Shijie; Zhang Zhichen; Liao Shanmei; Zhang Lei; Lv Guosheng; Liu Linzhi

2010-01-01

271

This paper describes the optimization of a load-bearing thermal insulation system characterized by hot and cold surfaces with a series of heat intercepts and insulators between them. The optimization problem is represented as a mixed variable programming (MVP) problem with nonlinear constraints, in which the objective is to minimize the power required to maintain the heat intercepts at fixed temperatures

Mark A. Abramson; Wright-Patterson AFB

2004-01-01

272

High accuracy motor controller for positioning optical filters in the CLAES Spectrometer

NASA Technical Reports Server (NTRS)

The Etalon Drive Motor (EDM), a precision etalon control system designed for accurate positioning of etalon filters in the IR spectrometer of the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment is described. The EDM includes a brushless dc torque motor, which has an infinite resolution for setting an etalon filter to any desired angle, a four-filter etalon wheel, and an electromechanical resolver for angle information. An 18-bit control loop provides high accuracy, resolution, and stability. Dynamic computer interaction allows the user to optimize the step response. A block diagram of the motor controller is presented along with a schematic of the digital/analog converter circuit.

Thatcher, John B.

1989-01-01

273

We found that with an increase of the potential barrier applied to metallic graphene ribbons, the Klein tunneling current decreases until it is totally destroyed and the pseudo-spin polarization increases until it reaches its maximum value when the current is zero. This inverse relation disfavors the generation of polarized currents in a sub-lattice. In this work we discuss the pseudo-spin control (polarization and inversion) of the Klein tunneling currents, as well as the optimization of these polarized currents in a sub-lattice, using potential barriers in metallic graphene ribbons. Using density of states maps, conductance results, and pseudo-spin polarization information (all of them as a function of the energy V and width of the barrier L), we found (V, L) intervals in which the polarized currents in a given sub-lattice are maximized. We also built parallel and series configurations with these barriers in order to further optimize the polarized currents. A systematic study of these maps and barrier configurations shows that the parallel configurations are good candidates for optimization of the polarized tunneling currents through the sub-lattice. Furthermore, we discuss the possibility of using an electrostatic potential as (i) a pseudo-spin filter or (ii) a pseudo-spin inversion manipulator, i.e. a possible latticetronic of electronic currents through metallic graphene ribbons. The results of this work can be extended to graphene nanostructures. PMID:24441476

López, Luis I A; Yaro, Simeón Moisés; Champi, A; Ujevic, Sebastian; Mendoza, Michel

2014-02-12

274

The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.

Wood, Claire [CTSI; Bremner, Brenda [CTSI

2013-08-09

275

Interpretation of 1H NMR spectra of organic compounds is sometimes hampered by the presence of strong peaks arising from residual nondeuterated solvent and water that obscure compound signals. Classical solvent suppression techniques such as presaturation or those based on pulsed field gradients are not effective in this regard because they also remove the compound resonances that overlap with the solvent signal being suppressed. Here, we propose an alternative scheme by using an optimized NMR diffusion filter that eliminates the nondesired peaks while retaining the signals of interest. This strategy has proved to be useful in three common deuterated solvents, namely, CDCl3, DMSO-d6, and CD3OD, resulting in clean spectra with no interference from solvent or water peaks. PMID:16709049

Esturau, Nuria; Espinosa, Juan F

2006-05-26

276

NASA Astrophysics Data System (ADS)

The existence of insoluble residues as intermediate products produced during the wet etching process is the main quality-reducing and structure-patterning issue for lead zirconate titanate (PZT) thin films. A one-step wet etching process using solutions of buffered HF (BHF) and HNO3 acid was developed for patterning PZT thin films for microelectromechanical system (MEMS) applications. PZT thin films with 1 µm thickness were prepared on a Pt/Ti/SiO2/Si substrate by the sol-gel process for compatibility with Si micromachining. Various compositions of the etchant were investigated and the patterns were examined to optimize the etching process. The optimal result is demonstrated by a high etch rate (3.3 µm min-1) and low undercutting (1.1:1). The patterned PZT thin film exhibits a remnant polarization of 24 µC cm-2, a coercive field of 53 kV cm-1, a leakage current density of 4.7 × 10-8 A cm-2 at 320 kV cm-1 and a dielectric constant of 1100 at 1 kHz.

Che, L.; Halvorsen, E.; Chen, X.

2011-10-01

277

and efficient when solving some small- and medium-size problems from the CUTE collection. Key words: sequential quadratic programming, penalty function, SQP methods, interior-point methods, optimization. In each iteration, the linearized constraints of the quadratic programming are relaxed to satisfy

Yuan, Ya-xiang

278

A generalized adaptive mathematical morphological filter for LIDAR data

NASA Astrophysics Data System (ADS)

Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset, using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
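The baseline PM filtering loop described above can be sketched on a gridded elevation model. The cluster-analysis and adaptive trend extensions of the GAPM filter are not reproduced; all window sizes and thresholds here are assumed illustrative values.

```python
import numpy as np
from scipy.ndimage import grey_opening

# Each pass opens the surface with a growing window and flags cells whose
# elevation drops by more than a size-dependent threshold as non-ground.
def pm_filter(z, cell=1.0, slope=0.3, windows=(3, 5, 9), dh0=0.2, dh_max=2.5):
    ground = np.ones(z.shape, dtype=bool)
    surface = z.astype(float).copy()
    for w in windows:
        opened = grey_opening(surface, size=(w, w))
        dh = min(dh0 + slope * (w // 2) * cell, dh_max)  # grows with window
        ground &= (surface - opened) <= dh
        surface = opened
    return ground

# Toy scene: a gently tilted ground plane with a 2 m "building" block.
x, _ = np.meshgrid(np.arange(32), np.arange(32))
z = 0.05 * x
z[10:14, 10:14] += 2.0
mask = pm_filter(z)   # True = ground, False = non-ground
```

The "cut-off" problem the paper addresses appears when the fixed `slope` threshold is too small for steep natural terrain, so real ground is flagged as an object.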

Cui, Zheng

279

Factoring wavelet transforms into lifting steps

This article is essentially tutorial in nature. We show how any discrete wavelet transform or two-band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but which are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or
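The simplest instance of the lifting steps described above is the Haar transform written as split/predict/update steps, each of which is trivially invertible; this toy sketch is only an illustration of the scheme, not the article's general factorization.

```python
import numpy as np

# Haar wavelet transform via lifting: split, predict, update.
def haar_lift(x):
    s, d = x[0::2].astype(float), x[1::2].astype(float)  # split even/odd
    d = d - s          # predict odd samples from even neighbors
    s = s + d / 2      # update evens so s carries the pairwise averages
    return s, d

def haar_unlift(s, d):
    s = s - d / 2      # undo update
    d = d + s          # undo predict
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = s, d
    return x

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
s, d = haar_lift(x)           # s: pairwise averages, d: pairwise differences
x_rec = haar_unlift(s, d)     # perfect reconstruction by construction
```

Because each lifting step is undone by negating it, perfect reconstruction holds regardless of the predict/update operators chosen, which is what makes the factorization attractive.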

Ingrid Daubechies; Wim Sweldens

1998-01-01

280

In this article, a new spatial filtering approach, called discriminant common spatial patterns (dCSP), is proposed for single-trial EEG classification. Unlike the conventional common spatial patterns (CSP) that is substantially a subspace decomposition technique, dCSP is intently designed for discriminant purpose. The basic idea of dCSP is to construct a Fisher-like criterion that extracts both between-class and within-class discriminant information. The classical CSP only considers separating class means, i.e., between-class scatter, as well as possible. In contrast, dCSP aims to maximize between-class scatter and meanwhile minimize within-class scatter. Computationally, dCSP is formulated as a generalized eigenvalue problem. Experiments on real EEG classification show the effectiveness of the proposed method. PMID:21437733
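The classical CSP baseline that dCSP builds on can be sketched as a generalized eigenvalue problem; the Fisher-like dCSP criterion itself is not reproduced here, and the toy EEG data are synthetic assumptions.

```python
import numpy as np
from scipy.linalg import eigh

# CSP: spatial filters are generalized eigenvectors of the two class
# covariance matrices.
rng = np.random.default_rng(1)

def class_cov(trials):
    """Average trace-normalized spatial covariance (trials: channels x samples)."""
    return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)

# Two-class toy data: class 2 carries extra variance on the last channel.
trials1 = [rng.normal(size=(4, 200)) for _ in range(30)]
trials2 = [np.vstack([rng.normal(size=(3, 200)),
                      3.0 * rng.normal(size=(1, 200))]) for _ in range(30)]
C1, C2 = class_cov(trials1), class_cov(trials2)

# Solve C1 w = lambda (C1 + C2) w; eigenvectors at the extreme eigenvalues
# give maximal output variance for one class and minimal for the other.
vals, vecs = eigh(C1, C1 + C2)
w = vecs[:, 0]     # smallest lambda: output variance dominated by class 2
```

As the abstract notes, this criterion only separates class means of variance; dCSP additionally penalizes within-class scatter before solving the analogous generalized eigenvalue problem.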

Wang, Haixian

2011-09-01

281

NASA Astrophysics Data System (ADS)

Many spectral signature detection algorithms depend on numerically inverting covariance matrices. Hyperspectral data rarely span the full band space because of factors such as sensor noise, numerical round-off, sparse sampling, and band correlation inherent in the data or introduced by data processing. Processing the full order of the covariance matrix without regard to its useful rank leads to reduced detection performance. It was previously shown that the performance of inverse-covariance based detection algorithms can be improved by regularizing the covariance matrix inversion through extension of an optimally chosen eigenvalue. The extension method provides a robust way to optimize signal to clutter ratio (SCR) on data collected with a detector of uniform gain. The method of trusted eigenvalue extension has now been applied to data collected with a sensor with multiple gain regions. Multiple gain regions are used on wide spectral range sensors such as HYDICE and complicate the inversion of the covariance matrix over the full range of spectral bands. Further optimization of the trusted eigenvalue is presented and compared against traditional regularization methods. Since the extension method is particularly intended for sparsely sampled data with high dimensionality, a comparison is presented between the extension method and band coaddition.
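The eigenvalue-extension regularization described above can be sketched as follows. How to choose the trusted index optimally is the subject of the paper; here it is simply fixed, and the data are synthetic.

```python
import numpy as np

# Regularize a rank-deficient sample covariance before inversion: eigenvalues
# below a chosen "trusted" eigenvalue are raised (extended) to it, so the
# inverse no longer blows up in the noise subspace.
rng = np.random.default_rng(2)
n_bands, n_pix = 20, 12                  # fewer samples than bands: singular S
X = rng.normal(size=(n_pix, n_bands))
S = X.T @ X / n_pix                      # sample covariance, rank <= 12

def extended_inverse(S, trusted_index):
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    floor = vals[trusted_index]                  # trusted eigenvalue
    vals_ext = np.maximum(vals, floor)           # extend the tail up to it
    return (vecs / vals_ext) @ vecs.T            # V diag(1/vals_ext) V^T

S_inv = extended_inverse(S, trusted_index=9)
```

An inverse-covariance detector (e.g. a matched filter) built with `S_inv` stays well conditioned even though `S` itself is singular.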

Twede, David R.; Hayden, Andreas F.

2004-01-01

282

NASA Astrophysics Data System (ADS)

Many spectral signature detection algorithms depend on numerically inverting covariance matrices. Hyperspectral data rarely span the full band space because of factors such as sensor noise, numerical round-off, sparse sampling, and band correlation inherent in the data or introduced by data processing. Processing the full order of the covariance matrix without regard to its useful rank leads to reduced detection performance. It was previously shown that the performance of inverse-covariance based detection algorithms can be improved by regularizing the covariance matrix inversion through extension of an optimally chosen eigenvalue. The extension method provides a robust way to optimize signal to clutter ratio (SCR) on data collected with a detector of uniform gain. The method of trusted eigenvalue extension has now been applied to data collected with a sensor with multiple gain regions. Multiple gain regions are used on wide spectral range sensors such as HYDICE and complicate the inversion of the covariance matrix over the full range of spectral bands. Further optimization of the trusted eigenvalue is presented and compared against traditional regularization methods. Since the extension method is particularly intended for sparsely sampled data with high dimensionality, a comparison is presented between the extension method and band coaddition.

Twede, David R.; Hayden, Andreas F.

2003-12-01

283

Load Balancing of Parallelized Information Filters

Neil C. Rowe, Member, IEEE Computer Society. An analytic model is developed for the costs and advantages of load rebalancing in parallel information filtering. Keywords: data parallelism, load balancing, information retrieval, conjunctions, optimality, Monte Carlo

Rowe, Neil C.

284

An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

Bergman, W.

1985-01-09

285

NASA Astrophysics Data System (ADS)

A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m2 g-1), micropore volumes (up to 0.78 cm3 g-1), and narrow micropore size distributions (0.7-1.1 nm). The CO2 uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO2 capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO2 capture performance and can efficiently adsorb CO2 molecules at 2.86 mmol g-1 and 4.92 mmol g-1 at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO2/N2 adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO2/N2 adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO2 adsorption in practical applications. Electronic supplementary information (ESI) available: Fig. S1-13 and Table S1. See DOI: 10.1039/c3nr05825e

Wang, Jiacheng; Liu, Qian

2014-03-01

286

Aronia melanocarpa by-product from a filter-tea factory was used for the preparation of extracts with a high content of bioactive compounds. The extraction process was accelerated using sonication. A three-level, three-variable face-centered cubic experimental design (FCD) with response surface methodology (RSM) was used for optimization of extraction in terms of maximized yields for total phenolics (TP), flavonoids (TF), anthocyanins (MA) and proanthocyanidins (TPA) contents. Ultrasonic power (X1: 72-216 W), temperature (X2: 30-70 °C) and extraction time (X3: 30-90 min) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions for the investigated responses. Three-dimensional surface plots were generated from the mathematical models. The optimal conditions for ultrasound-assisted extraction of TP, TF, MA and TPA were: X1=206.64 W, X2=70 °C, X3=80.1 min; X1=210.24 W, X2=70 °C, X3=75 min; X1=216 W, X2=70 °C, X3=45.6 min; and X1=199.44 W, X2=70 °C, X3=89.7 min, respectively. The generated models predicted values of TP, TF, MA and TPA of 15.41 mg GAE/ml, 9.86 mg CE/ml, 2.26 mg C3G/ml and 20.67 mg CE/ml, respectively. Experimental validation was performed and close agreement between experimental and predicted values was found (within the 95% confidence interval). PMID:25454824
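The RSM step above (fit a second-order polynomial to design-point responses, then read off the optimum) can be sketched in two factors. The design and the hidden "true" surface are synthetic assumptions; the paper uses three factors and measured yields.

```python
import numpy as np

# Face-centered design points in 2 coded factors (levels -1, 0, +1).
pts = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], float)

def design_matrix(p):
    x1, x2 = p[:, 0], p[:, 1]
    # second-order model: 1, x1, x2, x1*x2, x1^2, x2^2
    return np.column_stack([np.ones(len(p)), x1, x2, x1 * x2, x1**2, x2**2])

def true_response(p):          # hidden surface with maximum at (0.4, -0.2)
    x1, x2 = p[:, 0], p[:, 1]
    return 10 - (x1 - 0.4) ** 2 - 2 * (x2 + 0.2) ** 2

# Fit the quadratic model by least squares (multiple regression step).
beta, *_ = np.linalg.lstsq(design_matrix(pts), true_response(pts), rcond=None)

# Stationary point: set the gradient of the fitted quadratic to zero,
# i.e. solve [[2*b11, b12], [b12, 2*b22]] x = -[b1, b2].
b1, b2, b12, b11, b22 = beta[1], beta[2], beta[3], beta[4], beta[5]
x_opt = np.linalg.solve(np.array([[2 * b11, b12], [b12, 2 * b22]]),
                        -np.array([b1, b2]))
```

With noisy measured responses, the same fit yields an approximate optimum, and ANOVA on the residuals judges model adequacy, as the abstract describes.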

Ramić, Milica; Vidović, Senka; Zeković, Zoran; Vladić, Jelena; Cvejin, Aleksandra; Pavlić, Branimir

2015-03-01

287

A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m(2) g(-1)), micropore volumes (up to 0.78 cm(3) g(-1)), and narrow micropore size distributions (0.7-1.1 nm). The CO2 uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO2 capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO2 capture performance and can efficiently adsorb CO2 molecules at 2.86 mmol g(-1) and 4.92 mmol g(-1) at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO2/N2 adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO2/N2 adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO2 adsorption in practical applications. PMID:24603950

Wang, Jiacheng; Liu, Qian

2014-04-21

288

In this study, the effect of three operating parameters, i.e., the first/second volumetric feeding ratio (milliliters/milliliters), the first anaerobic/aerobic (an/oxic) time ratio (minute/minute), and the second an/oxic time ratio (minute/minute), on the performance of a two-step fed sequencing batch reactor (SBR) system to treat swine wastewater for nutrients removal was examined. Central Composite Design, coupled with Response Surface Methodology, was employed to test these parameters at five levels in order to optimize the SBR to achieve the best removal efficiencies for six response variables including total nitrogen (TN), ammonium nitrogen (NH4-N), total phosphorus (TP), dissolved phosphorus (DP), chemical oxygen demand (COD), and biochemical oxygen demand (BOD). The results showed that the three parameters investigated had significant impact on all the response variables (TN, NH4-N, TP, DP, COD, and BOD), although the highest removal efficiency for each individual responses was associated with different combination of the three parameters. The maximum TN, NH4-N, TP, DP, COD, and BOD removal efficiencies of 96.38 %, 95.38 %, 93.62 %, 94.3 %, 95.26 %, and 92.84 % were obtained at the optimal first/second volumetric feeding ratio, first an/oxic time ratio, and second an/oxic time ratio of 3.23, 0.4, and 0.8 for TN; 2.64, 0.72, and 0.76 for NH4-N; 3.08, 1.16, and 1.07 for TP; 1.32, 0.81, and 1.0 for DP; 2.57, 0.96, and 1.12 for COD; and 1.62, 0.64, and 1.61 for BOD, respectively. Good linear relationships between the predicted and observed results for all the response variables were observed. PMID:25564205

Wu, Xiao; Zhu, Jun; Cheng, Jiehong; Zhu, Nanwen

2015-03-01

289

Biodegradation of alpha-pinene was investigated in a biological thermophilic trickling filter, using a lava rock and polymer beads mixture as packing material. The partition coefficient (PC) between alpha-pinene and the polymeric material (Hytrel G3548 L) was measured at 50 degrees C. PCs of 57 and 846 were obtained between the polymer and either the water or the gas phase, respectively. Biotrickling filter (BTF) experiments were conducted under continuous load feeding. The effect of yeast extract (YE) addition in the recirculating nutrient medium was evaluated. There was a positive relationship between alpha-pinene biodegradation, CO2 production and YE addition. A maximum elimination capacity (ECmax) of 98.9 g m(-3) h(-1) was obtained for an alpha-pinene loading rate of about 121 g m(-3) h(-1) in the presence of 1 g L(-1) YE. The ECmax was reduced by half in the absence of YE. It was also found that a decrease in the liquid flow rate enhances alpha-pinene biodegradation by increasing the ECmax up to 103 g m(-3) h(-1) with a removal efficiency close to 90%. The impact of short-term shock-loads (6 h) was tested under different process conditions. Increasing the pollutant load either 10- or 20-fold resulted in a sudden drop in the BTF's removal capacity, although this effect was attenuated in the presence of YE. PMID:25145201
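The loading rate, elimination capacity and removal efficiency quoted above are related by standard biofilter definitions; a minimal sketch, using illustrative numbers chosen to reproduce the reported operating point (they are not measurements from the paper):

```python
# Conventional biofilter performance metrics (standard definitions,
# not taken from the paper):
#   loading rate          LR = Q * C_in / V              [g m^-3 h^-1]
#   elimination capacity  EC = Q * (C_in - C_out) / V    [g m^-3 h^-1]
#   removal efficiency    RE = 100 * (C_in - C_out) / C_in   [%]

def biofilter_metrics(q_m3_h, c_in_g_m3, c_out_g_m3, v_m3):
    lr = q_m3_h * c_in_g_m3 / v_m3
    ec = q_m3_h * (c_in_g_m3 - c_out_g_m3) / v_m3
    re = 100.0 * (c_in_g_m3 - c_out_g_m3) / c_in_g_m3
    return lr, ec, re

# Hypothetical flow/concentration values that land near the reported
# point (LR ~ 121, EC ~ 98.9 g m^-3 h^-1, i.e. RE ~ 82%).
lr, ec, re = biofilter_metrics(q_m3_h=12.1, c_in_g_m3=10.0,
                               c_out_g_m3=1.83, v_m3=1.0)
print(f"LR={lr:.1f} g/m3/h, EC={ec:.1f} g/m3/h, RE={re:.1f}%")
```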

Montes, M; Veiga, M C; Kennes, C

2014-01-01

290

NSDL National Science Digital Library

The first site related to water filtration is from the US Environmental Protection Agency entitled EPA Environmental Education: Water Filtration (1). The two-page document explains the need for water filtration and the steps water treatment plants take to purify water. To further understand the process, a demonstration project is provided that illustrates these purification steps, which include coagulation, sedimentation, filtration, and disinfection. The second site is an interesting Flash animation called Filtration: How Does it Work (2) provided by Canada's Prairie Farm Rehabilitation Administration. Visitors will learn about various types of filtration procedures and systems and the materials that are used such as carbon and sand. Next, from the National Science Foundation is a learning activity called Get Out the Gunk (3). Using just a few simple items from around the house, kids will be able to answer questions like "Does a filter work better with a lot of water rushing through, or a small trickle?" and "Does it make the water cleaner if you pour it through a filter twice?" The fourth Web site, Rapid Sand Filtration (4), is provided by Dottie Schmitt and Christie Shinault of Virginia Tech. The authors describe the process, which involves the flow of water through a bed of granular media, normally following settling basins in conventional water treatment trains to remove any particulate matter left over after flocculation and settling. Along with its thorough description, readers can view illustrations and photographs that further explain the process. The Vegetative Buffer Strips for Improved Surface Water Quality (5) Web site is provided by the Iowa State University Extension office. The document explains what vegetative buffer strips are, how they filter contaminants and sediment from surface water, how effective they are, and more. The sixth offering is a file called Infiltration Basins and Trenches (6) that is offered by the University of Wisconsin Extension. 
These structures are intended to collect water, have it infiltrate into the ground, and have it purified along the way. This document explains how effective they are at removing pollutants, how to install them, design guidelines, maintenance, and more. Next, from a site called Wilderness Survival.net is the Water Filtration Devices (7) page. Visitors read how to make a filtering system out of cloth, sand, crushed rock, charcoal, or a hollow log, although as is stated, the water still has to be purified. The last site, from the US Geological Survey, is called A Visit to a Wastewater-Treatment Plant: Primary Treatment of Wastewater (8). Although geared towards children, the site does a good job of explaining what happens at each stage of the treatment process and how pollutants are removed to help keep water clean. Everything from screening, pumping, aerating, sludge and scum removal, killing bacteria, and what is done with wastewater residuals is covered.

Brieske, Joel A.

2003-01-01

291

NASA Astrophysics Data System (ADS)

Measurement of pressure changes in monitoring wells located in a formation overlying an injection formation can provide an early warning for CO2 or brine leakage. If this strategy is to be part of an overall monitoring framework, then questions about how many monitoring wells are needed to detect a leakage event, and where these wells should be placed, need to be addressed. In this study we present a methodology that uses a combination of a Kalman filter, a physically-based analytical model that solves for pressure propagation across old/abandoned leaky wells in a multi-formation system, and a multi-objective genetic algorithm, to answer the questions of how many wells should be used and where they should be placed. The Kalman filter is used to explore the covariance reduction based on possible well positions. The physically-based model is used to simulate, in a Monte Carlo scheme, a wide range of possible leakage scenarios where the random variable is the permeability of the old/abandoned leaky wells. The multi-objective genetic algorithm employed in this work is the Non-dominated Sorting Genetic Algorithm (NSGA-II), which is used to optimize three objectives: (i) the reduction of the total variance of the pressure field, (ii) the reduction of the number of wells used to detect a leakage event, and (iii) the reduction of the detection of leakage events which are not "harmful". In this work a "harmful" leakage event refers to an event in which the pressure change in the monitoring formation is large enough to induce leakage into the deepest potable water formation. The methodology is applied to a synthetic case study, which serves to prove the applicability of the methods and to gather insights on the strengths and weaknesses of using pressure monitoring wells to detect a CO2 leakage event.
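The covariance-reduction objective can be illustrated with the simplest possible case: a scalar Kalman measurement update, where each candidate monitoring well shrinks the prior pressure variance by an amount that depends on its measurement noise. The numbers are illustrative only, not values from the study.

```python
# Minimal sketch of the covariance-reduction idea behind the
# well-placement objective: a scalar Kalman measurement update shrinks
# the pressure variance, and candidate wells can be ranked by how much
# variance they remove. Illustrative numbers only.

def kalman_update_variance(p_prior, r_meas):
    """Posterior variance of a scalar state after one measurement
    with noise variance r_meas (observation matrix H = 1)."""
    k = p_prior / (p_prior + r_meas)   # Kalman gain
    return (1.0 - k) * p_prior         # posterior variance

p0 = 4.0                               # prior pressure variance (bar^2)
for r in (0.5, 1.0, 2.0):              # candidate sensors, least to most noisy
    print(f"R={r}: posterior variance {kalman_update_variance(p0, r):.3f}")
```

A quieter sensor (smaller R) always leaves less posterior variance, which is why the filter can score candidate well positions before any data are collected.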

Nogues, J. P.; Nordbotten, J. M.; Celia, M. A.

2012-12-01

292

Medical devices (MDs) for infusion and enteral and parenteral nutrition are essentially made of plasticized polyvinyl chloride (PVC). The first step in assessing patient exposure to these plasticizers, as well as ensuring that the MDs are free from di(2-ethylhexyl) phthalate (DEHP), consists of identifying and quantifying the plasticizers present and, consequently, determining which ones are likely to migrate into the patient's body. We compared three different extraction methods using 0.1 g of plasticized PVC: Soxhlet extraction in diethyl ether and ethyl acetate, polymer dissolution, and room temperature extraction in different solvents. It was found that simple room temperature chloroform extraction under optimized conditions (30 min, 50 mL) gave the best separation of plasticizers from the PVC matrix, with extraction yields ranging from 92 to 100 % for all plasticizers. This result was confirmed by supplemented Fourier transform infrared spectroscopy-attenuated total reflection (FTIR-ATR) and gravimetric analyses. The technique was used on eight marketed medical devices and showed that they contained different amounts of plasticizers, ranging from 25 to 36 % of the PVC weight. These yields, associated with the individual physicochemical properties of each plasticizer, highlight the need for further migration studies. PMID:25577357

Bernard, Lise; Cueff, Régis; Bourdeaux, Daniel; Breysse, Colette; Sautou, Valérie

2015-02-01

293

Hot-gas filter manufacturing assessments: Volume 5. Final report, April 15, 1997

The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle-filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs), intermetallic alloys, and alternate filter geometries. The goal of this effort was to perform a cursory review of the manufacturing processes used by 5 companies developing advanced filters from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs. While each organization had specific needs, the needs common to all of the filter manufacturers were access to performance testing of the filters to aid process/product development, a better understanding of the stresses the filters will see in service for use in structural design of the components, and a strong process sensitivity study to allow optimization of processing.

Boss, D.E.

1997-12-31

294

Digital camera filter design for colorimetric and spectral accuracy

A filter optimization was investigated to design a set of filters for a five channel multi-spectral camera, three of which result in high colorimetric performance when used alone, and the full set having high quality spectral performance. Each candidate filter was selected from a set of 33 glass filters with three different thicknesses where filters may be combined in optical

Francisco H. Imai; Shuxue Quan; Mitchell R. Rosen; Roy S. Berns

2001-01-01

295

ERIC Educational Resources Information Center

Presents the 1978 literature review of wastewater treatment. The review is concerned with biological filters, and it covers: (1) trickling filters; (2) rotating biological contactors; and (3) miscellaneous reactors. A list of 14 references is also presented. (HM)

Klemetson, S. L.

1978-01-01

296

Nine digital filters for decimation and interpolation

Filtering is necessary in decimation (decreasing the sampling rate) or interpolation (increasing the sampling rate) of a digital signal. If the rate change is substantial, the process is more efficient when the decimation or interpolation occurs in stages rather than in one step. Half-band filters are particularly efficient for effecting octave changes in sampling rate and nine digital filters
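The efficiency of half-band filters comes from a structural property: every even-indexed tap except the centre one is zero, so half the multiplies vanish in a 2:1 rate change. A minimal sketch of 2x interpolation using one classic 7-tap half-band lowpass (a textbook kernel, not one of the paper's nine designs):

```python
# A classic 7-tap half-band lowpass: coefficients [-1, 0, 9, 16, 9, 0, -1]/32.
# Every even-offset tap except the centre is zero, which is what makes
# half-band filters cheap for octave (2:1) sampling-rate changes.

H = [-1/32, 0.0, 9/32, 16/32, 9/32, 0.0, -1/32]

def interpolate_2x(x):
    """Upsample by 2 (zero-stuffing) then filter with the half-band FIR."""
    up = []
    for s in x:
        up += [s, 0.0]                     # insert a zero after each sample
    y = []
    for n in range(len(up)):
        acc = 0.0
        for k, h in enumerate(H):
            i = n - k + 3                  # centre the 7-tap kernel at n
            if 0 <= i < len(up):
                acc += h * up[i]
        y.append(2.0 * acc)                # gain of 2 restores amplitude
    return y

y = interpolate_2x([0.0, 1.0, 1.0, 1.0, 0.0])
# Original samples pass through unchanged; new midpoints are interpolated.
```

Because the zero taps land exactly on the zero-stuffed input positions at alternate output phases, a polyphase implementation does roughly half the work of a general FIR of the same length.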

DAVID J. GOODMAN; MICHAEL J. CAREY

1977-01-01

297

NSDL National Science Digital Library

In this activity, students filter different substances through a plastic window screen, different sized hardware cloth and poultry netting. Their model shows how the thickness of a filter in the kidney is imperative in deciding what will be filtered out and what will stay within the blood stream.

2014-09-18

298

Method of statistical filtering

NASA Technical Reports Server (NTRS)

A minimal formula for bounding the cross-correlation between a random forcing function and the state error when this correlation is unknown is used in optimal linear filter theory applications. Use of the bound results in overestimation of the estimation-error covariance.

Battin, R. H.; Deckert, J. C.; Fraser, D. C.; Potter, J. E.

1970-01-01

299

Orbit determination via adaptive Gaussian swarm optimization

NASA Astrophysics Data System (ADS)

Accurate orbit determination (OD) is vital for every space mission. This paper proposes a novel heuristic filter based on adaptive sample-size Gaussian swarm optimization (AGSF). The proposed estimator considers the OD as a stochastic dynamic optimization problem that utilizes a swarm of particles in order to find the best estimate at every time step. One of the key contributions of this paper is the adaptation of the swarm size using a weighted variance approach. The proposed strategy is simulated for a low Earth orbit (LEO) OD problem utilizing geomagnetic field measurements at 700 km altitude. The performance of the proposed AGSF is verified using Monte Carlo simulation, whose results are compared with other advanced sample-based nonlinear filters. It is demonstrated that the adopted filter achieves about 2.5 km accuracy in position estimation, which fulfills the essential requirements of accuracy and convergence time for the OD problem.

Kiani, Maryam; Pourtakdoust, Seid H.

2015-02-01

300

A vertical vessel having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas.

Haldipur, Gaurang B. (Monroeville, PA); Dilmore, William J. (Murrysville, PA)

1992-01-01

301

A vertical vessel is described having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas. 18 figs.

Haldipur, G.B.; Dilmore, W.J.

1992-09-01

302

Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
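The weighted median that weighted order statistic models generalize has a very short reference definition: replicate each sample by its (integer) weight and take the median of the expanded list. A minimal sketch, not code from the paper:

```python
import statistics

# Reference weighted median: replicate each sample by its integer weight,
# then take the ordinary median of the expanded list.

def weighted_median(samples, weights):
    expanded = []
    for s, w in zip(samples, weights):
        expanded.extend([s] * w)           # w copies of sample s
    return statistics.median(expanded)

window = [3, 7, 2, 9, 4]
print(weighted_median(window, [1, 1, 1, 1, 1]))   # plain median
print(weighted_median(window, [1, 3, 1, 1, 1]))   # weight pulls result toward 7
```

With unit weights this reduces to the sample median; increasing one weight biases the output toward that sample, which is exactly the degree of freedom a stack filter exploits window-wide.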

Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]

2009-01-01

303

Filtering, stability, and robustness

NASA Astrophysics Data System (ADS)

The theory of nonlinear filtering concerns the optimal estimation of a Markov signal in noisy observations. Such estimates necessarily depend on the model that is chosen for the signal and observations processes. This thesis studies the sensitivity of the filter to the choice of underlying model over long periods of time, within the framework of continuous time filtering with white noise type observations. The first topic of this thesis is the asymptotic stability of the filter, which is studied using the theory of conditional diffusions. This leads to improvements on pathwise stability bounds, and to new insight into existing stability results in a fully probabilistic setting. Furthermore, I develop in detail the theory of conditional diffusions for finite-state Markov signals and clarify the duality between estimation and stochastic control in this context. The second topic of this thesis is the sensitivity of the nonlinear filter to the model parameters of the signal and observations processes. This section concentrates on the finite state case, where the corresponding model parameters are the jump rates of the signal, the observation function, and the initial measure. The main result is that the expected difference between the filters with the true and modified model parameters is bounded uniformly on the infinite time interval, provided that the signal process satisfies a mixing property. The proof uses properties of the stochastic flow generated by the filter on the simplex, as well as the Malliavin calculus and anticipative stochastic calculus. The third and final topic of this thesis is the asymptotic stability of quantum filters. I begin by developing quantum filtering theory using reference probability methods. The stability of the resulting filters is not easily studied using the preceding methods, as smoothing violates the nondemolition requirement. Fortunately, progress can be made by randomizing the initial state of the filter. 
Using this technique, I prove that the filtered estimate of the measurement observable is stable regardless of the underlying model, provided that the initial states are absolutely continuous in a suitable sense.

van Handel, Ramon

304

Quadratic Gabor filters for object detection.

We present a new class of quadratic filters that are capable of creating spherical, elliptical, hyperbolic and linear decision surfaces which result in better detection and classification capabilities than the linear decision surfaces obtained from correlation filters. Each filter comprises a number of separately designed linear basis filters. These filters are linearly combined into several macro filters; the outputs from these macro filters are passed through a magnitude-square operation and are then linearly combined using real weights to achieve the quadratic decision surface. For detection, the creation of macro filters (linear combinations of multiple single filters) allows for a substantial computational saving by reducing the number of correlation operations required. In this work, we consider the use of Gabor basis filters; the Gabor filter parameters are separately optimized. The fusion parameters to combine the Gabor filter outputs are optimized using an extended piecewise quadratic neural network (E-PQNN). We demonstrate methods for selecting the number of macro Gabor filters, the filter parameters and the linear and nonlinear combination coefficients. We present preliminary results obtained for an infrared (IR) vehicle detection problem. PMID:18249613
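The filter structure described (linear macro filters, a magnitude-square stage, then a real-weighted combination) can be sketched with toy basis vectors in place of the optimized Gabor filters; everything below is an illustrative stand-in, not the authors' design:

```python
# Structure of the quadratic detector: sum_i w_i * (h_i . x)^2.
# Toy macro filters stand in for the optimized Gabor combinations.

def quadratic_filter_output(x, macro_filters, weights):
    """Quadratic decision statistic from linear filters + squaring."""
    out = 0.0
    for h, w in zip(macro_filters, weights):
        lin = sum(hi * xi for hi, xi in zip(h, x))   # correlation output
        out += w * lin * lin                         # magnitude-square stage
    return out

macro = [[1.0, -1.0, 0.0], [0.5, 0.5, 0.5]]          # hypothetical macro filters
w = [1.0, -0.5]                                      # real combination weights
score = quadratic_filter_output([2.0, 1.0, 0.0], macro, w)
```

Because the quadratic form is built from a few correlations followed by cheap scalar operations, the per-window cost is dominated by the number of macro filters, which is the computational saving the abstract refers to.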

Weber, D M; Casasent, D P

2001-01-01

305

A survey of convergence results on particle filtering methods for practitioners

Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large
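The predict/weight/resample cycle of the bootstrap particle filter the survey analyses can be shown on a toy scalar model; this is an illustrative instance of the scheme, not code from the survey, and the random-walk dynamics and noise levels are arbitrary choices:

```python
import math
import random

# Minimal bootstrap particle filter for a scalar random-walk state
# observed in Gaussian noise: predict, weight, resample at each step.

def particle_filter(observations, n=2000, q=0.5, r=1.0, seed=0):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]   # prior samples
    estimates = []
    for y in observations:
        # 1. predict: propagate particles through the random-walk dynamics
        particles = [p + rng.gauss(0.0, math.sqrt(q)) for p in particles]
        # 2. weight: Gaussian likelihood of the observation (constants cancel)
        weights = [math.exp(-0.5 * (y - p) ** 2 / r) for p in particles]
        total = sum(weights)
        estimates.append(sum(w * p for w, p in zip(weights, particles)) / total)
        # 3. resample: multinomial resampling restores equal weights
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

est = particle_filter([0.2, 0.4, 0.8, 1.1, 1.5])
```

The posterior mean estimate tracks the rising observation sequence; in practice one resamples only when the effective sample size drops, but the naive version above is the textbook starting point.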

Dan Crisan; Arnaud Doucet

2002-01-01

306

Recursive Implementations of the Consider Filter

NASA Technical Reports Server (NTRS)

One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
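A scalar sketch of the Schmidt-Kalman ("consider") measurement update may help: the measurement is y = x + b + v, where b is a bias parameter whose statistics are accounted for but which is deliberately never estimated. The equations follow the standard consider-filter form; this is illustrative and is not the UDU algorithm from the paper.

```python
# Scalar Schmidt-Kalman update for measurement y = x + b + v:
# the bias b is "considered" (its variance pbb inflates the innovation
# and keeps the state covariance honest) but never estimated.

def consider_update(x, pxx, pxb, pbb, r, y):
    s = pxx + 2.0 * pxb + pbb + r          # innovation variance
    k = (pxx + pxb) / s                    # gain applied to the state only
    x_new = x + k * (y - x)                # bias estimate stays at zero
    pxx_new = pxx - k * (pxx + pxb)        # state variance update
    pxb_new = pxb - k * (pxb + pbb)        # state/parameter cross-covariance
    return x_new, pxx_new, pxb_new         # pbb is never reduced

x, pxx, pxb = 0.0, 4.0, 0.0
x, pxx, pxb = consider_update(x, pxx, pxb, pbb=1.0, r=1.0, y=2.0)
```

Note the non-optimality the abstract mentions: the gain is smaller than a full Kalman gain would be, and the parameter variance pbb never shrinks, so the filter pays a variance penalty in exchange for robustness to the unestimated bias.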

Zanetti, Renato; DSouza, Chris

2012-01-01

307

Modeling Filter Bypass: Impact on Filter Efficiency

Current models and test methods for determining filter efficiency ignore filter bypass, the air that circumvents filter media because of gaps around the filter or filter housing. In this paper, we develop a general model to estimate the size-resolved particle removal efficiency, including bypass, of HVAC filters. The model applies the measured pressure drop of the filter to determine the
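The first-order accounting behind bypass is simple to state: if a fraction f of the airflow leaks around the media, the system-level efficiency is diluted to (1 - f) times the media efficiency. A minimal sketch with illustrative values (the paper's actual model derives the bypass fraction from the measured pressure drop, which is not reproduced here):

```python
# Dilution of filter efficiency by bypass: a fraction f of the air
# circumvents the media entirely and is not filtered at all.
#   eta_system = (1 - f) * eta_media

def system_efficiency(eta_media, bypass_fraction):
    return (1.0 - bypass_fraction) * eta_media

# A nominally 95%-efficient filter with 10% bypass removes only 85.5%.
print(system_efficiency(0.95, 0.10))
```

The penalty is proportionally worse for high-efficiency filters, which is why bypass matters more as media efficiency improves.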

Matthew Ward; Jeffrey Siegel

308

Aluminum hydroxide fibers approximately 2 nanometers in diameter and with surface areas ranging from 200 to 650 m(2)/g have been found to be highly electropositive. When dispersed in water they are able to attach to and retain electronegative particles. When combined into a composite filter with other fibers or particles they can filter bacteria and nano-size particulates such as

Frederick Tepper; Leonid Kaledin

2009-01-01

309

NSDL National Science Digital Library

In this engineering activity, challenge learners to invent a water filter that cleans dirty water. Learners construct a filter device out of a 2-liter bottle and then experiment with different materials like gravel, sand, and cotton balls to see which is the most effective.

Safety note: An adult's help is needed for this activity.

WGBH Boston

2002-01-01

310

New filter efficiency test for future nuclear grade HEPA filters

We have developed a new test procedure for evaluating filter penetrations as low as 10(-9) at 0.1-µm particle diameter. In comparison, the present US nuclear filter certification test has a lower penetration limit of 10(-5). Our new test procedure is unique not only in its much higher sensitivity, but also in avoiding the undesirable effect of clogging the filter. Our new test procedure consists of a two-step process: (1) We challenge the test filter with a very high concentration of heterodisperse aerosol for a short time while passing all or a significant portion of the filtered exhaust into an inflatable bag; (2) We then measure the aerosol concentration in the bag using a new laser particle counter sensitive to 0.07-µm diameter. The ratio of particle concentration in the bag to the concentration challenging the filter gives the filter penetration as a function of particle diameter. The bag functions as a particle accumulator for subsequent analysis to minimize the filter exposure time. We have studied the particle losses in the bag over time and find that they are negligible when the measurements are taken within one hour. We also compared filter penetration measurements taken in the conventional direct-sampling method with the indirect bag-sampling method and found excellent agreement. 6 refs., 18 figs., 1 tab.
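The penetration arithmetic in the procedure is a single ratio; a minimal sketch with illustrative concentrations chosen to land at the quoted 10^-9 sensitivity (they are not measured values from the report):

```python
# Penetration as defined in the test procedure: downstream (bag)
# concentration divided by the challenge concentration. Efficiency is
# its complement.

def penetration(c_bag, c_challenge):
    return c_bag / c_challenge

p = penetration(c_bag=0.002, c_challenge=2.0e6)   # particles per cm^3
efficiency = 1.0 - p
print(f"penetration = {p:.1e}")
```

The bag's role is to accumulate enough downstream particles that c_bag is measurable at all: at 10^-9 penetration a direct downstream sample would contain almost no countable particles in a short exposure.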

Bergman, W.; Foiles, L.; Mariner, C.; Kincy, M.

1988-08-17

311

Aquatic Plants Aid Sewage Filter

NASA Technical Reports Server (NTRS)

Method of wastewater treatment combines micro-organisms and aquatic plant roots in filter bed. Treatment occurs as liquid flows up through system. Micro-organisms, attached themselves to rocky base material of filter, act in several steps to decompose organic matter in wastewater. Vascular aquatic plants (typically, reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from water through finely divided roots.

Wolverton, B. C.

1985-01-01

312

Filtering in SPECT Image Reconstruction

Single photon emission computed tomography (SPECT) imaging is widely implemented in nuclear medicine as its clinical role in the diagnosis and management of several diseases is, many times, very helpful (e.g., myocardium perfusion imaging). The quality of SPECT images is degraded by several factors such as noise because of the limited number of counts, attenuation, or scatter of photons. Image filtering is necessary to compensate for these effects and, therefore, to improve image quality. The goal of filtering in tomographic images is to suppress statistical noise and simultaneously to preserve spatial resolution and contrast. The aim of this work is to describe the most widely used filters in SPECT applications and how these affect the image quality. The choice of the filter type, the cut-off frequency and the order is a major problem in clinical routine. In many clinical cases, information for specific parameters is not provided, and findings cannot be extrapolated to other similar SPECT imaging applications. A literature review for the determination of the most used filters in cardiac, brain, bone, liver, kidneys, and thyroid applications is also presented. As the overview shows, no filter is perfect, and the selection of the proper filters, most of the times, is done empirically. The standardization of image-processing results may limit the filter types for each SPECT examination to a certain few filters and some of their parameters. Standardization also helps in reducing image processing time, as the filters and their parameters must be standardized before being put to clinical use. Commercial reconstruction software selections lead to comparable results interdepartmentally. The manufacturers normally supply default filters/parameters, but these may not be relevant in various clinical situations. After proper standardization, it is possible to use many suitable filters or one optimal filter. PMID:21760768
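The cut-off frequency and order mentioned above are the two knobs of the low-pass Butterworth filter that is ubiquitous in SPECT reconstruction; one common form of its frequency response is B(f) = 1 / sqrt(1 + (f/fc)^(2n)). The cut-off and order below are typical illustrative values, not recommendations from the review:

```python
import math

# One common form of the low-pass Butterworth response used in SPECT:
#   B(f) = 1 / sqrt(1 + (f / fc)^(2n))
# fc sets where suppression begins; n sets how sharp the roll-off is.

def butterworth(f, cutoff, order):
    return 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))

fc, n = 0.4, 5                       # cycles/pixel, filter order (examples)
for f in (0.0, 0.2, 0.4, 0.6):
    print(f"{f:.1f} cycles/pixel -> gain {butterworth(f, fc, n):.3f}")
```

Low frequencies pass almost unchanged, the gain is 1/sqrt(2) at the cut-off, and high-frequency noise is strongly suppressed; raising the order steepens the transition without moving the cut-off, which is exactly the trade-off between noise suppression and resolution the review discusses.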

Lyra, Maria; Ploussi, Agapi

2011-01-01

313

Design of Weighted Order Statistic Filters Using the Perceptron Algorithm

Based on this observation, the perceptron algorithm is applied to design weighted order statistic (WOS) filters. It is shown, through experiments, that the perceptron algorithm can find optimal or near-optimal WOS filters in practical situations.

Lee, Yong Hoon

314

Initial Ares I Bending Filter Design

NASA Technical Reports Server (NTRS)

The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.

Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

2007-01-01

315

A Rank-Ordered Marginal Filter for Deinterlacing

This paper proposes a new interpolation filter for deinterlacing, which is achieved by enhancing the edge-preserving ability of the conventional edge-based line average methods. This filter consists of three steps: a pre-processing step, a fuzzy metric-based weight assignation step, and a rank-ordered marginal filter step. The proposed method is able to interpolate the missing lines without introducing annoying artifacts. Simulation results show that the images filtered with the proposed algorithm contain fewer annoying artifacts than those acquired by other methods. PMID:23459388
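The conventional edge-based line average (ELA) that the proposed filter builds on interpolates a missing pixel along whichever direction (left diagonal, vertical, right diagonal) shows the smallest luminance difference between the lines above and below. A minimal sketch of that baseline on one pixel, with hypothetical luminance values (this is the conventional method, not the paper's full three-step filter):

```python
# Edge-based line average (ELA) for deinterlacing: interpolate the
# missing pixel along the direction with the smallest luminance
# difference between the adjacent existing lines.

def ela_pixel(upper, lower, j):
    """Interpolate column j of a missing line from the lines above
    (`upper`) and below (`lower`). Assumes 1 <= j <= len(upper) - 2."""
    candidates = []
    for d in (-1, 0, 1):                   # left diagonal, vertical, right
        a, b = upper[j + d], lower[j - d]
        candidates.append((abs(a - b), (a + b) / 2.0))
    return min(candidates)[1]              # average along the best edge

upper = [10, 50, 200]                      # hypothetical luminance rows
lower = [195, 60, 12]
print(ela_pixel(upper, lower, 1))          # follows the diagonal edge
```

Here the left-diagonal pair (10, 12) matches far better than the vertical pair (50, 60), so ELA interpolates along the edge and returns 11.0 instead of the blurred vertical average 55.0, which is the edge-preservation property the abstract refers to.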

Jeon, Gwanggil; Anisetti, Marco; Kang, Seok Hoon

2013-01-01

316

NASA Astrophysics Data System (ADS)

Over the past decade of nanotube research, a variety of organized nanotube architectures have been fabricated using chemical vapour deposition. The idea of using nanotube structures in separation technology has been proposed, but building macroscopic structures that have controlled geometric shapes, density and dimensions for specific applications still remains a challenge. Here we report the fabrication of freestanding monolithic uniform macroscopic hollow cylinders having radially aligned carbon nanotube walls, with diameters and lengths up to several centimetres. These cylindrical membranes are used as filters to demonstrate their utility in two important settings: the elimination of multiple components of heavy hydrocarbons from petroleum (a crucial step in post-distillation of crude oil) with a single-step filtering process, and the filtration of bacterial contaminants such as Escherichia coli or the nanometre-sized poliovirus (~25 nm) from water. These macro filters can be cleaned for repeated filtration through ultrasonication and autoclaving. The exceptional thermal and mechanical stability of nanotubes, and the high surface area, ease and cost-effective fabrication of the nanotube membranes may allow them to compete with ceramic- and polymer-based separation membranes used commercially.

Srivastava, A.; Srivastava, O. N.; Talapatra, S.; Vajtai, R.; Ajayan, P. M.

2004-09-01

317

Fuel And Oxidizer Filters For The Galileo Spacecraft

NASA Technical Reports Server (NTRS)

Report describes experimental and theoretical studies of filters in propellant streams of propulsion system in Galileo spacecraft. Studies contributed to base of information useful in optimizing design of filters in propulsion systems of future spacecraft.

Jan, Darrell L.; Guernsey, Carl S.; Callas, John L.

1993-01-01

318

Testing Dual Rotary Filters - 12373

The Savannah River National Laboratory (SRNL) installed and tested two hydraulically connected SpinTek® Rotary Micro-filter units to determine the behavior of a multiple filter system and develop a multi-filter automated control scheme. Developing and testing the control of multiple filters was the next step in the development of the rotary filter for deployment. The test stand was assembled using as much of the hardware planned for use in the field as possible, including instrumentation and valving. The control scheme developed will serve as the basis for the scheme used in deployment. The multi-filter setup was controlled via an Emerson DeltaV control system running version 10.3 software. Emerson model MD controllers were installed to run the control algorithms developed during this test. Savannah River Remediation (SRR) Process Control Engineering personnel developed the software used to operate the process test model. While a variety of control schemes were tested, two primary algorithms provided extremely stable control as well as significant resistance to process upsets that could lead to equipment interlock conditions. The control system was tuned to provide satisfactory response to changing conditions during the operation of the multi-filter system. Stability was maintained through the startup and shutdown of one of the filter units while the second was still in operation. The equipment selected for deployment, including the concentrate discharge control valve, the pressure transmitters, and flow meters, performed well. Automation of the valve control integrated well with the control scheme and, when used in concert with the other control variables, allowed automated control of the dual rotary filter system. Experience acquired with multi-filter system behavior and with the system layout during this test helped to identify areas where the current deployment rotary filter installation design could be improved. 
Completion of this testing provides the necessary information on the control and system behavior that will be used in deployment on actual waste. (authors)

Herman, D.T.; Fowley, M.D.; Stefanko, D.B. [Savannah River National Laboratory (United States); Shedd, D.A.; Houchens, C.L. [Savannah River Remediation, Savannah River Site, Aiken, SC 29808 (United States)

2012-07-01

319

An insert which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment.

Sutton, George P. (Danville, CA)

1998-01-01

320

An insert is described which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment. 5 figs.

Sutton, G.P.

1998-07-14

321

The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp. WX-Z1 using a Plackett-Burman design and a Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp. WX-Z1, and response surface methodology (RSM) with a Box-Behnken design was then used to further optimize xylanase production. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, xylanase production under optimal conditions reached 46.50 U/mL, a 1.34-fold increase in xylanase activity compared with the original medium, for fermentation carried out in a 30-L bioreactor. PMID:22949884

Cui, Fengjie; Zhao, Liming

2012-01-01
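As an aside on the screening step this abstract describes, a Plackett-Burman design for up to seven two-level factors in eight runs can be generated from a standard cyclic generator. This is a generic illustration of the method, not the authors' actual design matrix:

```python
import numpy as np

def plackett_burman_8():
    """Build the 8-run Plackett-Burman design for up to 7 two-level factors.

    Rows 0-6 are cyclic shifts of the standard N=8 generator; the last
    row is all low (-1). Columns are pairwise orthogonal.
    """
    gen = np.array([1, 1, 1, -1, 1, -1, -1])       # standard N=8 generator
    rows = [np.roll(gen, i) for i in range(7)]     # cyclic shifts
    rows.append(-np.ones(7, dtype=int))            # closing all-minus run
    return np.array(rows, dtype=int)

X = plackett_burman_8()
# orthogonality check: X.T @ X equals 8 * I for a valid design
assert np.array_equal(X.T @ X, 8 * np.eye(7, dtype=int))
```

In a screening study like the one above, each column would be assigned to one candidate nutrient, and a factor's main effect estimated as the difference of mean responses between its +1 and -1 runs.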

322

Convex Optimization & Euclidean Distance Geometry, Jon Dattorro, Meboo Publishing, 2005, v2014.04.08. ISBN 0976401304 (English); ISBN 9780615193687.

Stanford University

323

We present a power-efficient DC-DC converter to step down an unregulated DC voltage source of 2.7-3.6 V to a regulated 1.8 V DC. The DC-DC converter presented here is designed for a load current range of 0 to 100 mA. It offers output voltage ripple and steady-state error of less than 1%

Prajakta Panse; T. Laxminidhi

2011-01-01
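The abstract gives only the terminal specifications; as an illustration of the first-pass sizing such a step-down design involves, the textbook ideal buck-converter relations estimate the inductor current ripple and output voltage ripple. The component values below are hypothetical, not taken from the paper:

```python
def buck_ripple(v_in, v_out, f_sw, L, C):
    """Estimate steady-state ripple of an ideal synchronous buck converter."""
    d = v_out / v_in                       # duty cycle
    di = v_out * (1 - d) / (L * f_sw)      # peak-to-peak inductor ripple current
    dv = di / (8 * f_sw * C)               # peak-to-peak output voltage ripple
    return d, di, dv

# hypothetical example: 3.3 V in, 1.8 V out, 1 MHz switching, 4.7 uH, 10 uF
d, di, dv = buck_ripple(3.3, 1.8, 1e6, 4.7e-6, 10e-6)
assert dv / 1.8 < 0.01   # ripple below the 1% figure quoted in the abstract
```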

324

Multi-step heat treatments comprising high-temperature forming (150 °C/24 h plus 190 °C for several minutes) and subsequent low-temperature forming (120 °C for 24 h) were developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without reducing tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement in the size and the inter-particle distance of the grain boundary precipitates. Furthermore, the achievement of the high strength is related to the uniform distribution of ultrafine η′ precipitates within grains. - Highlights: • Creep age forming developed for manufacturing of aircraft wing panels from aluminum alloy. • A good combination of properties with minimal springback is required in this component. • This requirement can be improved through appropriate heat treatments. • Multi-step cycles developed in creep age forming of AA7075 to improve springback and properties. • Results indicate simultaneous enhancement of properties and shape accuracy (lower springback).

Arabi Jeshvaghani, R.; Zohdi, H. [Department of Materials Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of)]; Shahverdi, H.R., E-mail: shahverdi@modares.ac.ir [Department of Materials Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of)]; Bozorg, M. [Department of Materials Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of)]; Hadavi, S.M.M. [School of Materials Science and Engineering, MA University of Technology, P.O. Box 16765-3197, Tehran (Iran, Islamic Republic of)]

2012-11-15

325

Particle Filters for State Estimation of Jump Markov Linear Systems

Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem.

Arnaud Doucet; Neil J. Gordon; Vikram Krishnamurthy

1999-01-01
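A minimal bootstrap particle filter for a scalar jump Markov linear system illustrates the idea behind this abstract. This is a generic sketch, not the authors' algorithm; the two-mode model and all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# two-mode scalar JMLS: x' = a[m]*x + 1 + w,  y = x + v; mode m is a Markov chain
A = np.array([0.9, 0.5])             # per-mode state coefficients
P = np.array([[0.95, 0.05],          # mode transition probabilities
              [0.10, 0.90]])
q, r = 0.1, 0.2                      # process / measurement noise std

# simulate a short trajectory
T, x, m = 50, 0.0, 0
truth, ys = [], []
for _ in range(T):
    m = rng.choice(2, p=P[m])
    x = A[m] * x + 1.0 + q * rng.standard_normal()
    truth.append(x)
    ys.append(x + r * rng.standard_normal())

# bootstrap particle filter over the joint (mode, state) space
N = 500
modes = rng.choice(2, size=N)
xs = np.zeros(N)
est = []
for y in ys:
    modes = np.array([rng.choice(2, p=P[mi]) for mi in modes])  # sample mode jumps
    xs = A[modes] * xs + 1.0 + q * rng.standard_normal(N)       # propagate states
    w = np.exp(-0.5 * ((y - xs) / r) ** 2)                      # measurement likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                            # multinomial resampling
    modes, xs = modes[idx], xs[idx]
    est.append(float(xs.mean()))
```

The paper's contribution lies in making such filters efficient for JMLS; this sketch only shows the basic propagate-weight-resample cycle over the joint mode/state space.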

326

Particle filters for state estimation of jump Markov linear systems

Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our

Arnaud Doucet; Neil J. Gordon; Vikram Krishnamurthy

2001-01-01

327

Improving retrieval rates for retrievable inferior vena cava filters.

The introduction of retrievable inferior vena cava (IVC) filters was an important step in the evolution of deep vein thrombosis/pulmonary embolism management. Their removability makes them preferred to permanent filters in many cases. IVC filter retrieval often occurs at a suboptimal rate, leading to complications associated with long-term placement. Improving retrievability includes solutions for patients being lost to follow-up, filter malpositioning, need arising for permanent IVC filtration, filtration requiring longer than the filter's window of retrievability, and filter compromise by the presence of a large trapped clot. This review explores these strategies for retrieval in detail in hopes of improving IVC filter retrieval rates. PMID:23278230

Dixon, Austin; Stavropoulos, S William

2013-01-01

328

An air filter is described that has a counter-rotating drum, i.e., the rotation of the drum is opposite the tangential intake of air. The intake air carries about 1 lb of rock wool fibers per 10^7 cu. ft. of air, sometimes at about 100% relative humidity. The fibers are doffed from the drum by suction nozzles which are adjacent to the drum at the bottom of the filter housing. The drum screen is cleaned by periodically jetting hot dry air at 120 psig through the screen into the suction nozzles.

Jackson, R.E.; Sparks, J.E.

1981-03-03

329

NASA Technical Reports Server (NTRS)

Seeking to find a more effective method of filtering potable water that was highly contaminated, Mike Pedersen, founder of Western Water International, learned that NASA had conducted extensive research in methods of purifying water on board manned spacecraft. The key is Aquaspace Compound, a proprietary WWI formula that scientifically blends various types of granular activated charcoal with other active and inert ingredients. Aquaspace systems remove some substances, such as chlorine, by atomic adsorption; other types of organic chemicals by mechanical filtration; and still others by catalytic reaction. Aquaspace filters are finding wide acceptance in industrial, commercial, residential and recreational applications in the U.S. and abroad.

1988-01-01

330

An approach to the approximation problem for nonrecursive digital filters

A direct design procedure for nonrecursive digital filters, based primarily on the frequency-response characteristic of the desired filters, is presented. An optimization technique is used to minimize the maximum deviation of the synthesized filter from the ideal filter over some frequency range. Using this frequency-sampling technique, a wide variety of low-pass and bandpass filters have been designed, as well as

LAWRENCE R. RABINER; BERNARD GOLD; C. McGonegal

1970-01-01
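The frequency-sampling technique named in this abstract can be sketched directly: place samples of the desired magnitude response on the DFT grid with a linear-phase term, then inverse-transform to obtain symmetric (constant-group-delay) taps. The cutoff and filter length below are arbitrary choices:

```python
import numpy as np

def freq_sample_lowpass(n_taps, cutoff):
    """Linear-phase lowpass FIR via frequency sampling.

    cutoff is the normalized frequency (0..0.5, cycles/sample) below which
    the sampled desired magnitude is 1. n_taps should be odd here.
    """
    k = np.arange(n_taps)
    # symmetric magnitude samples on the DFT grid
    mag = (np.minimum(k, n_taps - k) / n_taps <= cutoff).astype(float)
    # linear-phase term makes the impulse response causal and symmetric
    H = mag * np.exp(-1j * np.pi * k * (n_taps - 1) / n_taps)
    return np.real(np.fft.ifft(H))

h = freq_sample_lowpass(33, 0.2)
# symmetric taps -> exactly linear phase (constant group delay)
assert np.allclose(h, h[::-1])
```

The response interpolates the ideal samples exactly at the grid frequencies; the deviation between grid points is what the paper's optimization then minimizes.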

331

Method and system for training dynamic nonlinear adaptive filters which have embedded memory

NASA Technical Reports Server (NTRS)

Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.

Rabinowitz, Matthew (Inventor)

2002-01-01

332

Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

NASA Astrophysics Data System (ADS)

We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

Bruno, Marcelo G. S.; Dias, Stiven S.

2014-12-01

333

Accurate stereo matching by two-step energy minimization.

In stereo matching, cost-filtering methods and energy-minimization algorithms are considered as two different techniques. Due to their global extent, energy-minimization methods obtain good stereo matching results. However, they tend to fail in occluded regions, in which cost-filtering approaches obtain better results. In this paper, we intend to combine both approaches with the aim of improving overall stereo matching results. We show that a global optimization with a fully connected model can be solved by cost-filtering methods. Based on this observation, we propose to perform stereo matching as a two-step energy-minimization algorithm. We consider two Markov random field (MRF) models: 1) a fully connected model defined on the complete set of pixels in an image and 2) a conventional locally connected model. We solve the energy-minimization problem for the fully connected model, after which the marginal function of the solution is used as the unary potential in the locally connected MRF model. Experiments on the Middlebury stereo data sets show that the proposed method achieves state-of-the-art results. PMID:25622319

Mozerov, Mikhail G; van de Weijer, Joost

2015-03-01
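The cost-filtering half of the combination this abstract describes can be sketched in a few lines: build a per-disparity matching-cost volume, box-filter each cost slice for local aggregation, then take the winner-take-all disparity. This is a toy illustration on synthetic images, not the paper's MRF formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cost_filter_disparity(left, right, max_disp, win=5):
    """Winner-take-all disparity from a box-filtered absolute-difference cost volume."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), 1e9)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, :w - d])       # per-pixel matching cost
        cost[d, :, d:] = uniform_filter(diff, size=win)     # local cost aggregation
    return cost.argmin(axis=0)                              # WTA over disparities

rng = np.random.default_rng(1)
right = rng.random((40, 60))
left = np.roll(right, 3, axis=1)      # synthetic pair: true disparity of 3 everywhere
disp = cost_filter_disparity(left, right, max_disp=8)
```

In the paper, the marginal of the fully connected solution replaces the raw cost before the second, locally connected minimization; the sketch above corresponds only to the filtering-plus-WTA baseline.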

334

The intractable cigarette ‘filter problem’

Background When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the ‘filter problem’. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. Objective This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Methods Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. Results 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the ‘filter problem’. These reveal a period of intense focus on the ‘filter problem’ that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette production. 
The synthetic plastic cellulose acetate became the fundamental cigarette filter material. By the mid-1960s, the meaning of the phrase ‘filter problem’ changed, such that the effort to develop effective filters became a campaign to market cigarette designs that would sustain the myth of cigarette filter efficacy. Conclusions This study indicates that cigarette designers at Philip Morris, British-American Tobacco, Lorillard and other companies believed for a time that they might be able to reduce some of the most dangerous substances in mainstream smoke through advanced engineering of filter tips. In their attempts to accomplish this, they developed the now ubiquitous cellulose acetate cigarette filter. By the mid-1960s cigarette designers realised that the intractability of the ‘filter problem’ derived from a simple fact: that which is harmful in mainstream smoke and that which provides the smoker with ‘satisfaction’ are essentially one and the same. Only in the wake of this realisation did the agenda of cigarette designers appear to transition away from mitigating the health hazards of smoking and towards the perpetuation of the notion that cigarette filters are effective in reducing these hazards. Filters became a marketing tool, designed to keep and recruit smokers as consumers of these hazardous products. PMID:21504917

2011-01-01

335

Tom Kehler, fishery biologist at the U.S. Fish and Wildlife Service's Northeast Fishery Center in Lamar, Pennsylvania, checks the flow rate of water leaving a phosphorus filter column. The USGS has pioneered a new use for acid mine drainage residuals that are currently a disposal challenge, usi...

336

NASA Astrophysics Data System (ADS)

Landslide inventory maps are fundamental for assessing landslide susceptibility, hazard, and risk. In tropical mountainous environments, mapping landslides is difficult as rapid and dense vegetation growth obscures landslides soon after their occurrence. Airborne laser scanning (ALS) data have been used to construct the digital terrain model (DTM) under dense vegetation, but their reliability for landslide recognition in the tropics remains surprisingly unknown. This study evaluates the suitability of ALS for generating an optimal DTM for mapping landslides in the Cameron Highlands, Malaysia. For the bare-earth extraction, we used a hierarchical robust filtering algorithm and a parameterization with three sequential filtering steps. After each filtering step, four interpolation techniques were applied, namely: (i) the linear prediction derived from SCOP++ (SCP), (ii) inverse distance weighting (IDW), (iii) natural neighbor (NEN) and (iv) topo-to-raster (T2R). We assessed the quality of 12 DTMs in two ways: (1) with respect to 448 field-measured terrain heights and (2) based on the interpretability of landslides. The lowest root-mean-square error (RMSE) was 0.89 m across the landscape using three filtering steps and linear prediction as the interpolation method. However, we found that a less stringent DTM filtering unveiled more diagnostic micro-morphological features, but also retained some of the vegetation. Hence, a combination of filtering steps is required for optimal landslide interpretation, especially in forested mountainous areas. IDW was favored as the interpolation technique because it combined reasonable computation times with a DTM free of the artifacts introduced by T2R and NEN, which performed relatively well in the first and second filtering steps, respectively. The laser point density and the resulting ground point density after filtering are key parameters for producing a DTM applicable to landslide identification.
The results showed that the ALS-derived DTMs allowed mapping and classifying landslides beneath equatorial mountainous forests, leading to a better understanding of hazardous geomorphic problems in tropical regions.

Razak, Khamarrul Azahari; Santangelo, Michele; Van Westen, Cees J.; Straatsma, Menno W.; de Jong, Steven M.

2013-05-01
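Of the interpolators compared in this study, inverse distance weighting is the simplest to state: each query point takes a distance-weighted average of the known ground points. A minimal version (power parameter and sample points are arbitrary, not from the study) is:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered ground points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power            # closer points dominate
    w /= w.sum(axis=1, keepdims=True)       # normalize weights per query point
    return w @ z_known

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
# centre of the square: all four corners equidistant -> plain average
print(idw(pts, z, np.array([[0.5, 0.5]])))   # -> [25.]
```

Near a known point the weight for that point dominates, so IDW honors the filtered ground returns almost exactly, which is one reason it adds few artifacts to the DTM.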

337

Decentralized game-theoretic filters

Both continuous-time and discrete-time optimal decentralized game-theoretic filters (DGF) involving a K node interconnected network, where there is a sensor at each node, are derived. In the continuous-time case, a disturbance attenuation problem is solved to obtain the optimal DGF. In a contrasting approach, the solution to a stochastic formulation based on minimizing the expected value of an exponential of

Jinsheng Jang; Jason L. Speyer

1994-01-01

338

Wavelets meet Burgulence : CVS-filtered Burgers equation

Romain Nguyen van yen and Marie Farge show that filtering the solution of the Burgers equation at each time step in a way similar to CVS (Coherent Vortex Simulation) gives the solution of the viscous Burgers equation. The CVS filter used here is based

Kingsbury, Nick

339

Sub-wavelength efficient polarization filter (SWEP filter)

A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS spaced apart by a distance of at least one half an optical wavelength.

Simpson, Marcus L.; Simpson, John T.

2003-12-09

340

Differential Cultural Algorithm for Digital Filters Design

FIR and IIR digital filter design involves multi-parameter optimization, on which some existing intelligent algorithms do not work efficiently. This paper focuses on employing the proposed differential cultural (DC) algorithm to design FIR and IIR digital filters. DC is a global stochastic searching technique that can find the global optima of the problem more rapidly. After describing the theory and

Hongyuan Gao; Ming Diao

2010-01-01
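The differential cultural algorithm in this paper is a variant of differential evolution; a plain DE/rand/1/bin sketch that tunes symmetric (linear-phase) FIR taps toward an ideal lowpass response shows the general multi-parameter setup. The objective, rates, and filter specification are all illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

GRID = np.linspace(0, 0.5, 128)              # normalized frequency grid
DESIRED = (GRID <= 0.15).astype(float)       # ideal lowpass target

def response(half_taps):
    """Magnitude response of a symmetric (linear-phase) odd-length FIR."""
    taps = np.concatenate([half_taps, half_taps[-2::-1]])
    n = np.arange(len(taps))
    return np.abs(np.exp(-2j * np.pi * np.outer(GRID, n)) @ taps)

def cost(half_taps):
    return np.max(np.abs(response(half_taps) - DESIRED))   # minimax error

# plain differential evolution (DE/rand/1/bin)
NP, D, F, CR = 30, 11, 0.6, 0.9
pop = rng.normal(0, 0.2, size=(NP, D))
fit = np.array([cost(p) for p in pop])
f0 = fit.min()
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        trial = np.where(rng.random(D) < CR, a + F * (b - c), pop[i])
        f = cost(trial)
        if f <= fit[i]:                      # greedy selection keeps elites
            pop[i], fit[i] = trial, f
best = pop[fit.argmin()]
```

The greedy selection makes the best cost monotonically non-increasing; the cultural component the paper adds guides the mutation with a shared belief space, which this sketch omits.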

341

Aim: The aim of this study was to examine and evaluate crucial variables in essential oils extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via static-dynamic steps (SDS) procedure, and semi-continuous (SC) technique. Results: Using response surface method the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15), 24 min (dynamic: 8×3 min) in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636

Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

2015-01-01

342

Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond

In this self-contained survey/review paper, we systematically investigate the roots of Bayesian filtering as well as its rich leaves in the literature. Stochastic filtering theory is briefly reviewed with emphasis on nonlinear and non-Gaussian filtering. Following the Bayesian statistics, different Bayesian filtering techniques are developed given different scenarios. Under linear quadratic Gaussian circumstance, the celebrated Kalman filter can

ZHE CHEN
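In the linear-Gaussian case this survey singles out, the Kalman filter is the closed-form Bayesian filter, and its predict/update cycle fits in a few lines for a scalar random walk. All noise variances below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

q, r = 0.05, 0.5                  # process and measurement noise variances
x_true, x_est, p = 0.0, 0.0, 1.0
errs = []
for _ in range(200):
    x_true += np.sqrt(q) * rng.standard_normal()      # random-walk state
    y = x_true + np.sqrt(r) * rng.standard_normal()   # noisy measurement
    p += q                                            # predict: uncertainty grows
    k = p / (p + r)                                   # Kalman gain
    x_est += k * (y - x_est)                          # correct with the innovation
    p *= 1 - k                                        # posterior variance shrinks
    errs.append(x_est - x_true)

err_var = float(np.var(errs))     # well below the raw measurement variance r
```

Particle filters, which the survey develops for the nonlinear/non-Gaussian case, replace the single (mean, variance) pair here with a weighted cloud of samples.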

343

NASA Technical Reports Server (NTRS)

A compact, lightweight electrolytic water sterilizer available through Ambassador Marketing, generates silver ions in concentrations of 50 to 100 parts per billion in water flow system. The silver ions serve as an effective bactericide/deodorizer. Tap water passes through filtering element of silver that has been chemically plated onto activated carbon. The silver inhibits bacterial growth and the activated carbon removes objectionable tastes and odors caused by addition of chlorine and other chemicals in municipal water supply. The three models available are a kitchen unit, a "Tourister" unit for portable use while traveling and a refrigerator unit that attaches to the ice cube water line. A filter will treat 5,000 to 10,000 gallons of water.

1982-01-01

344

Metal films perforated with subwavelength hole arrays have been shown to demonstrate an effect known as Extraordinary Transmission (EOT). In EOT devices, optical transmission passbands arise that can have up to 90% transmission and a bandwidth that is only a few percent of the designed center wavelength. By placing a tunable dielectric in proximity to the EOT mesh, one can tune the center frequency of the passband. We have demonstrated over 1 micron of passive tuning in structures designed for an 11 micron center wavelength. If a suitable midwave (3-5 micron) tunable dielectric (perhaps BaTiO3) were integrated with an EOT mesh designed for midwave operation, it is possible that a fast, voltage tunable, low temperature filter solution could be demonstrated with a several hundred nanometer passband. Such an element could, for example, replace certain components in a filter wheel solution.

Passmore, Brandon Scott; Shaner, Eric Arthur; Barrick, Todd A.

2009-09-01

345

Filter for biomedical imaging and image processing

NASA Astrophysics Data System (ADS)

Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.

Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz

2006-07-01
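The abstract describes the Hebbian coefficient update only at a high level; a heavily simplified 1-D analogue, where a smoothing kernel is nudged by the product of its input window and output (the classic Hebbian rule) and then renormalized, might look like the following. This is entirely illustrative and is not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def hebbian_smooth(x, taps=5, lr=1e-4, epochs=3):
    """Adapt a normalized FIR kernel with a Hebbian (input*output) update."""
    w = np.ones(taps) / taps                 # start from a plain moving average
    for _ in range(epochs):
        for i in range(taps, len(x)):
            window = x[i - taps:i]
            y = w @ window                   # filter output for this window
            w += lr * y * window             # Hebbian coefficient update
            w /= np.abs(w).sum()             # renormalize to keep the kernel bounded
    return np.convolve(x, w, mode="same")

signal = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = signal + 0.3 * rng.standard_normal(200)
smoothed = hebbian_smooth(noisy)
```

The renormalization step plays the role of the stabilization any pure Hebbian rule needs; how the paper handles 2-D neighborhoods and edge preservation is not captured here.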

346

NSDL National Science Digital Library

This lesson from Illuminations looks at exponential decay. The example of how kidneys filter blood is used. The material asks students to determine the amount of a drug that remains in the body over a period of time. Students will predict behavior by an exponential decay model and graph an exponential set of data. The lesson is appropriate for grades 9-12 and should require 1 class period to complete.
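The drug-elimination model in this lesson is a one-line geometric recurrence; for instance, with a hypothetical 25% of the drug filtered out each hour, the amount remaining decays exponentially:

```python
def remaining(dose_mg, fraction_removed, hours):
    """Amount of drug left after repeated filtering: A(t) = A0 * (1 - f)^t."""
    return dose_mg * (1 - fraction_removed) ** hours

# 1000 mg dose, kidneys clear 25% per hour (hypothetical rate)
amounts = [round(remaining(1000, 0.25, t), 1) for t in range(5)]
print(amounts)   # -> [1000.0, 750.0, 562.5, 421.9, 316.4]
```

Plotting these values gives exactly the exponential decay graph the lesson asks students to produce.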

347

A novel method for the multiobjective optimization of arbitrary planar array excitations is presented. The optimization problem formulation, which inherently takes into account every array element pattern as well as all interelement couplings, is based on matrix-valued functions which are computed from the generalized-scattering-matrix characterization of an array and spherical mode expansions of its radiated field. It allows the maximization

Juan Corcoles; Miguel A. Gonzalez; Jesús Rubio

2009-01-01

348

Treadmill stimulation improves newborn stepping.

To shed further light on infant stepping, we investigated whether newborns could step on a treadmill and adapt their steps to graded velocities. Twenty-one newborns (mean = 3 days) were supported for 60 s trials on a treadmill that was static or moved at 13.4, 17.2, or 23.4 cm/s. Video analysis revealed that newborns made more real steps than in-place "pumps" on the moving treadmill than on the static treadmill and made more real steps at 17.2 than 23.4 cm/s. While the treadmill had no effect on arousal, stepping increased and showed higher quality and coordination across conditions when infants were crying. These findings suggest that treadmill interventions currently used to promote the development of independent locomotion in infants at risk of delay could begin at birth. Further investigation is needed to establish the optimal conditions for newborn treadmill stepping and to specify how arousal affects step rate, quality, and coordination. © 2015 Wiley Periodicals, Inc. Dev Psychobiol 57: 247-254, 2015. PMID:25644966

Siekerman, Kim; Barbu-Roth, Marianne; Anderson, David I; Donnelly, Alan; Goffinet, François; Teulier, Caroline

2015-03-01

349

Stochastic resonance with matched filtering

Along with the development of interferometric gravitational wave detectors, we are entering an epoch of gravitational wave astronomy, which will open a brand new window for astrophysics to observe our universe. Almost all of the data analysis methods in gravitational wave detection are based on matched filtering. Gravitational wave detection is a typical example of weak signal detection, where the weak signal is buried in strong instrument noise. So it seems attractive if we can take advantage of stochastic resonance. But unfortunately, almost all stochastic resonance theory is based on the Fourier transform and has no relation to matched filtering. In this paper we try to relate stochastic resonance to matched filtering. Our results show that stochastic resonance can indeed be combined with matched filtering for both periodic and non-periodic input signals. This encouraging result is a first step toward applying stochastic resonance to matched filtering in gravitational wave detection. In addition, based on matched filtering, we propose a novel measurement method for stochastic resonance which is valid for both periodic and non-periodic driving signals.

Li-Fang Li; Jian-Yang Zhu

2010-06-28

350

The present study aimed to optimize the procedure for coating electrospun poly(ε-caprolactone) (PCL) fibers with a calcium phosphate (CP) layer in order to improve their potential as a bone tissue engineering scaffold. In particular, attention was paid to the reproducibility of the procedure, the morphology of the coating, and the preservation of the porous structure of the scaffold. Ethanol dipping followed by an ultrasonic assisted hydrolysis of the fiber surface with sodium hydroxide solution efficiently activated the surface. The resulting reactive groups served as nucleation points for CP precipitation, induced by alternate dipping of the samples in calcium and phosphate rich solutions. By controlling the deposition, a reproducible thin layer of CP was grown onto the fiber surface. The deposited CP was identified as calcium-deficient apatite (CDHAp). Analysis of the cell viability, adhesion, and proliferation of MC3T3-E1 cells on untreated and CDHAp coated PCL scaffolds showed that the CDHAp coating enhanced the cell response, as the number of attached cells was higher in comparison to the untreated PCL and cells on the CDHAp coated samples showed similar morphologies to the ones found in the positive control. © 2014 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 103A: 511-524, 2015. PMID:24733786

Luickx, Nathalie; Van den Vreken, Natasja; D'Oosterlinck, Willem; Van der Schueren, Lien; Declercq, Heidi; De Clerck, Karen; Cornelissen, Maria; Verbeeck, Ronald

2015-02-01

351

Rocket noise filtering system using digital filters

NASA Technical Reports Server (NTRS)

A set of digital filters is designed to filter rocket noise to various bandwidths. The filters are designed to have constant group delay and are implemented in software on a general purpose computer. The Parks-McClellan algorithm is used. Preliminary tests are performed to verify the design and implementation. An analog filter which was previously employed is also simulated.
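The constant group delay mentioned above comes from linear phase, i.e. a symmetric impulse response. As a hedged stand-in for the Parks-McClellan design (which also yields symmetric, equiripple taps), here is a simple windowed-sinc low-pass FIR; the tap count and cutoff are illustrative assumptions, not the original design values.

```python
# Hedged stand-in for the Parks-McClellan design described above: a
# windowed-sinc low-pass FIR. Like a Parks-McClellan (equiripple) filter, its
# taps are symmetric, which is what gives linear phase and hence constant
# group delay. Tap count and cutoff are illustrative.
import math

def windowed_sinc_lowpass(num_taps=31, cutoff=0.2):   # cutoff in cycles/sample
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming
        taps.append(ideal * window)
    return taps

h = windowed_sinc_lowpass()
# Symmetric impulse response => linear phase => group delay of (N - 1) / 2 samples.
symmetric = all(abs(h[n] - h[len(h) - 1 - n]) < 1e-12 for n in range(len(h)))
print(symmetric)
```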

Mauritzen, David

1990-01-01

352

An analytical formula for the design of quadrature mirror filters

Quadrature mirror filters have an outstanding relevance in the implementation of filter banks for dividing the speech signal into frequency bands and for reconstructing it from these subbands. An analytical formula is given, which allows one to optimize the design of the basic low-pass FIR filter by means of a straight nonlinear minimization procedure.

GIANCARLO PIRANI; VALERIO ZINGARELLI

1984-01-01

353

Design of the J-PAS and J-PLUS filter systems

NASA Astrophysics Data System (ADS)

J-PAS (Javalambre-PAU Astrophysical Survey) is a Spanish-Brazilian collaboration to conduct an innovative photometric survey of more than 8000 square degrees of northern sky using a system of 57 filters: 54 narrow-band (FWHM = 13.8 nm) filters continuously populating the spectrum between 370 and 920 nm with 10.0 nm steps, plus 3 broad-band filters. Together with the main J-PAS survey, the collaboration is carrying out J-PLUS (the Javalambre Photometric Local Universe Survey), an all-sky survey using a set of 12 carefully optimized broad- and narrow-band filters that will be used to perform the calibration tasks for the main survey. The J-PAS survey will be carried out using JPCam, a 14-CCD mosaic camera using the new e2v 9.2k-by-9.2k, 10 μm pixel detectors, mounted on the JST/T250, a dedicated 2.55-m wide-field telescope at the Observatorio Astrofísico de Javalambre (OAJ) in Teruel, Spain. J-PLUS, on the other hand, will be carried out using a wide field CCD camera (the T80Cam) equipped with a large format STA 1600 CCD (10.5k-by-10.5k, 9 μm pixel) and mounted on the JAST/T80, a dedicated 0.83-m wide-field telescope at the OAJ. In both cases, the filters will operate close to, but upstream from, the dewar window in a fast converging optical beam. This optical configuration imposes challenging requirements for the J-PLUS and J-PAS filters, some of them requiring the development of new filter design solutions. This paper describes the main requirements and design strategies for these two sets of filters.

Marín-Franch, A.; Chueca, S.; Moles, M.; Benitez, N.; Taylor, K.; Cepa, J.; Cenarro, A. J.; Cristobal-Hornillos, D.; Ederoclite, A.; Gruel, N.; Hernández-Fuertes, J.; López-Sainz, A.; Luis-Simoes, R.; Rueda-Teruel, F.; Rueda-Teruel, S.; Varela, J.; Yanes-Díaz, A.; Brauneck, U.; Danielou, A.; Dupke, R.; Fernández-Soto, A.; Mendes de Oliveira, C.; Sodré, L.

2012-09-01

354

A pharmacokinetic-pharmacodynamic (PKPD) model that characterizes the full time course of in vitro time-kill curve experiments of antibacterial drugs was here evaluated in its capacity to predict the previously determined PK/PD indices. Six drugs (benzylpenicillin, cefuroxime, erythromycin, gentamicin, moxifloxacin, and vancomycin), representing a broad selection of mechanisms of action and PK and PD characteristics, were investigated. For each drug, a dose fractionation study was simulated, using a wide range of total daily doses given as intermittent doses (dosing intervals of 4, 8, 12, or 24 h) or as a constant drug exposure. The time course of the drug concentration (PK model) as well as the bacterial response to drug exposure (in vitro PKPD model) was predicted. Nonlinear least-squares regression analyses determined the PK/PD index (the maximal unbound drug concentration [fC(max)]/MIC, the area under the unbound drug concentration-time curve [fAUC]/MIC, or the percentage of a 24-h time period that the unbound drug concentration exceeds the MIC [fT(>MIC)]) that was most predictive of the effect. The in silico predictions based on the in vitro PKPD model identified the previously determined PK/PD indices, with fT(>MIC) being the best predictor of the effect for β-lactams and fAUC/MIC being the best predictor for the four remaining evaluated drugs. The selection and magnitude of the PK/PD index were, however, shown to be sensitive to differences in PK in subpopulations, uncertainty in MICs, and investigated dosing intervals. In comparison with the use of the PK/PD indices, a model-based approach, where the full time course of effect can be predicted, has a lower sensitivity to study design and allows for PK differences in subpopulations to be considered directly. This study supports the use of PKPD models built from in vitro time-kill curves in the development of optimal dosing regimens for antibacterial drugs. PMID:21807983

Nielsen, Elisabet I; Cars, Otto; Friberg, Lena E

2011-10-01

355

Kalman Filter and Extended Kalman Filter Namrata Vaswani, namrata@iastate.edu

Prediction step: x̂_{i|i-1} = F_i x̂_{i-1}, with Var(x_i | Y_{i-1}) = P_{i|i-1} = F_i P_{i-1} F_i^T + Q_i. Filtering (correction) step: K_i = P_{i|i-1} H_i^T (H_i P_{i|i-1} H_i^T + R_i)^{-1}. Summarizing the Kalman filter: x̂_{i|i-1} = F_i x̂_{i-1}; P_{i|i-1} = F_i P_{i-1} F_i^T + Q_i; K_i = P_{i|i-1} H_i^T (H_i P_{i|i-1} H_i^T + R_i)^{-1}. For the extended KF: F_i = ∂f_i/∂x evaluated at x̂_{i-1}; x̂_{i|i-1} = f_i(x̂_{i-1}); P_{i|i-1} = F_i P_{i-1} F_i^T + Q_i; H_i = ∂h_i/∂x evaluated at x̂_{i|i-1}; K_i is computed as above.
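A minimal scalar-state sketch of the predict/correct recursion summarized above (with a one-dimensional state all matrices reduce to scalars; the noise levels and measurement sequence are illustrative assumptions):

```python
# Minimal scalar Kalman filter sketch following the predict/correct equations
# above; with a one-dimensional state, F, Q, H, R, P, and K are all scalars.
# The noise levels and measurement sequence are illustrative assumptions.

def kalman_step(x_prev, P_prev, y, F=1.0, Q=0.01, H=1.0, R=0.25):
    # Prediction: x_{i|i-1} = F x_{i-1},  P_{i|i-1} = F P_{i-1} F + Q
    x_pred = F * x_prev
    P_pred = F * P_prev * F + Q
    # Correction: K_i = P_{i|i-1} H / (H P_{i|i-1} H + R)
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (y - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x_est, P_est = 0.0, 1.0
for y in [1.2, 0.9, 1.1, 1.0, 1.05]:        # noisy readings of a constant ~1.0
    x_est, P_est = kalman_step(x_est, P_est, y)
print(round(x_est, 2))
```

The estimate converges toward the underlying constant while the posterior variance P shrinks with each measurement.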

Vaswani, Namrata

356

ADVANCED HOT GAS FILTER DEVELOPMENT

DuPont Lanxide Composites, Inc. undertook a sixty-month program, under DOE Contract DEAC21-94MC31214, in order to develop hot gas candle filters from a patented material technology known as PRD-66. The goal of this program was to extend the development of this material as a filter element and fully assess the capability of this technology to meet the needs of Pressurized Fluidized Bed Combustion (PFBC) and Integrated Gasification Combined Cycle (IGCC) power generation systems at commercial scale. The principal objective of Task 3 was to build on the initial PRD-66 filter development, optimize its structure, and evaluate basic material properties relevant to the hot gas filter application. Initially, this consisted of an evaluation of an advanced filament-wound core structure that had been designed to produce an effective bulk filter underneath the barrier filter formed by the outer membrane. The basic material properties to be evaluated (as established by the DOE/METC materials working group) would include mechanical, thermal, and fracture toughness parameters for both new and used material, for the purpose of building a material database consistent with what is being done for the alternative candle filter systems. Task 3 was later expanded to include analysis of PRD-66 candle filters, which had been exposed to actual PFBC conditions, development of an improved membrane, and installation of equipment necessary for the processing of a modified composition. Task 4 would address essential technical issues involving the scale-up of PRD-66 candle filter manufacturing from prototype production to commercial scale manufacturing. The focus would be on capacity (as it affects the ability to deliver commercial order quantities), process specification (as it affects yields, quality, and costs), and manufacturing systems (e.g. QA/QC, materials handling, parts flow, and cost data acquisition).
Any filters fabricated during this task would be used for product qualification tests being conducted by Westinghouse at Foster-Wheeler's Pressurized Circulating Fluidized Bed (PCFBC) test facility in Karhula, Finland. Task 5 was designed to demonstrate the improvements implemented in Task 4 by fabricating fifty 1.5-meter hot gas filters. These filters were to be made available for DOE-sponsored field trials at the Power Systems Development Facility (PSDF), operated by Southern Company Services in Wilsonville, Alabama.

E.S. Connolly; G.D. Forsythe

2000-09-30

357

Performance index: A method for quantitative evaluation of filters used in clinical SPECT

The purpose of this study was to design a method for optimal filter selection during the reconstruction of clinical SPECT images. Hamming, Bartlett, Parzen and Butterworth filters were evaluated at different cutoff frequencies when applied to reconstruction of the Jaszczak phantom and liver SPECTs. The phantom filled with 6 mCi of Tc-99m was imaged following 4 different protocols which varied in matrix sizes (128 x 128 or 64 x 64) and in number of steps (128 or 64). Total imaging time in the 4 protocols was 24 minutes. A total of 160 reconstructions were analyzed. Liver SPECTs from 2 patients with small metastatic lesions from colon Ca were similarly studied. An ECT Performance Index (ECT PI) was defined as the product of the contrast efficiency function (ECT C) and uniformity (ECT U). ECT C as a function of the radius was measured following Rollo's approach. ECT U was measured as the ratio between min. and max. counts per pixel in a known uniform region. ECT PI was computed on a slice through the void spheres region of the phantom. In liver SPECTs the ECT U was measured over the spleen. The most favorable ECT PI (0.35, radius 7.9 mm) was obtained with images in 128 x 128 matrices, 128 steps, processed with a Butterworth cutoff frequency of 0.19, filter order 4. When images were acquired in 64 x 64 matrices using 64 steps the ECT PI was lower and influenced to a lesser degree by both choice of filter and cutoff frequency. Results in the two liver SPECT examinations were parallel to those found in the phantom studies confirming the clinical usefulness of the ECT PI in the evaluation of filters for reconstruction of SPECT images.
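The performance index defined above is simply the product of the contrast efficiency function and uniformity, with uniformity taken as the min/max counts-per-pixel ratio over a known uniform region. A toy computation with hypothetical numbers:

```python
# Toy computation of the ECT Performance Index: PI = C x U, where U is the
# min/max counts-per-pixel ratio over a known uniform region. The contrast
# value and pixel counts are hypothetical, not measurements from the study.
def ect_performance_index(contrast_efficiency, counts_uniform_region):
    uniformity = min(counts_uniform_region) / max(counts_uniform_region)
    return contrast_efficiency * uniformity

pi_value = ect_performance_index(0.5, [88, 95, 100, 92])
print(round(pi_value, 2))  # 0.5 * (88 / 100) = 0.44
```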

Contino, J.; Touya, J.J.; Corbus, H.F.; Rahimian, J.

1984-01-01

358

Optical ranked-order filtering using threshold decomposition

A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
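The threshold-decomposition identity the system exploits can be checked numerically: a window-of-3 median equals the stacked results of binary linear filtering plus pointwise thresholding. A small digital sketch (a software stand-in for the optical correlator, with illustrative data):

```python
# Digital sketch of threshold decomposition for a window-of-3 median filter
# (interior samples only). Each binary slice is filtered linearly (a windowed
# sum) and thresholded pointwise; stacking the slices reproduces the median.
# The test signal is illustrative.
def median3_direct(x):
    return [sorted([x[i - 1], x[i], x[i + 1]])[1] for i in range(1, len(x) - 1)]

def median3_threshold_decomposition(x, levels):
    out = []
    for i in range(1, len(x) - 1):
        value = 0
        for t in range(1, levels):
            window = [1 if v >= t else 0 for v in (x[i - 1], x[i], x[i + 1])]
            value += 1 if sum(window) >= 2 else 0   # linear filter + threshold
        out.append(value)
    return out

signal = [0, 3, 1, 2, 0, 2, 3, 1]
print(median3_direct(signal) == median3_threshold_decomposition(signal, levels=4))  # prints True
```

Replacing the `sum(window) >= 2` test with `>= 1` or `== 3` yields the maximum and minimum filters, mirroring how the same optical architecture performs all three operations.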

Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

1987-10-09

359

Optical ranked-order filtering using threshold decomposition

A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.

Allebach, Jan P. (West Lafayette, IN); Ochoa, Ellen (Pleasanton, CA); Sweeney, Donald W. (Alamo, CA)

1990-01-01

360

Quantum neural network-based EEG filtering for a brain-computer interface.

A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions. PMID:24807028

Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin

2014-02-01

361

Feasibility of nanofluid-based optical filters.

In this article we report recent modeling and design work indicating that mixtures of nanoparticles in liquids can be used as an alternative to conventional optical filters. The major motivation for creating liquid optical filters is that they can be pumped in and out of a system to meet transient needs in an application. To demonstrate the versatility of this new class of filters, we present the design of nanofluids for use as long-pass, short-pass, and bandpass optical filters using a simple Monte Carlo optimization procedure. With relatively simple mixtures, we achieve filters with <15% mean-squared deviation in transmittance from conventional filters. We also discuss the current commercial feasibility of nanofluid-based optical filters by including an estimation of today's off-the-shelf cost of the materials. While the limited availability of quality commercial nanoparticles makes it hard to compete with conventional filters, new synthesis methods and economies of scale could enable nanofluid-based optical filters in the near future. As such, this study lays the groundwork for creating a new class of selective optical filters for a wide range of applications, namely communications, electronics, optical sensors, lighting, photography, medicine, and many more. PMID:23458793
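As a rough digital analogue of the Monte Carlo optimization mentioned above (the component spectra, mixing model, and target curve below are invented for illustration, not the authors'), one can randomly sample mixture fractions and keep the composition whose transmittance best matches a target long-pass curve:

```python
# Hypothetical Monte Carlo search for a nanofluid-style long-pass filter.
# Component transmittance spectra, the Beer-Lambert-like mixing rule, and the
# target curve are all invented for illustration.
import math
import random

random.seed(1)

wavelengths = list(range(400, 701, 10))                  # nm
target = [0.0 if w < 550 else 1.0 for w in wavelengths]  # ideal long-pass

def component(cutoff_nm, softness_nm):
    # Smooth sigmoidal transmittance edge for one hypothetical absorber.
    return [1.0 / (1.0 + math.exp(-(w - cutoff_nm) / softness_nm))
            for w in wavelengths]

components = [component(500, 15), component(560, 25), component(620, 40)]

def mixture_transmittance(fractions):
    # Transmittances multiply, each weighted by its concentration fraction.
    return [math.prod(c[i] ** f for f, c in zip(fractions, components))
            for i in range(len(wavelengths))]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

best_err, best_fracs = float("inf"), None
for _ in range(2000):
    fracs = [random.random() for _ in components]
    err = mse(mixture_transmittance(fracs), target)
    if err < best_err:
        best_err, best_fracs = err, fracs
print(round(best_err, 3))
```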

Taylor, Robert A; Otanicar, Todd P; Herukerrupu, Yasitha; Bremond, Fabienne; Rosengarten, Gary; Hawkes, Evatt R; Jiang, Xuchuan; Coulombe, Sylvain

2013-03-01

362

NASA Astrophysics Data System (ADS)

Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.

Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús

2006-05-01

363

Purpose: Reducing patient dose while maintaining (or even improving) image quality is one of the foremost goals in CT imaging. To this end, we consider the feasibility of optimizing CT scan protocols in conjunction with the application of different beam-hardening filtrations and assess this augmentation through noise-power spectrum (NPS) and detector quantum efficiency (DQE) analysis. Methods: American College of Radiology (ACR) and Catphan phantoms (The Phantom Laboratory) were scanned with a 64 slice CT scanner when additional filtration of thickness and composition (e.g., copper, nickel, tantalum, titanium, and tungsten) had been applied. A MATLAB-based code was employed to calculate the image of noise NPS. The Catphan Image Owl software suite was then used to compute the modulated transfer function (MTF) responses of the scanner. The DQE for each additional filter, including the inherent filtration, was then computed from these values. Finally, CT dose index (CTDIvol) values were obtained for each applied filtration through the use of a 100 mm pencil ionization chamber and CT dose phantom. Results: NPS, MTF, and DQE values were computed for each applied filtration and compared to the reference case of inherent beam-hardening filtration only. Results showed that the NPS values were reduced between 5 and 12% compared to inherent filtration case. Additionally, CTDIvol values were reduced between 15 and 27% depending on the composition of filtration applied. However, no noticeable changes in image contrast-to-noise ratios were noted. Conclusion: The reduction in the quanta noise section of the NPS profile found in this phantom-based study is encouraging. The reduction in both noise and dose through the application of beam-hardening filters is reflected in our phantom image quality. 
However, further investigation is needed to ascertain the applicability of this approach to reducing patient dose while maintaining diagnostically acceptable image qualities in a clinical setting.

Collier, J; Aldoohan, S; Gill, K

2014-06-01

364

Step graded buffer for (110) InSb quantum wells grown by molecular beam epitaxy

NASA Astrophysics Data System (ADS)

We report on a two step buffer layer preparation for the growth of InSb quantum wells on a (110) GaAs surface. At each buffer layer step, layer conditions were optimized to produce smooth surfaces compatible with InSb quantum wells. Through varying growth rate, group V/III flux ratio, substrate temperature, and the addition of in situ annealing, we are able to grow In0.85Al0.15Sb on a GaAs substrate with an RMS surface roughness of approximately 2 nm. Surface morphology and cross-sectional transmission electron microscopy (TEM) were analyzed to understand the formation of threading dislocations, inclusions and dislocation filtering. This work presents an initial study for the growth of large lattice mismatched III-V materials on the (110) surface.

Podpirka, Adrian A.; Twigg, Mark E.; Tischler, Joseph G.; Magno, Richard; Bennett, Brian R.

2014-10-01

365

Genetically Engineered Microelectronic Infrared Filters

NASA Technical Reports Server (NTRS)

A genetic algorithm is used for design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is unfeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing new and optimized device designs. Results for nanodevices are presented in a companion paper at this workshop.
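A genetic algorithm of the kind described reduces, in miniature, to selection, crossover, and mutation over a population of parameter strings. The sketch below evolves a bit string toward an all-ones target, a toy stand-in for a filter-performance fitness; all parameters are illustrative, not the authors':

```python
# Minimal genetic-algorithm sketch for the kind of design search described
# above. The fitness here is the toy "all bits on" objective standing in for a
# real filter-performance metric; population sizes and rates are illustrative.
import random

random.seed(3)

GENES, POP, GENERATIONS = 32, 40, 60

def fitness(individual):
    return sum(individual)                # proxy for filter performance

def crossover(a, b):
    cut = random.randrange(1, GENES)      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(individual, rate=0.02):
    return [g ^ 1 if random.random() < rate else g for g in individual]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]      # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Keeping the top half of each generation makes the best fitness non-decreasing, a simple form of elitism.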

Cwik, Tom; Klimeck, Gerhard

1998-01-01

366

Robust INS/GPS Sensor Fusion for UAV Localization Using SDRE Nonlinear Filtering

The aim of this paper is to present a new INS/GPS sensor fusion scheme, based on state-dependent Riccati equation (SDRE) nonlinear filtering, for the unmanned aerial vehicle (UAV) localization problem. The SDRE navigation filter is proposed as an alternative to the extended Kalman filter (EKF), which has been largely used in the literature. Based on optimal control theory, the SDRE filter solves issues linked

Abdelkrim Nemra; Nabil Aouf

2010-01-01

367

Effects of electron beam irradiation of cellulose acetate cigarette filters

NASA Astrophysics Data System (ADS)

A method to reduce the molecular weight of cellulose acetate used in cigarette filters by using electron beam irradiation is demonstrated. Radiation levels easily obtained with commercially available electron accelerators result in a decrease in average molecular weight of about six times, with no embrittlement or significant change in the elastic behavior of the filter. Since a first step in the biodegradation of cigarette filters is reduction in the filter material's molecular weight, this invention has the potential to allow the production of significantly faster degrading filters.

Czayka, M.; Fisch, M.

2012-07-01

368

Laboratory comparison of continuous vs. binary phase-mostly filters

NASA Technical Reports Server (NTRS)

Recent developments in spatial light modulators have led to devices that are capable of continuous phase modulation, even if only over a limited range. One of these devices, the deformable mirror device (DMD), is used to compare the relative merits of binary and partially continuous phase filters in a specific problem of pattern recognition by optical correlation. Each filter was physically limited to only about a radian of modulation. Researchers have predicted that for low input noise levels, continuous phase-only filters should have a higher absolute correlator peak output than the corresponding binary filters, as well as a larger SNR. When continuous and binary filters were first implemented on the DMD they exhibited the same performance, so an ad hoc filter optimization procedure was developed for use in the laboratory. The optimized continuous filter gave higher correlation peaks than did an independently optimized binary filter. Background behavior in the correlation plane was similar for the two filters, and thus the SNR showed the same improvement for the continuous filter. A phasor diagram analysis and computer simulation have explained part of the optimization procedure's success.

Monroe, Stanley E., Jr.; Knopp, Jerome; Juday, Richard D.

1989-01-01

369

Synthetic discriminant estimating filter using complex constraints

We previously proposed and implemented a joint transform correlator (JTC) using an optimal trade-off synthetic discriminant function (OT-SDF) filter in order to provide in-plane rotation invariance. We propose to improve that system by using what we call a synthetic discriminant estimating function (SDEF) filter, which also estimates the object rotation angle (without degrading the discrimination capability) through modulating the phase

Laurent Bigue; Michel Fraces; Pierre Ambs

1995-01-01

370

Optimal edge-based shape detection.

We propose an approach to accurately detecting two-dimensional (2-D) shapes. The cross section of the shape boundary is modeled as a step function. We first derive a one-dimensional (1-D) optimal step edge operator, which minimizes both the noise power and the mean squared error between the input and the filter output. This operator is found to be the derivative of the double exponential (DODE) function, originally derived by Ben-Arie and Rao. We define an operator for shape detection by extending the DODE filter along the shape's boundary contour. The responses are accumulated at the centroid of the operator to estimate the likelihood of the presence of the given shape. This method of detecting a shape is in fact a natural extension of the task of edge detection at the pixel level to the problem of global contour detection. This simple filtering scheme also provides a tool for a systematic analysis of edge-based shape detection. We investigate how the error is propagated by the shape geometry. We have found that, under general assumptions, the operator is locally linear at the peak of the response. We compute the expected shape of the response and derive some of its statistical properties. This enables us to predict both its localization and detection performance and adjust its parameters according to imaging conditions and given performance specifications. Applications to the problem of vehicle detection in aerial images, human facial feature detection, and contour tracking in video are presented. PMID:18249692
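The 1-D operator described (the derivative of a double exponential) is odd-symmetric, and correlating it with a step edge produces a peak response at the edge. A small sketch with an illustrative decay constant, kernel support, and a noiseless step (these are assumptions, not the paper's parameters):

```python
# Sketch of a 1-D DODE-like edge operator: the derivative of a two-sided
# exponential is sign(x) * exp(-alpha * |x|) up to scale. The decay constant,
# kernel support, and test signal are illustrative choices.
import math

alpha = 0.5
taps = range(-10, 11)
dode = [0.0 if n == 0 else math.copysign(math.exp(-alpha * abs(n)), n)
        for n in taps]                      # odd-symmetric kernel

signal = [0.0] * 15 + [1.0] * 16            # noiseless step edge near index 15

def correlate_valid(x, h):
    m = len(h)
    return [sum(x[i + j] * h[j] for j in range(m)) for i in range(len(x) - m + 1)]

response = correlate_valid(signal, dode)
peak = max(range(len(response)), key=lambda i: abs(response[i]))
print(peak + 10)  # strongest response, mapped back to signal coordinates
```

The strongest (positive) response lands at the step location, which is the pixel-level behavior the paper extends along a whole shape contour.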

Moon, Hankyu; Chellappa, Rama; Rosenfeld, Azriel

2002-01-01

371

ERIC Educational Resources Information Center

Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)

Stille, J. K.

1981-01-01

372

A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

Bourret, S.C.; Swansen, J.E.

1982-07-02

373

A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

Bourret, Steven C. (Los Alamos, NM); Swansen, James E. (Los Alamos, NM)

1984-01-01

374

Visual pattern recognition using coupled filters

NASA Astrophysics Data System (ADS)

We discuss the use of an optical correlator with a highly coupled filter and dappled targets to track an object in a field of view cluttered by background noise and/or similar objects. The dappled targets are fractal images whose statistics are independent of scale. Each is unique for tracking the targets. We report the drop in correlation (hence recognition) of an object as a function of in-plane rotation and as a function of range. We discuss plans for an application in Johnson Space Center's Automation and Robotics group, in which correlation processing of these targets would distinguish an object and pass its position and orientation to a robot control system. Using MEDOF (minimum Euclidean distance optimal filter) to create filters on the coupled filter modulator, we show that background clutter can be optically filtered out.

Monroe, Stanley E., Jr.; Juday, Richard D.; Barton, R. Shane; Qin, Michael K.

1995-06-01

375

Polychromator filter design with genetic algorithm

NASA Astrophysics Data System (ADS)

In Thomson scattering (TS) diagnostics, polychromators are equipped with several optical band-pass filters which cover the spectral region where the radiation from the incident laser beam is expected to be Doppler shifted. The spectral location of the transmission band of individual filters has a strong influence on the measured electron temperature (Te) since the latter is derived from a previously computed lookup table including the spectral specifications of the filters. Here, we present the design of the set of polychromator filters through genetic algorithms (GAs). We examine the developed algorithm under two specific target conditions, and optimized filter sets covering the wavelength region longer than the wavelength of the incident laser seem to be more effective in improving the accuracy of the Te calculations provided by the diagnostic.

Oh, Seungtae; Park, Jiyoung

2015-02-01

376

A Bloom filter is a simple space-efficient randomized data structure for representing a set in order to support membership queries. Although Bloom filters allow false positives, for many applications the space savings outweigh this drawback when the probability of an error is sufficiently low. We introduce compressed Bloom filters, which improve performance when the Bloom filter is passed as a
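A minimal Bloom filter sketch (the bit-array size, hash count, and hashing scheme are illustrative choices, not from the paper): k hash functions set k bits per inserted key, and a membership query reports true only if all k bits are set, so false negatives are impossible while false positives occur with low probability.

```python
# Minimal Bloom filter sketch. Sizes and the SHA-256-based hashing scheme are
# illustrative; false negatives cannot occur, false positives can.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, key):
        # Derive k bit positions by salting one hash function with an index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def __contains__(self, key):
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter()
for word in ["alpha", "beta", "gamma"]:
    bf.add(word)
print("alpha" in bf, "delta" in bf)
```

Queries for inserted keys always return true; a query for an uninserted key returns true only in the rare event that all of its k positions were set by other keys.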

Michael Mitzenmacher

2001-01-01

377

This review summarizes the research progress made so far on electret air filters used for separation of airborne particles from complex air streams. A set of different categories of these filters are delineated and the methods of manufacturing of these filters are described. The principles and mechanisms of filtration and modeling of pressure drop by these filters are analyzed. The

Rashmi Thakur; Dipayan Das; Apurba Das

2012-01-01

378

HEPA filter dissolution process

A process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

Brewer, Ken N. (Arco, ID); Murphy, James A. (Idaho Falls, ID)

1994-01-01

379

Recirculating electric air filter

An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

Bergman, W.

1985-01-09

380

HEPA filter dissolution process

This invention is comprised of a process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

Brewer, K.N.; Murphy, J.A.

1992-12-31

381

HEPA filter dissolution process

A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.

Brewer, K.N.; Murphy, J.A.

1994-02-22

382

Inverse Variation: Step By Step Lesson

NSDL National Science Digital Library

This step by step lesson from the Math Ops website explains inverse variation. Students can read the text or follow along as it is read out loud. The lesson includes nine slides which explain what an inverse variation equation is, and include several real world examples of this type of mathematical model.

2012-08-29

383

PAF Changes Step-By-Step Procedure

Proposal Management Reviewer PAF Changes Step-By-Step Procedure Last updated: 4/1/2013 1 of 6 http://eresearch.umich.edu Reviewer - PAF Changes In the state of Unit Review, a Reviewer or a Reviewer Who Can Sign (Approver) can make and submit their own changes or request that the project team make and submit the changes. When

Shyy, Wei

384

Acknowledge Changes Step by Step Procedure

Proposal Management Reviewer Acknowledge Changes Step by Step Procedure Last updated: 03/20/09 1 of 4 http://eresearch.umich.edu Acknowledge Changes Acknowledge is used to confirm that you are aware that PAF changes have been made after your approval, and that you do not wish to suspend your prior

Shyy, Wei

385

Properties of multilayer filters

NASA Technical Reports Server (NTRS)

New methods were investigated of using optical interference coatings to produce bandpass filters for the spectral region 110 nm to 200 nm. The types of filter are: triple cavity metal dielectric filters; all dielectric reflection filters; and all dielectric Fabry Perot type filters. The latter two types use thorium fluoride and either cryolite films or magnesium fluoride films in the stacks. The optical properties of the thorium fluoride were also measured.

Baumeister, P. W.

1973-01-01

386

Constrained Optimization using GA Proposed Optimization Flow

Constrained Optimization using GA Proposed Optimization Flow for Nano-CMOS VCO Design and Polynomial Modeling of VCO Fast Analog Design Optimization using Regression based Modeling and Genetic manual design step. At this stage a netlist is sufficient for the design flow. 50nm current starved VCO

Mohanty, Saraju P.

387

A semblance-guided median filter

A slowness selective median filter based on information from a local set of traces is described and implemented. The filter is constructed in two steps, the first being an estimation of a preferred slowness and the second, the selection of a median or trimmed mean value to replace the original data point. A symmetric window of traces defining the filter aperture is selected about each trace to be filtered and the filter applied repeatedly to each time point. The preferred slowness is determined by scanning a range of linear moveouts within the user-specified slowness passband. Semblance is computed for each trial slowness and the preferred slowness selected from the peak semblance value. Data points collected along this preferred slowness are then sorted from lowest to highest and in the case of a pure median filter, the middle point(s) selected to replace the original data point. This approach may be used as a velocity filter to estimate coherent signal within a specified slowness passband and reject coherent energy outside this range. For applications of this type, other velocity estimators may be used in place of the authors' semblance measure to provide improved velocity estimation and better filter performance. The filter aperture may also be extended to provide improved velocity estimation, but will result in additional lateral smearing of signal. The authors show that, in addition to a velocity filter, their approach may be used to improve signal-to-noise ratios in noisy data. The median filter tends to suppress the amplitude of random background noise and semblance weighting may be used to reduce the amplitude of background noise further while enhancing coherent signal.
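The two-step construction described above (a semblance scan for a preferred slowness, then a median along that moveout) can be sketched as follows. The trace geometry, integer-sample slownesses, and aperture below are simplifying assumptions, not details from the paper:

```python
import numpy as np

def semblance_median(data, slownesses, half_aperture=2):
    """Sketch of the two-step filter: for each sample, pick the slowness with
    peak semblance, then replace the sample by the median along that moveout.
    data: (n_traces, n_times) array; slownesses: integer sample shifts/trace."""
    n_traces, n_t = data.shape
    out = data.copy()
    offsets = np.arange(-half_aperture, half_aperture + 1)
    for j in range(half_aperture, n_traces - half_aperture):
        for t in range(n_t):
            best_sem, best_vals = -1.0, None
            for s in slownesses:
                idx = t + s * offsets          # times along the trial moveout
                if idx.min() < 0 or idx.max() >= n_t:
                    continue
                vals = data[j + offsets, idx]
                num = vals.sum() ** 2
                den = len(vals) * (vals ** 2).sum() + 1e-12
                sem = num / den                # semblance in [0, 1]
                if sem > best_sem:
                    best_sem, best_vals = sem, vals
            if best_vals is not None:
                out[j, t] = np.median(best_vals)
    return out
```

A plane wave whose moveout lies inside the scanned passband is passed essentially unchanged, while incoherent samples are replaced by the median of their best-aligned neighbors.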

Reiter, E.C. (New England Research Geoscience, Quincy, MA (United States)); Toksoz, M.N. (Massachusetts Inst. of Tech., Cambridge (United States)); Purdy, G.M. (Woods Hole Oceanographic Institution, MA (United States))

1993-01-01

388

NASA Astrophysics Data System (ADS)

Data Assimilation techniques have gained increasing popularity in the atmospheric, oceanographic and geophysics communities over the last two decades. Suboptimal algorithms of the Kalman filter approach, which is optimal in the linear case but completely infeasible for large-scale problems, have been developed to solve the data assimilation problems in these fields of application. The reduced rank square root (RRSQRT) filter is a special formulation of the Kalman filter for assimilation of data in large scale models. In this formulation, the covariance matrix of the model state is expressed in a small number of modes, stored in a lower rank square root matrix. The RRSQRT algorithm includes a reduction part that reduces the number of modes if it becomes too large in order to ensure that the filter problem is feasible. In the classical implementation some sort of normalisation of the square-root matrix is required in the reduction step when variables of different scales are considered in the model. A new and more appropriate truncation procedure based on the Lanczos decomposition algorithm is presented. According to this approach one completely avoids normalisation problems, and, even more, the new truncation step needs much less computational time than the original procedure. In addition, it includes a precision coefficient that can be tuned for specific applications depending on the trade-off between precision and computational load.
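The reduction step described above can be illustrated with a dense SVD standing in for the Lanczos decomposition used in the paper: the square-root matrix S (n states by q modes) is truncated to its r leading modes, so that S_r S_r^T is the best rank-r approximation of the covariance S S^T:

```python
import numpy as np

def truncate_sqrt(S, r):
    """Reduce an n x q covariance square root S to its r leading modes.
    A dense SVD stands in here for the paper's Lanczos decomposition."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :r] * sigma[:r]   # columns scaled by the leading singular values

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 5))   # toy square root: 6 states, 5 modes
S_r = truncate_sqrt(S, 3)         # keep 3 modes
```

Because the singular vectors are orthonormal, the retained modes need no extra normalisation step, which is the practical advantage the abstract emphasizes.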

Treebushny, D.; Madsen, H.

2003-04-01

389

Stepped frequency ground penetrating radar

A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
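The processing chain described above (stepped frequencies, mixing to DC, then a transform to recover range) can be illustrated for an ideal point target. The carrier, step size, and range below are arbitrary assumptions, and an inverse FFT maps the frequency-domain samples to a range profile:

```python
import numpy as np

c = 3e8                               # speed of light (m/s)
n_steps, df = 100, 1.5e6              # 100 steps of 1.5 MHz -> 150 MHz bandwidth
f = 1e9 + df * np.arange(n_steps)     # stepped transmit frequencies
R = 37.0                              # hypothetical target range (m)
tau = 2 * R / c                       # round-trip delay

# Ideal point-target response after mixing to DC: one phasor per frequency step.
echo = np.exp(-2j * np.pi * f * tau)

# Inverse FFT turns the stepped-frequency samples into a range profile.
profile = np.abs(np.fft.ifft(echo))
bin_size = c / (2 * n_steps * df)     # range resolution per bin: here 1.0 m
est_R = np.argmax(profile) * bin_size
```

The peak bin of the profile gives the target range, and (as the abstract notes) the peak amplitude carries cross-section information.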

Vadnais, Kenneth G. (Ojai, CA); Bashforth, Michael B. (Buellton, CA); Lewallen, Tricia S. (Ventura, CA); Nammath, Sharyn R. (Santa Barbara, CA)

1994-01-01

390

2-Step IMAT and 2-Step IMRT in three dimensions

In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can adapt arbitrary diameters with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary, the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the root of the integral of the squared difference of the dose in the target volume and a reference dose. For the problem, two local minima exist. Therefore, as a secondary criteria, the magnitude of hot and cold spots were taken into account. As a result, the solution with a larger segment width was recommended. From plane to plane for varying radii of PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight for one segment shall be used for all planes (respectively leaf pairs), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of different radii of an OAR along the patient axis. The method is discussed for some classes of tumor/organ at risk combinations. Noncylindrically symmetric cases were treated exemplarily. 
The product of width and weight of the additional segment as well as the integral across the segment profile was demonstrated to be an important value. This product was up to a factor of 3 larger than in the 2-D case. Even in three dimensions, the optimized 2-Step IMAT increased the homogeneity of the dose distribution in the PTV profoundly. Rules for adaptation to varying target-OAR combinations were deduced. It can be concluded that 2-Step IMAT and 2-Step IMRT are also applicable in three dimensions. In the majority of cases, weights between 0.5 and 2 will occur for the additional segment. The width-weight product of the second segment is always smaller than the normalized radius of the OAR. The width-weight product of the additional segment is strictly connected to the relevant diameter of the organ at risk and the target volume. The derived formulas can be helpful to adapt an IMRT plan to altering target shapes.

Bratengeier, Klaus [Klinik und Poliklinik fuer Strahlentherapie, Universitaet Wuerzburg, Josef-Schneider-Str. 11, D-97080 Wuerzburg (Germany)

2005-12-15

391

Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

NASA Technical Reports Server (NTRS)

An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

2012-01-01

392

A FILTER METHOD WITH UNIFIED STEP COMPUTATION FOR ...

“infeasible” interior-point methods avoid this defect to some degree. … Research … solving multiple quadratic programs during each iteration. … are linear and quadratic model approximations, respectively, of the objective function f for a given

2013-05-09

393

Reduction of turbidity by a coal-aluminium filter

Coal-aluminium granular filters successfully reduce turbidity in low-alkalinity raw waters to less than 1.0 ntu, without a coagulation step or external coagulant aids. Data from experiments conducted with control and pilot-plant filters show the viability of the process and indicate the turbidity and retention mechanisms. Operational characteristics of the process are similar to those of a conventional filter. The costs of the coal-aluminium process compare favourably with those of traditional treatment.

Collins, A.G.; Johnson, R.L.

1985-06-01

394

NASA Astrophysics Data System (ADS)

An improvement to the wavelet-modified Optimal Trade-off Maximum Average Correlation Height (OT-MACH) filter with the use of the Rayleigh distribution filter is proposed. The Rayleigh distribution filter is applied to the OT-MACH filter to provide a sharper low frequency cut-off than the Laplacian of Gaussian based wavelet filter that has been previously reported to enhance OT-MACH filter performance. Filters are trained using a 3D CAD model and tested on the corresponding real target object in high clutter environments acquired from a Forward Looking Infra Red (FLIR) sensor. Comparative evaluation of the performance of the original, wavelet and Rayleigh modified OT-MACH filter is reported for the recognition of the target objects present within the thermal infra-red image data set.

Alkandri, Ahmad; Bangalore, Nagachetan; Gardezi, Akber; Birch, Philip; Young, Rupert; Chatwin, Chris

2012-04-01

395

Low-complexity wavelet filter design for image compression

NASA Technical Reports Server (NTRS)

Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

Majani, E.

1994-01-01

396

Analysis Scheme in the Ensemble Kalman Filter

This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that then is
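The perturbed-observation analysis step argued for in this paper can be sketched as follows. The ensemble size, observation operator, and noise variance are illustrative choices; each ensemble member is updated against its own randomly perturbed copy of the observation:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(E, y, H, R_var):
    """EnKF analysis step with perturbed observations (a sketch of the scheme
    the paper advocates). E: n x N ensemble, y: observations, H: obs operator,
    R_var: observation error variance."""
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)          # forecast anomalies
    HE = H @ E
    HA = HE - HE.mean(axis=1, keepdims=True)
    Pf_HT = A @ HA.T / (N - 1)                     # sample cross-covariance
    S = HA @ HA.T / (N - 1) + R_var * np.eye(len(y))
    K = Pf_HT @ np.linalg.inv(S)                   # Kalman gain
    # Treat observations as random variables: perturb each member's copy of y
    # with the correct statistics, as the paper requires.
    Y = y[:, None] + np.sqrt(R_var) * rng.standard_normal((len(y), N))
    return E + K @ (Y - HE)

# Tiny example: 2-state system, observe the first component.
E = rng.standard_normal((2, 500)) + np.array([[1.0], [0.0]])
H = np.array([[1.0, 0.0]])
y = np.array([2.0])
Ea = enkf_analysis(E, y, H, 0.25)
```

Without the random perturbations of y, the analysis ensemble spread would be systematically too small, which is exactly the point the paper makes.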

Gerrit Burgers; Peter Jan van Leeuwen; Geir Evensen

1998-01-01

397

A new minimum mean square error optimal linear estimation problem is considered where no direct measurement of the output to be estimated is available. The optimal filter, predictor, and smoother are derived for this case where outputs must be inferred from available measurements. The results cover the usual Wiener or Kalman filtering problems and also optimal deconvolution estimation problems. However,

M. J. Grimble

1994-01-01

398

Canonical Signed Digit Study. Part 2; FIR Digital Filter Simulation Results

NASA Technical Reports Server (NTRS)

A Finite Impulse Response (FIR) digital filter using Canonical Signed-Digit (CSD) number representation for the coefficients has been studied, and its computer simulation results are presented here. The Minimum Mean Square Error (MMSE) criterion is employed to optimize filter coefficients into the corresponding CSD numbers. To further improve the coefficient optimization process, an extra non-zero bit is added for any filter coefficient exceeding 1/2. This technique improves the frequency response of the filter with almost no increase in filter complexity. The simulation results show outstanding performance in the bit-error-rate (BER) curve for all CSD-implemented digital filters included in this presentation material.
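The CSD recoding underlying this study can be illustrated with a standard conversion routine (a generic sketch, not the authors' coefficient-optimization code). Each coefficient becomes a string of digits in {-1, 0, +1} with no two adjacent non-zero digits, so each multiplication reduces to a few shift-and-add or shift-and-subtract operations:

```python
def to_csd(n):
    """Recode an integer into canonical signed-digit form:
    digits in {-1, 0, +1}, LSB first, no two adjacent non-zero digits."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
        else:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            digits.append(d)
            n -= d            # after this, n is divisible by 4
        n //= 2
    return digits

def from_csd(digits):
    """Reconstruct the integer from its CSD digits (LSB first)."""
    return sum(d << i for i, d in enumerate(digits))
```

For example, 7 recodes as 8 - 1 (digits -1, 0, 0, +1), needing one shift and one subtraction instead of three additions, which is why CSD coefficients reduce FIR filter hardware complexity.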

Kim, Heechul

1996-01-01

399

NASA Technical Reports Server (NTRS)

Many microgravity space-science experiments require active vibration isolation, to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choices of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations that were noted during the design process are presented, along with explanations based on the existent theory for such problems.

Hampton, R. David; Whorton, Mark S.

2000-01-01

400

Adaptive order-statistic filters for sea mine classification

NASA Astrophysics Data System (ADS)

This paper presents a novel formulation of an adaptive order-statistic filter, and describes the performance enhancements it provides to an automatic sea mine classification system. Non-linear filters based on order statistics (median, 'largest-of,' etc.) have been shown to be effective in suppressing noise with long, heavy-tailed density functions (e.g., Laplacian), and they have also been successfully used to suppress 'salt-and-pepper' noise in image processing, as well as transients and Rayleigh-distributed speckle noise in ultrasound imaging. Such 'order-statistic' filters can be adaptively generalized and optimized, for a given data set, by finding the weights that, operating on ordered data samples, minimize filter output power while preserving signals that are constant within the filter window. Morphological filters can also be optimized in this manner, since they have been shown to consist of combinations of order-statistic filters. A new adaptive order-statistic filter formulation, enabling the preservation of signals that are not constant within the filter window, has been developed and its efficacy demonstrated with side-scan sonar imagery data. Using these filters as a non-linear 'corrector' of the outputs of the linear clutter-filtering stage of a sea mine classification system reduced the number of false alarms by an order of magnitude.

Fernandez, Manuel F.; Aridgides, Tom

1998-09-01

401

A superior edge preserving filter with a systematic analysis

NASA Technical Reports Server (NTRS)

A new, adaptive, edge preserving filter for use in image processing is presented. It had superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels have been accumulated. Rather than simply compare the visual results of processing with this operator to other filters, some approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the performance of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
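The aggregation rule described above can be sketched directly; the 4-connectivity, tie-breaking, and K value below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def contiguous_k_average(img, i, j, K=5):
    """Sketch of the contiguous K-average: grow a cluster from pixel (i, j)
    by repeatedly adding the contiguous pixel closest to the cluster mean,
    then return the mean of the clustered pixels."""
    cluster = {(i, j)}
    total = float(img[i, j])
    while len(cluster) < K:
        mean = total / len(cluster)
        # Candidate pixels 4-adjacent to the current cluster.
        cands = set()
        for (a, b) in cluster:
            for (da, db) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                p = (a + da, b + db)
                if (0 <= p[0] < img.shape[0] and 0 <= p[1] < img.shape[1]
                        and p not in cluster):
                    cands.add(p)
        if not cands:
            break
        best = min(cands, key=lambda p: abs(float(img[p]) - mean))
        cluster.add(best)
        total += float(img[best])
    return total / len(cluster)
```

On a step edge the growing cluster stays on its own side of the edge (the far-side pixels are never closest to the cluster mean), which is the mechanism behind the filter's edge preservation.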

Holladay, Kenneth W.; Rickman, Doug

1991-01-01

402

Multiple model cardinalized probability hypothesis density filter

NASA Astrophysics Data System (ADS)

The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

Georgescu, Ramona; Willett, Peter

2011-09-01

403

Gabor filter based fingerprint image enhancement

NASA Astrophysics Data System (ADS)

Fingerprint recognition technology has become the most reliable biometric technology due to its uniqueness and invariance, making it the most convenient and reliable technique for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security. Meanwhile, the fingerprint preprocessing algorithm plays an important part in an Automatic Fingerprint Identification System. This article introduces the general steps in fingerprint recognition technology, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with Matlab 6.5 as the development tool. The result shows that the Gabor filter is effective in fingerprint image enhancement.
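A generic version of the Gabor enhancement step described above can be sketched as follows (an illustrative kernel and convolution, not the article's Matlab code). In a real system the orientation theta and ridge frequency freq are estimated locally from the fingerprint; here they are given:

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=2.5, size=11):
    """Real-valued Gabor kernel tuned to ridge orientation theta (radians)
    and spatial frequency freq (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the ridge frame.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()            # zero mean: flat regions map to zero

def enhance(img, theta, freq):
    """Direct (valid-region) convolution of the image with the Gabor kernel."""
    k = gabor_kernel(theta, freq)
    h = k.shape[0] // 2
    out = np.zeros_like(img, dtype=float)
    for i in range(h, img.shape[0] - h):
        for j in range(h, img.shape[1] - h):
            out[i, j] = np.sum(img[i - h:i + h + 1, j - h:j + h + 1] * k)
    return out
```

Ridges matching the kernel's orientation and frequency are strongly reinforced, while structure at the orthogonal orientation is suppressed, which is what cleans up noisy ridge patterns.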

Wang, Jin-Xiang

2013-03-01

404

A method for improving time-stepping numerics

NASA Astrophysics Data System (ADS)

In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
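The single-line nature of the modification is easy to see in code. The sketch below integrates a simple oscillator dz/dt = iz (whose exact solution has constant amplitude) with the leapfrog scheme; setting alpha = 1 recovers the classical RA filter, while alpha near 0.53 gives the RAW filter. The step size, filter coefficient, and step count are illustrative choices:

```python
import numpy as np

def leapfrog(n_steps, dt=0.2, nu=0.2, alpha=1.0):
    """Leapfrog integration of dz/dt = i*z with the Robert-Asselin-Williams
    filter. alpha = 1 is the classical RA filter; alpha ~ 0.53 is RAW."""
    f = lambda z: 1j * z
    z_prev = 1.0 + 0j                  # filtered value at step n-1
    z_curr = np.exp(1j * dt)           # exact first step
    for _ in range(n_steps):
        z_next = z_prev + 2 * dt * f(z_curr)          # leapfrog step
        d = 0.5 * nu * (z_prev - 2 * z_curr + z_next) # filter displacement
        z_prev = z_curr + alpha * d                   # filter the current level
        z_curr = z_next - (1 - alpha) * d             # RAW also touches the new level
    return z_curr

z_ra = leapfrog(500, alpha=1.0)    # Robert-Asselin
z_raw = leapfrog(500, alpha=0.53)  # RAW (Williams 2011)
```

After many steps the RA-filtered solution has lost a large fraction of its amplitude through non-physical damping, while the RAW-filtered solution stays close to the exact unit amplitude, illustrating the improvement the abstract describes.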

Williams, P. D.

2012-04-01

405

NSDL National Science Digital Library

This is to practice and review single step equations. Have fun. Complete the following two sites. Follow the directions given for each site. One-Step Equations Add/Subtract One-Step Equations Mult/Division When you have finished the sites above, enter equation buster and work through level one. Equation buster ...

Ms. Reddish

2011-09-30

406

Cordierite silicon nitride filters

The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions and reporting the results. In Phase I, Cordierite Silicon Nitride (CSN) was developed and tested for permeability and strength. Target values for each of these parameters were established early in the program. The values were met by the material development effort in Phase I. The crossflow filter design effort proceeded by developing a macroscopic design based on required surface area and estimated stresses. Then the thermal and pressure stresses were estimated using finite element analysis. In Phase II of this program, the filter manufacturing technique was developed, and the manufactured filters were tested. The technique developed involved press-bonding extruded tiles to form a filter, producing a monolithic filter after sintering. Filters manufactured using this technique were tested at Acurex and at the Westinghouse Science and Technology Center. The filters did not delaminate during testing and operated at high collection efficiency with good cleanability. Further development in areas of sintering and filter design is recommended.

Sawyer, J.; Buchan, B. (Acurex Environmental Corp., Mountain View, CA (United States)); Duiven, R.; Berger, M. (Aerotherm Corp., Mountain View, CA (United States)); Cleveland, J.; Ferri, J. (GTE Products Corp., Towanda, PA (United States))

1992-02-01

407

Filter type gas sampler with filter consolidation

Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant.

Miley, Harry S. (219 Rockwood Dr., Richland, WA 99352); Thompson, Robert C. (5313 Phoebe La., West Richland, WA 99352); Hubbard, Charles W. (1900 Stevens, Apt. 526, Richland, WA 99352); Perkins, Richard W. (1413 Sunset, Richland, WA 99352)

1997-01-01

408

Filter type gas sampler with filter consolidation

Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, where after the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant. 5 figs.

Miley, H.S.; Thompson, R.C.; Hubbard, C.W.; Perkins, R.W.

1997-03-25

409

Step by Step tutorial Powder XRD Short-Arm

Step by Step tutorial for Powder XRD Short-Arm Data Collection. Creating the Parameter File. Step to Step 5 directly. 1. Click on the XRD wizard icon. Step 2. You will get the XRD Wizard program window

Meagher, Mary

410

Bayesian filtering in electronic surveillance

NASA Astrophysics Data System (ADS)

Fusion of passive electronic support measures (ESM) with active radar data enables tracking and identification of platforms in air, ground, and maritime domains. An effective multi-sensor fusion architecture adopts hierarchical real-time multi-stage processing. This paper focuses on the recursive filtering challenges. The first challenge is to achieve effective platform identification based on noisy emitter type measurements; we show that while optimal processing is computationally infeasible, a good suboptimal solution is available via a sequential measurement processing approach. The second challenge is to process waveform feature measurements that enable disambiguation in multi-target scenarios where targets may be using the same emitters. We show that an approach that explicitly considers the Markov jump process outperforms the traditional Kalman filtering solution.

Coraluppi, Stefano; Carthel, Craig

2012-06-01

411

NSDL National Science Digital Library

In this video segment adapted from ZOOM, cast members try to make the most effective water filter. They experiment with filtering dirty, salty water through different combinations of sand, gravel, and a cotton bandana.

2005-12-17

412

NASA Technical Reports Server (NTRS)

A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.

Nagle, H. T., Jr.

1972-01-01

413

40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

Code of Federal Regulations, 2010 CFR

... 2010-07-01 false PM sampling media (e.g., filters) preconditioning and...Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and...the following steps to prepare PM sampling media (e.g., filters) and equipment...

2010-07-01

414

Novel Backup Filter Device for Candle Filters

The currently preferred means of particulate removal from process or combustion gas generated by advanced coal-based power production processes is filtration with candle filters. However, candle filters have not shown the requisite reliability to be commercially viable for hot gas clean up for either integrated gasifier combined cycle (IGCC) or pressurized fluid bed combustion (PFBC) processes. Even a single candle failure can lead to unacceptable ash breakthrough, which can result in (a) damage to highly sensitive and expensive downstream equipment, (b) unacceptably low system on-stream factor, and (c) unplanned outages. The U.S. Department of Energy (DOE) has recognized the need to have fail-safe devices installed within or downstream from candle filters. In addition to CeraMem, DOE has contracted with Siemens-Westinghouse, the Energy & Environmental Research Center (EERC) at the University of North Dakota, and the Southern Research Institute (SRI) to develop novel fail-safe devices. Siemens-Westinghouse is evaluating honeycomb-based filter devices on the clean-side of the candle filter that can operate up to 870 C. The EERC is developing a highly porous ceramic disk with a sticky yet temperature-stable coating that will trap dust in the event of filter failure. SRI is developing the Full-Flow Mechanical Safeguard Device that provides a positive seal for the candle filter. Operation of the SRI device is triggered by the higher-than-normal gas flow from a broken candle. The CeraMem approach is similar to that of Siemens-Westinghouse and involves the development of honeycomb-based filters that operate on the clean-side of a candle filter. The overall objective of this project is to fabricate and test silicon carbide-based honeycomb failsafe filters for protection of downstream equipment in advanced coal conversion processes. 
The fail-safe filter, installed directly downstream of a candle filter, should have the capability for stopping essentially all particulate bypassing a broken or leaking candle while having a low enough pressure drop to allow the candle to be backpulse-regenerated. Forward-flow pressure drop should increase by no more than 20% because of incorporation of the fail-safe filter.

Bishop, B.; Goldsmith, R.; Dunham, G.; Henderson, A.

2002-09-18

415

Advanced CSS Layouts: Step by Step

NSDL National Science Digital Library

Most Web sites are designed with HTML tables, which can make layout an arduous task. Making sites that are accessible and standards-compliant requires a separation of markup and content, and CSS is the best way to accomplish this. This WebReference page by Rogelio Vizcaino Lizaola and Andy King offers a step-by-step tutorial on creating table-like CSS layouts that behave well with small window sizes and large fonts, while avoiding some of the bugs and problems discovered in other implementations. Target browsers include all of the generation five and greater browsers on both Windows and Macintosh platforms.

King, Andy.

416

According to an exemplary embodiment of the present disclosure, a system for removing matter from a filtering device includes a gas pressurization assembly. An element of the assembly is removably attachable to a first orifice of the filtering device. The system also includes a vacuum source fluidly connected to a second orifice of the filtering device.

Sellers, Cheryl L. (Peoria, IL); Nordyke, Daniel S. (Arlington Heights, IL); Crandell, Richard A. (Morton, IL); Tomlins, Gregory (Peoria, IL); Fei, Dong (Peoria, IL); Panov, Alexander (Dunlap, IL); Lane, William H. (Chillicothe, IL); Habeger, Craig F. (Chillicothe, IL)

2008-12-09

417

Practical Active Capacitor Filter

NASA Technical Reports Server (NTRS)

A method and apparatus is described that filters an electrical signal. The filtering uses a capacitor multiplier circuit where the capacitor multiplier circuit uses at least one amplifier circuit and at least one capacitor. A filtered electrical signal results from a direct connection from an output of the at least one amplifier circuit.

Shuler, Robert L., Jr. (Inventor)

2005-01-01

418

In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc. PMID:23599054
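
The local linear model behind the guided filter (within each window, output = a * guide + b) can be sketched in a few lines of NumPy. This is an illustrative gray-scale version, not the authors' reference implementation; the radius `r` and regularizer `eps` are arbitrary choices, and the box filter uses summed-area tables so the cost is independent of kernel size, as the abstract notes.

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)x(2r+1) window via summed-area tables:
    O(N) per image, independent of the window radius r."""
    xp = np.pad(x, r, mode='edge')
    c = np.zeros((xp.shape[0] + 1, xp.shape[1] + 1))
    c[1:, 1:] = xp.cumsum(axis=0).cumsum(axis=1)
    n = 2 * r + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / n**2

def guided_filter(I, p, r=4, eps=1e-2):
    """Edge-preserving smoothing of input p guided by image I (gray-scale)."""
    mI, mp = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mI * mp      # covariance of guide and input
    var_I = box(I * I, r) - mI * mI       # variance of the guide
    a = cov_Ip / (var_I + eps)            # local linear coefficient
    b = mp - a * mI
    return box(a, r) * I + box(b, r)      # average the overlapping windows
```

Self-guided use (`I` equal to `p`) acts as an edge-preserving smoother: in flat regions `a` goes to 0 and the output is a local mean, while near strong edges `a` stays near 1 and the edge survives.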

He, Kaiming; Sun, Jian; Tang, Xiaoou

2013-06-01

419

A low viscosity resin is delivered into a spent HEPA filter or other waste. The resin is introduced into the filter or other waste using a vacuum to assist in the mass transfer of the resin through the filter media or other waste.

Gates-Anderson, Dianne D. (Union City, CA); Kidd, Scott D. (Brentwood, CA); Bowers, John S. (Manteca, CA); Attebery, Ronald W. (San Lorenzo, CA)

2003-01-01

420

Denoising jet engine gas path measurements using nonlinear filters

Traditionally, linear filters have been used to smooth time series of gas path measurements before performing fault detection and isolation. However, linear filters can smooth out sharp trend shifts in the signal and are also not good at removing outliers. Since most fault detection and isolation algorithms are optimized for Gaussian noise, they can show performance degradation when outliers are
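
The contrast the abstract draws can be illustrated by comparing a moving average with a moving median on toy data (not the paper's engine measurements) containing both a sharp trend shift and a single outlier:

```python
import numpy as np

def moving_mean(x, k=5):
    """Linear smoother: centered moving average (edges replicated)."""
    r = k // 2
    xp = np.pad(x, r, mode='edge')
    return np.convolve(xp, np.ones(k) / k, mode='valid')

def moving_median(x, k=5):
    """Nonlinear smoother: centered moving median (edges replicated)."""
    r = k // 2
    xp = np.pad(x, r, mode='edge')
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

signal = np.concatenate([np.zeros(20), np.ones(20)])  # sharp trend shift
signal[10] = 8.0                                      # a single outlier
```

The median rejects the outlier entirely and keeps the step sharp, while the linear average smears the outlier into its neighbors and rounds off the trend shift.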

Rajeev Verma; Ranjan Ganguli

2005-01-01

421

Kalman Filtering In Estimation of Multi Sensor Rain Rates

An application is shown of Kalman filtering theory to quantitative assessment of rain rates at several scales. Noisy measurements from sensors with different resolutions are merged together to obtain optimal estimates. A scale-recursive filter is adopted accounting for both the variability of the process and the reliability of measurements. Rainfall variability in space is modeled through the theory of random
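
The measurement-merging idea can be shown with a scalar Kalman update that fuses readings from sensors of different reliability. This toy sketch uses made-up rain rates and noise variances and omits the spatial scale recursion; it only shows how the gain weights each sensor by its noise:

```python
def kalman_update(est, var, z, r):
    """Fuse a prior (est, var) with a measurement z of noise variance r."""
    k = var / (var + r)                 # Kalman gain: prior trust vs. sensor noise
    return est + k * (z - est), (1.0 - k) * var

# vague prior rain-rate guess (mm/h), then two sensors of different quality
est, var = 0.0, 1e6
est, var = kalman_update(est, var, 10.0, r=4.0)    # accurate rain gauge
est, var = kalman_update(est, var, 16.0, r=16.0)   # noisier radar pixel
```

After both updates the estimate lies between the two readings but closer to the gauge, and the posterior variance is smaller than either sensor's alone.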

D. Bocchiola; D. B. McLaughlin; D. Entekhabi

2002-01-01

422

Birefringent filter design by use of a modified genetic algorithm

A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels.
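
A minimal genetic algorithm of the kind described can be sketched as follows. The objective here is a stand-in (it is not the sidelobe level of a real birefringent filter), and the population size, mutation scale, and generation count are arbitrary:

```python
import numpy as np

def ga_minimize(fitness, dim, pop=30, gens=100, sigma=0.1, seed=0):
    """Tiny real-coded GA: elitist selection plus Gaussian mutation."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.0, np.pi, size=(pop, dim))   # e.g. orientation angles
    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        elite = P[np.argsort(f)[:pop // 2]]        # keep the better half
        children = elite + rng.normal(0.0, sigma, elite.shape)
        P = np.vstack([elite, children])           # elitism: the best survive
    f = np.array([fitness(x) for x in P])
    return P[np.argmin(f)], f.min()

# stand-in objective: pretend the "sidelobe level" is minimized at angles = pi/4
sidelobe = lambda x: np.sum((x - np.pi / 4) ** 2)
best, best_f = ga_minimize(sidelobe, dim=6)
```

Because the elite half is carried over unchanged, the best fitness is monotonically non-increasing across generations.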

Wen, Mengtao; Yao, Jianping

423

Facial Expression Recognition Analysis with Multi-Scale Filter

NASA Astrophysics Data System (ADS)

The design of filters is the key step of facial expression extraction. The frequency and orientation of the filters can simulate those of the human visual system, and they are particularly appropriate for texture representation and discrimination. The paper presents a wavelet filter with 3 frequencies and 8 orientations. It can extract features from low-quality facial expression images as required, and it is robust for automatic facial expression recognition. Experimental results show that the proposed filter achieves excellent average recognition rates when applied to a facial expression recognition system.

Ou, Jun

424

GLOBAL CONVERGENCE OF SLANTING FILTER METHODS FOR ...

inexact restoration method, we prove stationarity of all accumulation points of the sequence. Key words. ... nance, borrowed from multi-criteria optimization. A filter algorithm ...... Below we explain what we mean by "solving approximately". Given z ∈ X ..... A trust region method based on interior point techniques for nonlinear ...

2006-10-24

425

Efficient and reliable schemes for nonlinear diffusion filtering

Nonlinear diffusion filtering is usually performed with explicit schemes. They are only stable for very small time steps, which leads to poor efficiency and limits their practical use. Based on a recent discrete nonlinear diffusion scale-space framework we present semi-implicit schemes which are stable for all time steps. These novel schemes use an additive operator splitting (AOS), which guarantees
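
The stability difference can be seen in a 1-D linear sketch (linear rather than nonlinear diffusion, and a dense solve instead of the paper's efficient AOS tridiagonal solver): the semi-implicit step solves (I - tau*A) u_new = u and remains stable for any time step tau.

```python
import numpy as np

def laplacian_1d(n):
    """1-D Laplacian with reflecting (Neumann) boundaries."""
    A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0
    return A

def semi_implicit_step(u, tau):
    """Solve (I - tau*A) u_new = u; unconditionally stable in tau."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - tau * laplacian_1d(n), u)

def explicit_step(u, tau):
    """u_new = (I + tau*A) u; stable only for small tau (tau <= 1/2 here)."""
    return u + tau * (laplacian_1d(len(u)) @ u)
```

With tau = 5 the explicit iteration amplifies high-frequency modes and blows up, while the semi-implicit step smooths the signal and preserves its average gray value.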

Joachim Weickert; Bart M. Ter Haar Romeny; Max A. Viergever

1998-01-01

426

Computer-aided design of recursive digital filters

A practical method is described for designing recursive digital filters with arbitrary, prescribed magnitude characteristics. The method uses the Fletcher-Powell optimization algorithm to minimize a square-error criterion in the frequency domain. A strategy is described whereby stability and minimum-phase constraints are observed, while still using the unconstrained optimization algorithm. The cascade canonic form is used, so that the resultant filters
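
The square-error criterion over a frequency grid can be sketched for a cascade of second-order (canonic) sections. This shows only the objective evaluation; the Fletcher-Powell minimization itself and the stability/minimum-phase safeguards described in the abstract are omitted:

```python
import numpy as np

def cascade_magnitude(sections, w):
    """|H(e^{jw})| for a cascade of biquads (a1, a2, b1, b2):
    H(z) = prod (1 + a1 z^-1 + a2 z^-2) / (1 + b1 z^-1 + b2 z^-2)."""
    zinv = np.exp(-1j * w)
    H = np.ones_like(zinv)
    for a1, a2, b1, b2 in sections:
        H *= (1 + a1 * zinv + a2 * zinv**2) / (1 + b1 * zinv + b2 * zinv**2)
    return np.abs(H)

def square_error(sections, w, target):
    """Frequency-domain squared-error criterion to be minimized."""
    return np.sum((cascade_magnitude(sections, w) - target) ** 2)
```

An all-zero-coefficient section is the identity filter (error zero against a flat target), while a section with a1 = 1 gives H(z) = 1 + z^-1, whose magnitude falls off toward the Nyquist frequency.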

KENNETH STEIGLITZ

1970-01-01

427

Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

NASA Technical Reports Server (NTRS)

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, since the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
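
The blending idea can be sketched in scalar form. This is a simplified stand-in for the paper's method, with a made-up Gaussian confidence weight derived from the normalized measurement residual:

```python
import numpy as np

def tuned_estimate(x_unc, lo, hi, residual, residual_std):
    """Blend the unconstrained Kalman estimate with its clipped (constrained)
    version. When the residual agrees with its theoretical statistics, trust
    the unconstrained, optimal filter; otherwise lean on the heuristic
    inequality constraint [lo, hi]."""
    x_con = np.clip(x_unc, lo, hi)         # state inequality constraint
    z = abs(residual) / residual_std       # normalized measurement residual
    confidence = np.exp(-0.5 * z**2)       # 1.0 when the residual is nominal
    return confidence * x_unc + (1.0 - confidence) * x_con
```

With a nominal residual the estimate follows the unconstrained filter even slightly outside the bounds; with a large residual it is pulled to the constraint boundary.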

Simon, Dan; Simon, Donald L.

2005-01-01

428

Regenerative particulate filter development

NASA Technical Reports Server (NTRS)

Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

1972-01-01

429

Compact planar microwave blocking filters

NASA Technical Reports Server (NTRS)

A compact planar microwave blocking filter includes a dielectric substrate and a plurality of filter unit elements disposed on the substrate. The filter unit elements are interconnected in a symmetrical series cascade with filter unit elements being organized in the series based on physical size. In the filter, a first filter unit element of the plurality of filter unit elements includes a low impedance open-ended line configured to reduce the shunt capacitance of the filter.

U-Yen, Kongpop (Inventor); Wollack, Edward J. (Inventor)

2012-01-01

430

NASA Technical Reports Server (NTRS)

In a closed loop control system that governs the movement of an actuator, a filter is provided that attenuates the oscillations generated by the actuator when the actuator is at a resonant frequency. The filter is preferably coded into the control system and includes the following steps: sensing the position of the actuator with an LVDT, and sensing the position of the motor that drives the actuator through a gear train. When the actuator is at a resonant frequency, a lag is applied to the LVDT signal, which is then combined with the motor position signal to form a combined signal in which the oscillations generated by the actuator are attenuated. The control system then controls on this combined signal. This arrangement prevents the amplified resonance present on the LVDT signal from causing control instability, while retaining the steady-state accuracy associated with the LVDT signal. It is also a characteristic of this arrangement that the signal attenuation will always coincide with the load resonance frequency of the system, so that variations in the resonance frequency will not reduce the effectiveness of the filter.
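
One way to realize the described combination digitally is a complementary blend: low-pass (lag) the LVDT signal to suppress the resonance, and take the high-frequency content from the motor position so the loop stays responsive. This is an illustrative reconstruction, not the patented implementation; the lag coefficient is arbitrary.

```python
def lag(prev, x, alpha=0.1):
    """First-order discrete lag (low-pass): prev + alpha*(x - prev)."""
    return prev + alpha * (x - prev)

def combine(lvdt, motor, alpha=0.1):
    """Feed back lagged LVDT (steady-state accuracy) plus the motor
    signal's high-frequency part (resonance-free dynamics)."""
    out, l_lvdt, l_motor = [], lvdt[0], motor[0]
    for v, m in zip(lvdt, motor):
        l_lvdt = lag(l_lvdt, v, alpha)
        l_motor = lag(l_motor, m, alpha)
        out.append(l_lvdt + (m - l_motor))   # complementary combination
    return out
```

At steady state the lagged motor term cancels the motor signal, so the combined feedback settles on the LVDT's reading, preserving the steady-state accuracy the text describes.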

Evans, Paul S. (Inventor)

2001-01-01

431

Stratified Filtered Sampling in Stochastic Optimization

's advantages with a problem in asset / liability management for an insurance company. Keywords: variance the decision-maker's attitude towards risk, as Section 5 will demonstrate.) An asset management example and selling of assets) at the start of each year. The set of possible buying and selling decisions

Mitchell, John E.

432

Ceramic fiber filter technology

Fibrous filters have been used for centuries to protect individuals from dust, disease, smoke, and other gases or particulates. In the 1970s and 1980s ceramic filters were developed for filtration of hot exhaust gases from diesel engines. Tubular, or candle, filters have been made to remove particles from gases in pressurized fluidized-bed combustion and gasification-combined-cycle power plants. Very efficient filtration is necessary in power plants to protect the turbine blades. The limited lifespan of ceramic candle filters has been a major obstacle in their development. The present work is focused on forming fibrous ceramic filters using a papermaking technique. These filters are highly porous and therefore very lightweight. The papermaking process consists of filtering a slurry of ceramic fibers through a steel screen to form paper. Papermaking and the selection of materials will be discussed, as well as preliminary results describing the geometry of papers and relative strengths.

Holmes, B.L.; Janney, M.A.

1996-06-01

433

NSDL National Science Digital Library

In this lesson activity students use nonstandard units (baby steps) to measure lengths of different types of "steps" (giant, regular, umbrella, scissor, wooden-soldier, and backwards steps). Once each student gathers this data, they display it on a bar graph. Then the class discusses the data and compares graphs among students. A student worksheet for data collection is included in PDF format.

Helene Silverman

2008-01-01

434

Aircraft Recirculation Filter for Air-Quality and Incident Assessment

The current research examines the possibility of using recirculation filters from aircraft to document the nature of air-quality incidents on aircraft. These filters are highly effective at collecting solid and liquid particulates. Identification of engine oil contaminants arriving through the bleed air system on the filter was chosen as the initial focus. A two-step study was undertaken. First, a compressor/bleed air simulator was developed to simulate an engine oil leak, and samples were analyzed with gas chromatograph-mass spectrometry. These samples provided a concrete link between tricresyl phosphates and a homologous series of synthetic pentaerythritol esters from oil and contaminants found on the sample paper. The second step was to test 184 used aircraft filters with the same gas chromatograph-mass spectrometry system; of that total, 107 were standard filters, and 77 were nonstandard. Four of the standard filters had both markers for oil, with the homologous series of synthetic pentaerythritol esters being the less common marker. It was also found that 90% of the filters had some detectable level of tricresyl phosphates. Of the 77 nonstandard filters, 30 had both markers for oil, a significantly higher percentage than for the standard filters. PMID:25641977

Eckels, Steven J.; Jones, Byron; Mann, Garrett; Mohan, Krishnan R.; Weisel, Clifford P.

2015-01-01

435

NASA Astrophysics Data System (ADS)

Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance to the nearest cell wall point from the point source. The giant step is assigned a time duration equal to the time for the RMS radius for a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of
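
The RMS-radius synchronization described above can be sketched as follows; the mean free path, step time, and walker counts are arbitrary illustration values. After N isotropic steps of length l, the RMS displacement is l*sqrt(N), so a giant step of length L is assigned the duration of N = (L/l)^2 ordinary steps.

```python
import numpy as np

def giant_step_time(L, mfp, t_step):
    """Duration for a giant step of length L: the time for the RMS radius
    of a diffusing burst (mfp * sqrt(N)) to reach L, i.e. N = (L/mfp)**2."""
    return (L / mfp) ** 2 * t_step

def rms_radius(n_steps, mfp=1.0, walkers=5000, seed=0):
    """Monte Carlo check: RMS endpoint displacement of isotropic
    3-D random walks with fixed step length mfp."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((walkers, n_steps, 3))
    v /= np.linalg.norm(v, axis=2, keepdims=True)   # unit step directions
    r = mfp * v.sum(axis=1)                         # endpoint of each walk
    return np.sqrt((r ** 2).sum(axis=1).mean())
```

A giant step of length 10 mean free paths thus replaces roughly 100 ordinary Monte Carlo steps, which is the source of the speed-up reported above.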