Science.gov

Sample records for optimized filtering step

  1. STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2013-03-01

    For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
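
    The grid-search idea is easy to sketch. The snippet below is a minimal illustration, not the STEPS implementation: the score columns (an XCorr-like score and a mass error in ppm), the threshold grids, and the decoy-based FDR estimate are assumptions standing in for the user-defined parameter ranges the tool actually accepts.

    ```python
    # Minimal sketch of a STEPS-style grid search over filtering thresholds.
    # Score columns, grids, and the decoy-based FDR estimate are illustrative.
    import itertools
    import numpy as np

    def count_confident_ids(score, ppm_error, decoy, score_min, ppm_max, fdr_target=0.01):
        """Apply one threshold combination; return the number of passing target
        identifications, or 0 if the decoy-estimated FDR exceeds the target."""
        passed = (score >= score_min) & (np.abs(ppm_error) <= ppm_max)
        n_target = int(np.sum(passed & ~decoy))
        n_decoy = int(np.sum(passed & decoy))
        fdr = n_decoy / max(n_target, 1)
        return n_target if fdr <= fdr_target else 0

    def grid_search(score, ppm_error, decoy,
                    score_grid=np.arange(1.5, 4.01, 0.1),
                    ppm_grid=np.arange(2.0, 20.1, 2.0)):
        """Test every parameter combination and keep the one that maximizes
        confident identifications at the target FDR."""
        best_n, best_params = 0, None
        for s, p in itertools.product(score_grid, ppm_grid):
            n = count_confident_ids(score, ppm_error, decoy, s, p)
            if n > best_n:
                best_n, best_params = n, (s, p)
        return best_n, best_params
    ```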

  2. Optimization of integrated polarization filters.

    PubMed

    Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J

    2014-10-01

    This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980

  3. Optimal compositions of soft morphological filters

    NASA Astrophysics Data System (ADS)

    Koivisto, Pertti T.; Huttunen, Heikki; Kuosmanen, Pauli

    1995-03-01

    Soft morphological filters form a large class of nonlinear filters with many desirable properties. However, few design methods exist for these filters, and in the existing methods the selection of the filter composition tends to be ad hoc and application specific. This paper demonstrates how optimization schemes, namely simulated annealing and genetic algorithms, can be employed in the search for soft morphological filter sequences realizing optimal performance in a given signal processing task. It also describes the modifications to the optimization schemes required to obtain sufficient convergence.

  4. Optimally (Distributional-)Robust Kalman Filtering

    E-print Network

    Ruckdeschel, Peter

    Optimally (Distributional-)Robust Kalman Filtering. Peter Ruckdeschel, Fraunhofer ITWM (Peter.Ruckdeschel@itwm.fraunhofer.de). Abstract: We present optimality results for robust Kalman filtering where robustness is understood in a distributional sense. AMS classifications: Primary 93E11; secondary 62F35. Keywords and phrases: robustness, Kalman filter, innovation outlier.

  5. Improving particle filters in rainfall-runoff models: Application of the resample-move step and the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, Douglas A.; De Keyser, Robin; De Lannoy, Gabriëlle J. M.; Giustarini, Laura; Matgen, Patrick; Pauwels, Valentijn R. N.

    2013-07-01

    The objective of this paper is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. The results indicate that the inclusion of the resample-move step in the standard particle filter and the use of an optimal importance density function in the Gaussian particle filter improve the effectiveness of particle filters. Moreover, an optimization of the forecast ensemble used in this study allowed for a better performance of the modified Gaussian particle filter compared to the particle filter with resample-move step.
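
    A bare-bones version of the resample-move step can be sketched as follows; this is a sketch under simplifying assumptions (systematic resampling, a single random-walk Metropolis move, and a user-supplied log-posterior), not the rainfall-runoff implementation evaluated in the paper.

    ```python
    # Minimal sketch of a resample-move step: systematic resampling followed by
    # a Metropolis-Hastings move on each particle. The random-walk proposal and
    # the user-supplied log_posterior are assumptions; the MCMC move used with
    # the rainfall-runoff model in the paper is more elaborate.
    import numpy as np

    def systematic_resample(particles, weights, rng):
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        indices = np.searchsorted(np.cumsum(weights), positions)
        return particles[np.minimum(indices, n - 1)]

    def resample_move(particles, weights, log_posterior, rng, step=0.1):
        """particles: (n, d) array, weights: (n,) normalized importance weights.
        Returns moved particles and reset (uniform) weights."""
        particles = systematic_resample(particles, weights, rng)
        logp = np.array([log_posterior(p) for p in particles])
        proposal = particles + step * rng.standard_normal(particles.shape)
        logp_new = np.array([log_posterior(p) for p in proposal])
        accept = np.log(rng.random(len(particles))) < (logp_new - logp)
        particles[accept] = proposal[accept]
        return particles, np.full(len(particles), 1.0 / len(particles))
    ```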

  6. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 C (1000 F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.

  7. Optimal frequency domain textural edge detection filter

    NASA Technical Reports Server (NTRS)

    Townsend, J. K.; Shanmugan, K. S.; Frost, V. S.

    1985-01-01

    An optimal frequency domain textural edge detection filter is developed and its performance evaluated. For the given model and filter bandwidth, the filter maximizes the amount of output image energy placed within a specified resolution interval centered on the textural edge. Filter derivation is based on relating textural edge detection to tonal edge detection via the complex low-pass equivalent representation of narrowband bandpass signals and systems. The filter is specified in terms of the prolate spheroidal wave functions translated in frequency. Performance is evaluated using the asymptotic approximation version of the filter. This evaluation demonstrates satisfactory filter performance for ideal and nonideal textures. In addition, the filter can be adjusted to detect textural edges in noisy images at the expense of edge resolution.

  8. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

    This work extends the idea of spectral optimization for the design of weighted median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows' theory. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of the filter's detail preserving ability and give non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
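
    For reference, the weighted median operation at the core of such filters can be sketched as below. The sign-coupling convention for negative weights is the standard one; the adaptive derivation of the weights from FIR coefficients described in the abstract is not shown.

    ```python
    # Minimal sketch of a weighted median filter. Negative weights are handled
    # by the common convention of negating the corresponding sample and using
    # the weight magnitude.
    import numpy as np

    def weighted_median(samples, weights):
        """Return the weighted median: the sample at which the cumulative
        sorted weight first reaches half of the total weight."""
        s = np.where(weights < 0, -samples, samples)   # sign-couple negative weights
        w = np.abs(weights)
        order = np.argsort(s)
        s, w = s[order], w[order]
        cdf = np.cumsum(w)
        return s[np.searchsorted(cdf, 0.5 * cdf[-1])]

    def wm_filter(signal, weights):
        """Slide the weighted median over the signal (edges handled by reflection)."""
        half = len(weights) // 2
        padded = np.pad(signal, half, mode="reflect")
        return np.array([weighted_median(padded[i:i + len(weights)], weights)
                         for i in range(len(signal))])
    ```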

  9. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

    This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resources lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.

  10. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
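
    The adaptive-step parameter search can be illustrated with a small sketch. The finite-difference gradient and the composite metric passed in as a callable are assumptions for illustration; they stand in for the feedback loop (correlation peak height and peak-to-sidelobe ratio) used in the actual filter generator.

    ```python
    # Minimal sketch of an adaptive-step gradient ascent over the three
    # OT-MACH parameters (alpha, beta, gamma). The score callable and the
    # finite-difference gradient are illustrative stand-ins.
    import numpy as np

    def optimize_otmach_params(score, params=np.array([0.3, 0.3, 0.3]),
                               step=0.05, shrink=0.5, grow=1.2,
                               eps=1e-3, iters=100):
        """score(alpha, beta, gamma) -> composite detection metric (higher is better)."""
        best = score(*params)
        for _ in range(iters):
            grad = np.zeros(3)
            for i in range(3):                      # finite-difference gradient
                bumped = params.copy()
                bumped[i] += eps
                grad[i] = (score(*bumped) - best) / eps
            trial = np.clip(params + step * grad, 0.0, 1.0)
            trial_score = score(*trial)
            if trial_score > best:                  # accept and lengthen the step
                params, best, step = trial, trial_score, step * grow
            else:                                   # reject and shorten the step
                step *= shrink
            if step < 1e-4:
                break
        return params, best
    ```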

  11. Dual Adaptive Filtering by Optimal Projection Applied to Filter Muscle Artifacts on EEG and Comparative Study

    PubMed Central

    Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
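
    The first DAFOP step rests on the common spatial pattern decomposition, which can be written as a generalized eigenvalue problem. The sketch below assumes the two band-filtered versions of the EEG are already available and omits the optimization of the frequency windows and the regression-based reconstruction.

    ```python
    # Minimal sketch of the common spatial pattern (CSP) step: covariances from
    # two frequency bands are jointly diagonalized by a generalized eigenvalue
    # decomposition. Band-pass filtering and the DAFOP regression step are omitted.
    import numpy as np
    from scipy.linalg import eigh

    def csp_components(x_low, x_high):
        """x_low, x_high: arrays of shape (channels, samples), the same EEG
        filtered in a low and a high frequency window. Returns spatial filters
        sorted so the first components are dominated by the low (cerebral) band."""
        c_low = np.cov(x_low)
        c_high = np.cov(x_high)
        eigvals, eigvecs = eigh(c_low, c_low + c_high)   # generalized eigenproblem
        order = np.argsort(eigvals)[::-1]                # largest low-band ratio first
        return eigvecs[:, order], eigvals[order]
    ```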

  12. Dual adaptive filtering by optimal projection applied to filter muscle artifacts on EEG and comparative study.

    PubMed

    Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

  13. Decomposition schemes with optimal soft morphological denoising filters

    NASA Astrophysics Data System (ADS)

    Koivisto, Pertti T.; Huttunen, Heikki; Kuosmanen, Pauli

    1997-04-01

    The filtering performance of the soft morphological filters in decomposition schemes is studied. Optimal soft morphological filters for the filtering of the decomposition bands are sought and their properties are analyzed. The performance and properties of the optimal filters found are compared to those of the corresponding optimal composite soft morphological filters. Also, the applicability of different decomposition methods, especially those related to soft morphological filters, is studied.

  14. Advanced Stepped-Impedance Dual-Band Filters With Wide Second Stopbands

    E-print Network

    Bornemann, Jens

    Advanced Stepped-Impedance Dual-Band Filters With Wide Second Stopbands. Marjan Mokhtaari, Jens Bornemann, et al.; Royal Military College of Canada, Kingston, ON, Canada K7K 7B4. Abstract (fragment): Advanced dual-band stepped-impedance resonators ... for advanced dual-band filter applications. Keywords: filter design; stepped-impedance resonators; dual-band filters.

  15. Optimization Integrator for Large Time Steps.

    PubMed

    Gast, Theodore F; Schroeder, Craig; Stomakhin, Alexey; Jiang, Chenfanfu; Teran, Joseph M

    2015-10-01

    Practical time steps in today's state-of-the-art simulators typically rely on Newton's method to solve large systems of nonlinear equations. In practice, this works well for small time steps but is unreliable at large time steps at or near the frame rate, particularly for difficult or stiff simulations. We show that recasting backward Euler as a minimization problem allows Newton's method to be stabilized by standard optimization techniques with some novel improvements of our own. The resulting solver is capable of solving even the toughest simulations at the [Formula: see text] frame rate and beyond. We show how simple collisions can be incorporated directly into the solver through constrained minimization without sacrificing efficiency. We also present novel penalty collision formulations for self collisions and collisions against scripted bodies designed for the unique demands of this solver. Finally, we show that these techniques improve the behavior of Material Point Method (MPM) simulations by recasting it as an optimization problem. PMID:26357249
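
    The recasting of backward Euler as a minimization is easy to reproduce for a toy system. The sketch below uses a generic quasi-Newton minimizer in place of the paper's stabilized Newton solver and assumes a single scalar mass with a user-supplied potential.

    ```python
    # Minimal sketch of backward Euler recast as minimization. The paper's
    # stabilized Newton solver and collision handling are replaced here by
    # scipy's BFGS minimizer on a single-mass system.
    import numpy as np
    from scipy.optimize import minimize

    def backward_euler_step(x_n, v_n, dt, mass, potential, potential_grad):
        """Advance (x_n, v_n) one step by minimizing
        E(x) = 0.5*m*||x - x_n - dt*v_n||^2 + dt^2 * potential(x)."""
        x_star = x_n + dt * v_n
        def energy(x):
            return 0.5 * mass * np.sum((x - x_star) ** 2) + dt ** 2 * potential(x)
        def grad(x):
            return mass * (x - x_star) + dt ** 2 * potential_grad(x)
        x_new = minimize(energy, x_star, jac=grad, method="BFGS").x
        v_new = (x_new - x_n) / dt
        return x_new, v_new

    # Example: a unit mass on a spring (stiffness 4) pulled toward the origin.
    x, v = np.array([1.0]), np.array([0.0])
    for _ in range(10):
        x, v = backward_euler_step(x, v, dt=0.1, mass=1.0,
                                   potential=lambda x: 0.5 * 4.0 * np.sum(x ** 2),
                                   potential_grad=lambda x: 4.0 * x)
    ```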

  16. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. 
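
    The search over the complex constant k can be sketched as follows. The minimum-Euclidean-distance mapping onto realizable SLM values follows the description above, but the metric used here (on-axis peak intensity over passed energy) and the grid ranges are simplified assumptions rather than MEDOF's actual implementation.

    ```python
    # Minimal sketch of the minimum-Euclidean-distance mapping at the heart of
    # MEDOF: for each trial complex constant k, every ideal filter value is
    # replaced by the nearest value the SLM can realize, and the k whose mapped
    # filter scores best on the chosen metric is kept. The metric is a
    # simplified stand-in for the analytical metrics MEDOF supports.
    import numpy as np

    def medof_search(ideal_filter, slm_values, signal_spectrum,
                     mags=np.linspace(0.1, 3.0, 30),
                     phases=np.linspace(0, 2 * np.pi, 36)):
        """ideal_filter, signal_spectrum: same-shape complex arrays;
        slm_values: 1-D array of realizable complex SLM transmittances."""
        slm_values = np.asarray(slm_values)
        best_metric, best_filter, best_k = -np.inf, None, None
        for k in (m * np.exp(1j * p) for m in mags for p in phases):
            target = k * ideal_filter
            # map each pixel to the nearest realizable value (minimum Euclidean distance)
            idx = np.argmin(np.abs(target[..., None] - slm_values), axis=-1)
            realized = slm_values[idx]
            passed = signal_spectrum * realized
            peak = np.abs(passed.sum()) ** 2              # on-axis correlation peak
            energy = np.sum(np.abs(passed) ** 2) + 1e-30  # passed signal energy
            if peak / energy > best_metric:
                best_metric, best_filter, best_k = peak / energy, realized, k
        return best_filter, best_k, best_metric
    ```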
MEDOF was developed in 1992-1993.

  17. Optimal design of active EMC filters

    NASA Astrophysics Data System (ADS)

    Chand, B.; Kut, T.; Dickmann, S.

    2013-07-01

    A recent trend in the automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

  18. Training-based optimization of soft morphological filters

    NASA Astrophysics Data System (ADS)

    Koivisto, Pertti; Huttunen, Heikki; Kuosmanen, Pauli

    1996-07-01

    Soft morphological filters form a large class of nonlinear filters with many desirable properties. However, few design methods exist for these filters. This paper demonstrates how optimization schemes, simulated annealing and genetic algorithms, can be employed in the search for soft morphological filters having optimal performance in a given signal processing task. Furthermore, the properties of the achieved optimal soft morphological filters in different situations are analyzed.

  19. A SIMULATION-BASED OPTIMIZATION APPROACH TO POLYMER EXTRUSION FILTER

    E-print Network

    Jenkins, Lea

    A SIMULATION-BASED OPTIMIZATION APPROACH TO POLYMER EXTRUSION FILTER DESIGN. K. R. Fowler et al. Abstract (fragment): ... methods for finding optimal parameters for the filter such that its lifetime is maximized, ... a model that describes the deposition of debris particles in the filter. Optimization algorithms are used ...

  20. Variable-step-size LMS adaptive filter for digital chromatic dispersion compensation in PDM-QPSK coherent transmission system

    NASA Astrophysics Data System (ADS)

    Xu, Tianhua; Jacobsen, Gunnar; Popov, Sergei; Li, Jie; Wang, Ke; Friberg, Ari T.

    2009-11-01

    High bit rate optical communication systems face the challenge of tolerance to linear and nonlinear fiber impairments. Digital filters in coherent optical receivers can be used to mitigate the chromatic dispersion entirely in the optical transmission system. In this paper, a least mean square adaptive filter has been developed for chromatic dispersion equalization in a 112-Gbit/s polarization division multiplexed quadrature phase shift keying coherent optical transmission system established on the VPIphotonics simulation platform. It is found that the chromatic dispersion equalization shows better performance when a smaller step size is used. However, a smaller step size in the least mean square filter leads to slower iterative operation to achieve guaranteed convergence. To resolve this contradiction, an adaptive filter employing a variable-step-size least mean square algorithm is proposed to compensate the chromatic dispersion in the 112-Gbit/s coherent communication system. The variable-step-size least mean square filter provides a compromise between the chromatic dispersion equalization performance and the convergence speed of the algorithm. Meanwhile, the required tap number and the converged tap weight distribution of the variable-step-size least mean square filter for a given fiber chromatic dispersion are analyzed and discussed.
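
    A variable-step-size LMS update of this kind can be sketched compactly. The specific step-size recursion below (mu updated as a*mu + g*|e|^2 and clipped to a fixed range) is one common choice and is an assumption here; the paper's exact rule and the coherent-receiver equalizer structure are not reproduced.

    ```python
    # Minimal sketch of a variable-step-size complex LMS equalizer: the step
    # size grows when the error is large (fast convergence) and shrinks as the
    # error falls (low steady-state distortion). The update rule for mu is an
    # illustrative choice, not necessarily the one used in the paper.
    import numpy as np

    def vss_lms(x, d, n_taps=15, mu=1e-3, mu_min=1e-5, mu_max=1e-2, a=0.97, g=1e-3):
        """x: received (complex) samples, d: desired symbols of the same length.
        Returns the equalized output and the final tap weights."""
        w = np.zeros(n_taps, dtype=complex)
        y = np.zeros(len(d), dtype=complex)
        for n in range(n_taps, len(d)):
            u = x[n - n_taps:n][::-1]            # regressor (most recent sample first)
            y[n] = np.vdot(w, u)                 # filter output w^H u
            e = d[n] - y[n]
            w = w + mu * np.conj(e) * u          # complex LMS weight update
            mu = np.clip(a * mu + g * np.abs(e) ** 2, mu_min, mu_max)
        return y, w
    ```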

  1. Stepped Impedance Resonators in Triple Band Bandpass Filter Design for Wireless Communication Systems

    SciTech Connect

    Eroglu, Abdullah

    2010-01-01

    A triple band microstrip tri-section bandpass filter using stepped impedance resonators (SIRs) is designed, simulated, built, and measured using a hairpin structure. The complete design procedure is given in detail, from the analytical stage to the implementation stage. The coupling between SIRs is investigated for the first time in detail by studying its effect on the filter characteristics, including bandwidth and attenuation, to optimize the filter performance. The simulation of the filter is performed using a method-of-moments based 2.5D planar electromagnetic simulator. The filter is then implemented on RO4003 material and measured. The simulated and measured results are compared and found to be very close. The effect of coupling on the filter performance is then investigated using the electromagnetic simulator. It is shown that the coupling effect between SIRs can be used as a design knob to obtain a bandpass filter with better performance for the desired frequency band using the proposed filter topology. The results of this work can be used in wireless communication systems where multiple frequency bands are needed.

  2. FIR Filter Design via Spectral Factorization and Convex Optimization 1 FIR Filter Design via Spectral Factorization

    E-print Network

    FIR Filter Design via Spectral Factorization and Convex Optimization. Lecture slides, UCSB, 10/24/97. Outline: convex optimization and interior-point methods; FIR filters and magnitude specifications; spectral factorization; examples (lowpass filter).

  3. Optimal digital filtering for tremor suppression.

    PubMed

    Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R

    2000-05-01

    Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http:¿tremor-suppression.com. PMID:10851810

  4. Optimal Estimation 5.3 State Space Kalman Filters

    E-print Network

    Nourbakhsh, Illah

    Chapter 5: Optimal Estimation, Part 3. Section 5.3: State Space Kalman Filters. Mobile Robotics, Prof. Alonzo Kelly, CMU Robotics Institute. Outline: 5.3.1 Introduction; 5.3.2 Linear Discrete Time Kalman Filter; 5.3.3 Kalman Filters for Nonlinear Systems; 5.3.4 Simple Example: 2D Mobile Robot.

  5. GNSS data filtering optimization for ionospheric observation

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.

    2015-12-01

    In the last years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS data, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, usually, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under or overestimate the actual impact of multipath due to local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim to tackle the non-ionospheric effects induced by multipath at ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aiming at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained by the GNSS signal in a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy site under quiet ionospheric conditions, the SOLIDIFY optimization maximizes the quality, instead of the quantity, of the data.

  6. Optimal edge filters explain human blur detection.

    PubMed

    McIlhagga, William H; May, Keith A

    2012-01-01

    Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222

  7. Evolutionary Gabor Filter Optimization with Application to Vehicle Detection

    E-print Network

    Bebis, George

    Evolutionary Gabor Filter Optimization with Application to Vehicle Detection. Zehang Sun, George Bebis, et al. Abstract (fragment): ... of Gabor filters in pattern classification, their design and selection have been mostly done on a trial and error basis. Existing techniques are either only suitable for a small number of filters or less problem ...

  8. Optimization of photon correlations by frequency filtering

    NASA Astrophysics Data System (ADS)

    González-Tudela, Alejandro; del Valle, Elena; Laussy, Fabrice P.

    2015-04-01

    Photon correlations are a cornerstone of quantum optics. Recent works [E. del Valle, New J. Phys. 15, 025019 (2013), 10.1088/1367-2630/15/2/025019; A. Gonzalez-Tudela et al., New J. Phys. 15, 033036 (2013), 10.1088/1367-2630/15/3/033036; C. Sanchez Muñoz et al., Phys. Rev. A 90, 052111 (2014), 10.1103/PhysRevA.90.052111] have shown that by keeping track of the frequency of the photons, rich landscapes of correlations are revealed. Stronger correlations are usually found where the system emission is weak. Here, we characterize both the strength and signal of such correlations, through the introduction of the "frequency-resolved Mandel parameter." We study a plethora of nonlinear quantum systems, showing how one can substantially optimize correlations by combining parameters such as pumping, filtering windows and time delay.

  9. Optimal filtering of the LISA data

    E-print Network

    Andrzej Krolak; Massimo Tinto; Michele Vallisneri

    2007-07-19

    The LISA time-delay-interferometry responses to a gravitational-wave signal are rewritten in a form that accounts for the motion of the LISA constellation around the Sun; the responses are given in closed analytic forms valid for any frequency in the band accessible to LISA. We then present a complete procedure, based on the principle of maximum likelihood, to search for stellar-mass binary systems in the LISA data. We define the required optimal filters, the amplitude-maximized detection statistic (analogous to the F statistic used in pulsar searches with ground-based interferometers), and discuss the false-alarm and detection probabilities. We test the procedure in numerical simulations of gravitational-wave detection.

  10. Multispectral image denoising with optimized vector bilateral filter.

    PubMed

    Peng, Honghong; Rao, Raghuveer; Dianat, Sohail A

    2014-01-01

    Vector bilateral filtering has been shown to provide good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal to noise ratios (SNRs). Typical vector bilateral filtering described in the literature does not use parameters satisfying optimality criteria. We introduce an approach for selection of the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimization of the Stein's unbiased risk estimate of this nonlinear estimator. Along the way, we provide a plausibility argument through an analytical example as to why vector bilateral filtering outperforms bandwise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared with several other approaches. PMID:24184727

  11. Optimizing filtered backprojection reconstruction for a breast tomosynthesis prototype device

    NASA Astrophysics Data System (ADS)

    Mertelmeier, Thomas; Orman, Jasmina; Haerer, Wolfgang; Dudam, Mithun K.

    2006-03-01

    Digital breast tomosynthesis is a new technique intended to overcome the limitations of conventional projection mammography by reconstructing slices through the breast from projection views acquired from different angles with respect to the breast. We formulate a general theory of filtered backprojection reconstruction for linear tomosynthesis. The filtering step consists of an MTF inversion filter, a spectral filter, and a slice thickness filter. In this paper the method is applied first to simulated data to understand the basic effects of the various filtering steps. We then demonstrate the impact of the filter functions with simulated projections and with clinical data acquired with a research breast tomosynthesis system. With this reconstruction method the image quality can be controlled regarding noise and spatial resolution. In a wide range of spatial frequencies the slice thickness can be kept constant and artifacts caused by the incompleteness of the data can be suppressed.
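
    The frequency-domain filtering step can be illustrated on a projection array. The ramp filter with a Hann roll-off below is a generic stand-in for the MTF-inversion, spectral, and slice-thickness filters described in the abstract; the backprojection itself is omitted.

    ```python
    # Minimal sketch of the filtering step before backprojection: each
    # projection row is filtered in the frequency domain. The apodized ramp
    # filter is an illustrative stand-in for the paper's filter chain.
    import numpy as np

    def filter_projection_rows(projection, pixel_pitch=0.1):
        """projection: 2D array (rows x detector pixels); returns filtered rows."""
        n = projection.shape[-1]
        freqs = np.fft.rfftfreq(n, d=pixel_pitch)             # cycles per mm
        ramp = np.abs(freqs)                                   # spectral (ramp) filter
        apod = 0.5 * (1 + np.cos(np.pi * freqs / freqs[-1]))   # Hann roll-off to limit noise
        spectrum = np.fft.rfft(projection, axis=-1)
        return np.fft.irfft(spectrum * ramp * apod, n=n, axis=-1)
    ```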

  12. Dual-Band Stepped-Impedance Filters For Ultra-Wideband Applications

    E-print Network

    Bornemann, Jens

    Dual-Band Stepped-Impedance Filters For Ultra-Wideband Applications Marjan Mokhtaari# , Jens zeros within each passband. Parallel open-end high impedance segments in stepped-impedance resonators the multiple resonance properties of stepped-impedance resonators (SIR's), which are also commonly used in dual

  13. Initial steps of inactivation at the K+ channel selectivity filter

    PubMed Central

    Thomson, Andrew S.; Heer, Florian T.; Smith, Frank J.; Hendron, Eunan; Bernèche, Simon; Rothberg, Brad S.

    2014-01-01

    K+ efflux through K+ channels can be controlled by C-type inactivation, which is thought to arise from a conformational change near the channel’s selectivity filter. Inactivation is modulated by ion binding near the selectivity filter; however, the molecular forces that initiate inactivation remain unclear. We probe these driving forces by electrophysiology and molecular simulation of MthK, a prototypical K+ channel. Either Mg2+ or Ca2+ can reduce K+ efflux through MthK channels. However, Ca2+, but not Mg2+, can enhance entry to the inactivated state. Molecular simulations illustrate that, in the MthK pore, Ca2+ ions can partially dehydrate, enabling selective accessibility of Ca2+ to a site at the entry to the selectivity filter. Ca2+ binding at the site interacts with K+ ions in the selectivity filter, facilitating a conformational change within the filter and subsequent inactivation. These results support an ionic mechanism that precedes changes in channel conformation to initiate inactivation. PMID:24733889

  14. An optimal blind temporal motion blur deconvolution filter Yohann Tendero

    E-print Network

    Ferguson, Thomas S.

    An optimal blind temporal motion blur deconvolution filter. Yohann Tendero and Jean-Michel Morel. Abstract (fragment): ... a filter restoring blindly any nonuniform motion blur with an amplitude below one pixel per frame; further examples and a C++ implementation are available at http://www.math.ucla.edu/~tendero/blind ...

  15. Polymeric wavelength division multiplexing coupler with fiber guide and filter trench for bidirectional communication fabricated by one-step replication

    NASA Astrophysics Data System (ADS)

    Sugihara, Okihiro; Kaino, Toshikuni; Susanto Tan, Freddy

    2014-08-01

    We fabricated and characterized a polymeric bidirectional wavelength division multiplexing (WDM) coupler for a graded-index plastic optical fiber (GI-POF). We fabricated the device with fiber guides and a filter trench by a simple one-step soft lithography method. We evaluated the performance of the fabricated device with an optimal design structure at wavelengths of 850 and 790 nm. We determined the insertion loss, isolation, and directivity to be less than 3 dB, more than 20 dB, and more than 20 dB, respectively. We also demonstrated bidirectional communications with more than 2.5 Gbps/? for an optical fiber length of 150 m by aligning the GI-POF in the fiber guides and inserting a WDM filter in the filter trench with passive alignment.

  16. Optimally Robust Kalman Filtering at Work: AO-, IO-, and Simultaneously IO-and AO-Robust Filters

    E-print Network

    Ruckdeschel, Peter

    Optimally Robust Kalman Filtering at Work: AO-, IO-, and Simultaneously IO- and AO-Robust Filters. Abstract (fragment): We take up optimality results for robust Kalman filtering from Ruckdeschel (2001, 2010), where ... (2006), Fried et al. (2007). Keywords: robustness, Kalman filter, innovation outlier, additive outlier.

  17. Optimal filter systems for photometric redshift estimation

    E-print Network

    N. Benitez; M. Moles; J. A. L. Aguerri; E. Alfaro; T. Broadhurst; J. Cabrera; F. J. Castander; J. Cepa; M. Cervino; D. Cristobal-Hornillos; A. Fernandez-Soto; R. M. Gonzalez-Delgado; L. Infante; I. Marquez; V. J. Martinez; J. Masegosa; A. Del Olmo; J. Perea; F. Prada; J. M. Quintana; S. F. Sanchez

    2008-12-18

    In the next years, several cosmological surveys will rely on imaging data to estimate the redshift of galaxies, using traditional filter systems with 4-5 optical broad bands; narrower filters improve the spectral resolution, but strongly reduce the total system throughput. We explore how photometric redshift performance depends on the number of filters n_f, characterizing the survey depth through the fraction of galaxies with unambiguous redshift estimates. For a combination of total exposure time and telescope imaging area of 270 hrs m^2, 4-5 filter systems perform significantly worse, both in completeness depth and precision, than systems with n_f >= 8 filters. Our results suggest that for low n_f, the color-redshift degeneracies overwhelm the improvements in photometric depth, and that even at higher n_f, the effective photometric redshift depth decreases much more slowly with filter width than naively expected from the reduction in S/N. Adding near-IR observations improves the performance of low n_f systems, but still the system which maximizes the photometric redshift completeness is formed by 9 filters with logarithmically increasing bandwidth (constant resolution) and half-band overlap, reaching ~0.7 mag deeper, with 10% better redshift precision, than 4-5 filter systems. A system with 20 constant-width, non-overlapping filters reaches only ~0.1 mag shallower than 4-5 filter systems, but has a precision almost 3 times better, dz = 0.014(1+z) vs. dz = 0.042(1+z). We briefly discuss a practical implementation of such a photometric system: the ALHAMBRA survey.

  18. Optimal filter bandwidth for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.

  19. On the Distance to Optimality of the Geometric Approximate Minimum-Energy Attitude Filter

    E-print Network

    Trumpf, Jochen

    On the Distance to Optimality of the Geometric Approximate Minimum-Energy Attitude Filter. Mohammad Zamani et al. Abstract (fragment): ... optimality of the recent geometric approximate minimum-energy (GAME) filter, an attitude filter for estimation on the rotation group SO(3). The GAME filter approximates the minimum-energy (optimal) filtering solution ...

  20. A novel gradient adaptive step size LMS algorithm with dual adaptive filters.

    PubMed

    Jiao, Yuzhong; Cheung, Rex Y P; Chow, Winnie W Y; Mok, Mark P C

    2013-01-01

    Least mean square (LMS) adaptive filter has been used to extract life signals from serious ambient noises and interferences in biomedical applications. However, a LMS adaptive filter with a fixed step size always suffers from slow convergence rate or large signal distortion due to the diversity of the application environments. An ideal adaptive filtering system should be able to adapt different environments and obtain the useful signals with low distortion. Adaptive filter with gradient adaptive step size is therefore more desirable in order to meet the demands of adaptation and convergence rate, which adjusts the step-size parameter automatically by using gradient descent technique. In this paper, a novel gradient adaptive step size LMS adaptive filter is presented. The proposed algorithm utilizes two adaptive filters to estimate gradients accurately, thus achieves good adaptation and performance. Though it uses two LMS adaptive filters, it has a low computational complexity. An active noise cancellation (ANC) system with two applications for extracting heartbeat and lung sound signals from noises is used to simulate the performance of the proposed algorithm. PMID:24110809

  1. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
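
    The effect of compensation and sharpening on a comb filter is easy to reproduce numerically. The sine-squared compensator gain and the classical Kaiser-Hamming polynomial 3H^2 - 2H^3 used below are illustrative choices, not the three-addition compensator or the optimized sharpening polynomials of the paper.

    ```python
    # Minimal sketch comparing a comb decimation filter, a droop-compensated
    # version, and the Kaiser-Hamming sharpened version 3H^2 - 2H^3.
    import numpy as np

    def comb_response(w, M, K=1):
        """Magnitude response of a K-stage comb (CIC) filter with decimation
        factor M; w is radian frequency at the input rate (w > 0)."""
        return np.abs(np.sin(w * M / 2) / (M * np.sin(w / 2))) ** K

    w = np.linspace(1e-6, np.pi, 2048)
    M = 16
    h = comb_response(w, M, K=2)
    h_comp = h * (1 + 0.3 * np.sin(w * M / 2) ** 2)   # illustrative droop compensation
    h_sharp = 3 * h_comp ** 2 - 2 * h_comp ** 3        # Kaiser-Hamming sharpening

    edge = np.searchsorted(w, np.pi / (4 * M))         # an example passband edge
    print(f"passband droop: comb {20 * np.log10(h[edge]):.2f} dB, "
          f"compensated+sharpened {20 * np.log10(h_sharp[edge]):.2f} dB")
    ```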

  2. Bayes optimal template matching for spike sorting - combining fisher discriminant analysis with optimal filtering.

    PubMed

    Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus

    2015-06-01

    Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
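
    The filtering form of the matched template can be sketched directly from the description above: whiten the template by the noise covariance and compare the filter output against a threshold. Placing the threshold halfway between the expected noise-only and spike responses is a simplification of the analytic threshold derived in the paper.

    ```python
    # Minimal sketch of template matching as noise-whitened linear filtering.
    # The halfway threshold is a simplified choice, not the paper's Bayesian one.
    import numpy as np

    def build_matched_filter(template, noise_cov):
        """template: (n,) prototypical spike waveform; noise_cov: (n, n) noise covariance."""
        f = np.linalg.solve(noise_cov, template)   # f = C^{-1} t
        threshold = 0.5 * template @ f             # halfway between 0 and t^T C^{-1} t
        return f, threshold

    def detect(signal, f, threshold):
        """Slide the filter over the signal and flag windows exceeding the threshold."""
        n = len(f)
        scores = np.array([f @ signal[i:i + n] for i in range(len(signal) - n + 1)])
        return np.flatnonzero(scores > threshold), scores
    ```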

  3. Ares-I Bending Filter Design using a Constrained Optimization Approach

    NASA Technical Reports Server (NTRS)

    Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth

    2008-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.

  4. Multilayer Stepped-Impedance Resonator Band-Pass Filter Implementing Using Low Temperature Cofired Ceramic Structure

    NASA Astrophysics Data System (ADS)

    Chen, Lih-Shan; Weng, Min-Hung; Huang, Tsung-Hui; Chen, Han-Jan; Su, Sheng-Fu; Houng, Mau-Phon

    2004-10-01

    A tapped-line stepped-impedance resonator band-pass filter was implemented using a low temperature cofired multilayer-ceramic structure. By constructing a multilayer structure, a compact band-pass filter was realized. Moreover, the multilayer structure demonstrated an extra cross-coupling effect that produced extra transmission zeros in the stopband and, hence, realized a highly steep passband skirt. The center frequency of the fabricated band-pass filter was 6.075 GHz and the 3 dB fractional bandwidth was 18%. The measured insertion loss and return loss of the filter were -0.31 dB and -28 dB, respectively. The measured response of the fabricated band-pass filter was in good agreement with simulated results.

  5. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
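
    The abstract does not spell out the matrices used in the generalized eigenvalue decomposition, so the sketch below assumes a common choice: rank measurement directions by the generalized eigenproblem between the state-driven covariance in measurement space and the measurement noise covariance, and keep the leading directions. Function names and the selection criterion are assumptions, not the authors' implementation.

      import numpy as np
      from scipy.linalg import eigh

      def informative_subspace(H, P, R, k):
          # Solve (H P H^T) v = lambda R v and keep the k directions with the
          # largest generalized eigenvalues (highest signal-to-noise content).
          A = H @ P @ H.T
          lam, V = eigh(A, R)
          order = np.argsort(lam)[::-1]
          return V[:, order[:k]], lam[order[:k]]

      # Toy usage with 50 measurements and a 10-dimensional state.
      rng = np.random.default_rng(1)
      H = rng.normal(size=(50, 10))
      P = np.eye(10)
      R = np.diag(rng.uniform(0.5, 2.0, 50))
      T, lam = informative_subspace(H, P, R, k=5)   # project y with T.T @ y before the EnKF update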

  6. Design of optimal correlation filters for hybrid vision systems

    NASA Technical Reports Server (NTRS)

    Rajan, Periasamy K.

    1990-01-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, they take too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of correlation between two images. Fortunately, because of the inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out the object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, coupled filters. This report presents some of these algorithms in detail along with their derivations.

  7. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
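
    A minimal sketch of the iterative two-step procedure described above: for each candidate set of delays, the echo amplitudes follow from an overdetermined linear least-squares fit, and the delays are found by an organized scan of the residual objective. The sweep parameters, grid, and noise level are illustrative only.

      import numpy as np
      from itertools import product

      def model_matrix(freqs, delays):
          # Columns are complex exponentials exp(-j*2*pi*f*tau), one per echo delay.
          return np.exp(-2j * np.pi * np.outer(freqs, delays))

      def fit_amplitudes(y, freqs, delays):
          # Step 1: overdetermined least squares for the echo amplitudes.
          A = model_matrix(freqs, delays)
          amps, *_ = np.linalg.lstsq(A, y, rcond=None)
          resid = y - A @ amps
          return amps, np.vdot(resid, resid).real

      def scan_delays(y, freqs, delay_grid, n_echoes=2):
          # Step 2: organized scan of the nonlinear objective over ordered delay combinations.
          best = None
          for taus in product(delay_grid, repeat=n_echoes):
              if not all(t1 < t2 for t1, t2 in zip(taus, taus[1:])):
                  continue
              amps, cost = fit_amplitudes(y, freqs, np.array(taus))
              if best is None or cost < best[2]:
                  best = (np.array(taus), amps, cost)
          return best

      # Toy usage: two echoes observed over a stepped frequency sweep (values illustrative).
      freqs = np.linspace(8e9, 12e9, 201)
      y = model_matrix(freqs, [1.0e-9, 1.3e-9]) @ np.array([1.0, 0.5])
      y = y + 0.01 * (np.random.randn(201) + 1j * np.random.randn(201))
      taus, amps, cost = scan_delays(y, freqs, np.linspace(0.5e-9, 2.0e-9, 61))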

  8. Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation

    PubMed Central

    Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

    2013-01-01

    This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

  9. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
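
    As a sketch of the z-plane parameterization idea (not the flight software or the authors' exact filter structure), the fragment below parameterizes a stable second-order recursive low-pass by pole radius and angle, normalizes DC gain, and evaluates the phase at a bending frequency and the attenuation at a higher parasitic mode with scipy.signal.freqz. All numeric values are placeholders, not Ares or SLS parameters.

      import numpy as np
      from scipy.signal import freqz

      def biquad_lowpass(pole_radius, pole_angle):
          # Stable 2nd-order recursive low-pass: complex pole pair inside the unit
          # circle, both zeros at z = -1 (Nyquist), gain normalized to unity at DC.
          p = pole_radius * np.exp(1j * pole_angle)
          a = np.real(np.poly([p, np.conj(p)]))
          b = np.real(np.poly([-1.0, -1.0]))
          b = b * np.polyval(a, 1.0) / np.polyval(b, 1.0)
          return b, a

      def evaluate(b, a, fs, f_bend, f_parasitic):
          # Phase at the bending frequency and attenuation at a parasitic structural mode.
          w, h = freqz(b, a, worN=[f_bend, f_parasitic], fs=fs)
          phase_deg = np.degrees(np.angle(h[0]))
          atten_db = -20.0 * np.log10(abs(h[1]))
          return phase_deg, atten_db

      b, a = biquad_lowpass(pole_radius=0.9, pole_angle=2 * np.pi * 2.0 / 50.0)
      print(evaluate(b, a, fs=50.0, f_bend=2.0, f_parasitic=12.0))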

  10. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow but also for improved performance. Generally, in optical flow computation, filtering is first applied to the original input images and the images are then resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different filtering methods applied to the iterative refined Lucas-Kanade algorithm, we identify the best filtering practice. With the Gaussian smoothing filter selected, an empirical approach for estimating the Gaussian variance is introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
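
    A small sketch of the pipeline described above using OpenCV: Gaussian pre-filtering followed by pyramidal Lucas-Kanade tracking. The intensity-to-sigma mapping below is a placeholder, since the fitted relation from the paper is not given in the abstract, and the image file names are hypothetical.

      import cv2
      import numpy as np

      def sigma_from_intensity(gray):
          # Placeholder mapping: sigma grows with mean intensity (not the paper's fitted relation).
          return 0.5 + 1.5 * float(gray.mean()) / 255.0

      def lk_flow(prev_gray, next_gray, points):
          # Gaussian pre-filtering, then pyramidal Lucas-Kanade point tracking.
          sigma = sigma_from_intensity(prev_gray)
          prev_s = cv2.GaussianBlur(prev_gray, (0, 0), sigma)
          next_s = cv2.GaussianBlur(next_gray, (0, 0), sigma)
          new_pts, status, err = cv2.calcOpticalFlowPyrLK(
              prev_s, next_s, points, None, winSize=(21, 21), maxLevel=3)
          return new_pts, status

      # Usage sketch: track corners between two consecutive frames (file names are placeholders).
      prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
      next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
      points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
      flow_pts, status = lk_flow(prev_gray, next_gray, points)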

  11. An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images

    PubMed Central

    Coupé, Pierrick; Yger, Pierre; Prima, Sylvain; Hellier, Pierre; Kervrann, Charles; Barillot, Christian

    2008-01-01

    A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the Non Local (NL) means filter [1]. The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2D images, but reducing the computational burden is a critical aspect in extending the method to 3D images. To overcome this problem, we propose improvements to reduce the computational complexity. These improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully-automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: (a) an automatic tuning of the smoothing parameter, (b) a selection of the most relevant voxels, (c) a blockwise implementation, and (d) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb [2]. The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (Anisotropic Diffusion [3] and Total Variation minimization process [4]) in terms of accuracy (measured by the Peak Signal to Noise Ratio) with low computation time. Finally, qualitative results on real data are presented. PMID:18390341
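
    For orientation, the following is a minimal pixelwise (rather than blockwise) NL-means weight computation in 2D: each pixel is restored as a similarity-weighted average of pixels with similar surrounding patches. The automatic tuning of the smoothing parameter here is a crude stand-in for the paper's formula, and patch, search, and constant values are illustrative.

      import numpy as np

      def nlmeans_pixel(img, i, j, patch=3, search=7, h=None):
          # Restore pixel (i, j) as a weighted average of pixels whose patches look similar;
          # h controls how fast the similarity weights decay.
          r, s = patch // 2, search // 2
          if h is None:
              h = 1.0 * img.std()   # illustrative automatic tuning, not the paper's formula
          ref = img[i - r:i + r + 1, j - r:j + r + 1]
          num, den = 0.0, 0.0
          for ii in range(i - s, i + s + 1):
              for jj in range(j - s, j + s + 1):
                  cand = img[ii - r:ii + r + 1, jj - r:jj + r + 1]
                  d2 = np.mean((ref - cand) ** 2)
                  w = np.exp(-d2 / (h ** 2))
                  num += w * img[ii, jj]
                  den += w
          return num / den

      # Usage on the interior of a noisy image (border handling omitted for brevity).
      rng = np.random.default_rng(0)
      img = rng.normal(100.0, 10.0, (64, 64))
      denoised_center = nlmeans_pixel(img, 32, 32)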

  12. Optimization of filtering schemes for broadband astro-combs.

    PubMed

    Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X

    2012-10-22

    To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error. PMID:23187265

  13. Two-step intensity modulated arc therapy (2-step IMAT) with segment weight and width optimization

    PubMed Central

    2011-01-01

    Background 2-step intensity modulated arc therapy (IMAT) is a simplified IMAT technique which delivers the treatment over typically two continuous gantry rotations. The aim of this work was to implement the technique into a computerized treatment planning system and to develop an approach to optimize the segment weights and widths. Methods 2-step IMAT was implemented into the Prism treatment planning system. A graphical user interface was developed to generate the plan segments automatically based on the anatomy in the beam's-eye-view. The segment weights and widths of 2-step IMAT plans were subsequently determined in Matlab using a dose-volume based optimization process. The implementation was tested on a geometric phantom with a horseshoe shaped target volume and then applied to a clinical paraspinal tumour case. Results The phantom study verified the correctness of the implementation and showed a considerable improvement over a non-modulated arc. Further improvements in the target dose uniformity after the optimization of 2-step IMAT plans were observed for both the phantom and clinical cases. For the clinical case, optimizing the segment weights and widths reduced the maximum dose from 114% of the prescribed dose to 107% and increased the minimum dose from 87% to 97%. This resulted in an improvement in the homogeneity index of the target dose for the clinical case from 1.31 to 1.11. Additionally, the high dose volume V105 was reduced from 57% to 7% while the maximum dose in the organ-at-risk was decreased by 2%. Conclusions The intuitive and automatic planning process implemented in this study increases the prospect of the practical use of 2-step IMAT. This work has shown that 2-step IMAT is a viable technique able to achieve highly conformal plans for concave target volumes with the optimization of the segment weights and widths. Future work will include planning comparisons of the 2-step IMAT implementation with fixed gantry intensity modulated radiotherapy (IMRT) and commercial IMAT implementations. PMID:21631957

  14. Particulate Flow over a Backward Facing Step Preceding a Filter Medium

    NASA Astrophysics Data System (ADS)

    Chambers, Frank; Ravi, Krishna

    2010-11-01

    Computational Fluid Dynamic predictions were performed for particulate flows over a backward facing step with and without a filter downstream. The carrier phase was air and the monodisperse particles were dust with diameters of 1 to 50 microns. The step expansion ratio was 2:1, and the filter was located at 4.25 and 6.75 step heights downstream. Computations were performed for Reynolds numbers of 6550 and 10000. The carrier phase turbulence was modeled using the k-epsilon RNG model. The particles were modeled using a discrete phase model and particle dispersion was modeled using stochastic tracking. The filter was modeled as a porous medium, and the porous jump boundary condition was used. The particle boundary condition applied at the walls was "reflect" and at the filter was "trap." The presence of the porous medium showed a profound effect on the recirculation zone length, velocity profiles, and particle trajectories. The velocity profiles were compared to experiments. As particle size increased, the number of particles entering the recirculation zone decreased. The filter at the farther downstream location promoted more particles becoming trapped in the recirculation zone.

  15. Web image annotation using two-step filtering on social tags

    NASA Astrophysics Data System (ADS)

    Cho, Sunyoung; Cha, Jaeseong; Byun, Hyeran

    2011-03-01

    Web image annotation has become an important issue with the explosion of web images and the necessity of effective image search. Social tags have recently been utilized for image annotation because they can reflect the user's tagging tendency and reduce the semantic gap. However, an effective filtering procedure is required to extract the relevant tags because of the user's subjectivity and the presence of noisy tags. In this paper, we propose a two-step filtering approach on social tags for image annotation. This method conducts the filtering and verification tasks by analyzing the tags of visually neighboring images using a voting method and co-occurrence analysis. Our method consists of the following three steps: 1) a tag candidate set is formed by searching the visual neighbor images; 2) from the given tag candidate set, coarse filtering is conducted by tag grouping and a voting technique; 3) dense filtering is conducted by similarity verification on the coarsely filtered candidate tag set. To evaluate the performance of our approach, we conduct experiments on a social-tagged image dataset obtained from Flickr. We compare the annotation accuracy of the voting method and our proposed method. Our experimental results show that our method improves image annotation accuracy.

  16. Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2008-01-01

    The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained including documentation available, potential partner collaborations, and next steps in FY09.

  17. Optimal filtering in multipulse sequences for nuclear quadrupole resonance detection

    NASA Astrophysics Data System (ADS)

    Osokin, D. Ya.; Khusnutdinov, R. R.; Mozzhukhin, G. V.; Rameev, B. Z.

    2014-05-01

    The application of multipulse sequences to nuclear quadrupole resonance (NQR) detection of explosive and narcotic substances has been studied. Various approaches to increasing the signal-to-noise ratio (SNR) of signal detection are considered. We discuss two modifications of the phase-alternated multiple-pulse sequence (PAMS): the 180° pulse sequence with a preparatory pulse and the 90° pulse sequence. The advantages of optimal filtering for detecting NQR in the case of coherent steady-state precession are analyzed. It is shown that this technique is effective in filtering out high-frequency and low-frequency noise and in increasing the reliability of NQR detection. Our analysis also shows that the PAMS with 180° pulses is more effective than the PSL sequence from the point of view of applying the optimal filtering procedure to the steady-state NQR signal.

  18. Optimal Correlation Filters for Images with Signal-Dependent Noise

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Walkup, John F.

    1994-01-01

    We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.

  19. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  1. Degeneracy, frequency response and filtering in IMRT optimization.

    PubMed

    Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D; Promberger, Claus

    2004-07-01

    This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques. PMID:15285252

  2. Optimal color image restoration: Wiener filter and quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, we consider the model of quaternion signal degradation in which the signal is convolved with a degradation kernel and corrupted by additive noise. The classical form of this model leads to the optimal Wiener filter, where optimality is with respect to the mean square error. The frequency characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative: the quaternion Fourier transform does not map convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise and the optimal filtering of signals and images in the frequency domain and in the quaternion space.
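
    For reference, the classical (commutative) frequency-domain Wiener solution that the quaternion formulation generalizes is easy to state and implement; the sketch below works in 1D with a known blur kernel and assumed noise and signal power levels, and the toy values are illustrative.

      import numpy as np

      def wiener_deconvolve(y, kernel, noise_power, signal_power):
          # Classical Wiener filter: X_hat(w) = conj(K(w)) / (|K(w)|^2 + noise_power/signal_power) * Y(w).
          n = len(y)
          K = np.fft.fft(kernel, n)
          Y = np.fft.fft(y)
          H = np.conj(K) / (np.abs(K) ** 2 + noise_power / signal_power)
          return np.real(np.fft.ifft(H * Y))

      # Toy usage: blur a step signal, add noise, then restore it.
      rng = np.random.default_rng(0)
      x = np.r_[np.zeros(64), np.ones(64)]
      kernel = np.ones(5) / 5.0                         # moving-average blur
      y = np.convolve(x, kernel)[:x.size] + rng.normal(0, 0.05, x.size)
      x_hat = wiener_deconvolve(y, kernel, noise_power=0.05 ** 2, signal_power=1.0)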

  3. Quasi-Elliptic Dual-Band Stepped-Impedance Filters With Folded Parallel High-Impedance Segments

    E-print Network

    Bornemann, Jens

    Quasi-Elliptic Dual-Band Stepped-Impedance Filters With Folded Parallel High-Impedance Segments-- A design approach is presented for dual-band filters formed by cascaded stepped-impedance resonators in microstrip technology. The resonators feature folded parallel high- impedance sections which permit resonator

  4. Fuzzy membership function optimization for system identification using an extended Kalman filter

    E-print Network

    Simon, Dan

    An extended Kalman filter is used to optimize fuzzy membership functions for system modeling (system identification). An advantage of the proposed approach is that the resulting system acts as a noise-reducing filter.

  5. Performance evaluation of iterated extended Kalman filter with variable step-length

    NASA Astrophysics Data System (ADS)

    Havlík, Jind?ich; Straka, Ond?ej

    2015-11-01

    The paper deals with state estimation of nonlinear stochastic dynamic systems. In particular, the iterated extended Kalman filter is studied. Three recently proposed iterated extended Kalman filter algorithms are analyzed in terms of their performance and the specification of a user design parameter, specifically the step-length size. The performance is compared using the root mean square error of the state estimate and the noncredibility index assessing the covariance matrix of the estimation error. The performance and the influence of the design parameter are analyzed in a numerical simulation.

  6. "The Design of a Compact, Wide Spurious-Suppression Bandwidth Bandpass Filter Using Stepped Impedance Resonators"

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter length is greater than a multiple of the quarter wavelength at the center passband frequency (lambda(sub g)/4), the proposed filter length is less than (n + 1) x lambda(sub g)/8, where n is the filter order. The filter's spurious response and physical dimensions are controlled by the step impedance ratio (R) between the two transmission line sections forming a lambda(sub g)/4 resonator. The experimental result shows that, with R of 0.2, the out-of-band attenuation is greater than 40 dB, and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.

  7. A multi-dimensional procedure for BNCT filter optimization

    SciTech Connect

    Lille, R.A.

    1998-02-01

    An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is very straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A local optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model and an improvement of 50% in the dose ratio was obtained.
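
    A minimal sketch of the kind of constrained update rule the abstract describes: move the nuclide density fractions of a zone along the dose-ratio gradient, clip them to their per-nuclide bounds, and rescale the zone if the fractions sum to more than 1.0. The gradient would come from the 2-D transport calculation; here it is faked, and the step size and bounds are illustrative, not the code's actual logic.

      import numpy as np

      def update_zone(fractions, gradient, step, lower, upper):
          # One projected-gradient step on the nuclide density fractions of a material zone.
          new = fractions + step * gradient          # move in the direction of improving dose ratio
          new = np.clip(new, lower, upper)           # per-nuclide upper/lower bounds
          total = new.sum()
          if total > 1.0:                            # linear constraint: sum of fractions <= 1
              new = new / total
              new = np.maximum(new, lower)
          return new

      # Toy usage with a fake gradient; in practice it comes from the transport model.
      fractions = np.array([0.4, 0.3, 0.2])
      gradient = np.array([0.8, -0.1, 0.5])
      print(update_zone(fractions, gradient, step=0.2, lower=0.0, upper=0.6))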

  8. Particle swarm optimization-based approach for optical finite impulse response filter design

    E-print Network

    Wu, Shin-Tson

    A method is presented for the design of an optical finite impulse response (FIR) filter employing a particle swarm optimization technique.

  9. Optimal initial perturbations for El Nino ensemble prediction with ensemble Kalman filter

    E-print Network

    Kang, In-Sik

    Optimal initial perturbations for El Nino ensemble prediction are obtained with an ensemble Kalman filter (EnKF), selecting among the initial conditions generated by the EnKF. Keywords: Ensemble Kalman filter; Seasonal prediction; Optimal initial perturbation; Ensemble prediction.

  10. On an Optimal Number of Time Steps for a Sequential Solution of an Elliptic-Hyperbolic

    E-print Network

    Two procedures are provided for estimating an optimal set of time steps for the sequential solution of a coupled elliptic-hyperbolic system, and the resulting distribution of time steps is shown to yield better results than equidistant time steps.

  11. ECHO CANCELLATION BY GLOBAL OPTIMIZATION OF KAUTZ FILTERS USING AN INFORMATION THEORETIC CRITERION

    E-print Network

    Slatton, Clint

    Echo cancellation is an important practical problem. This work globally optimizes the parameters of adaptive IIR (Kautz) filters using an information-theoretic criterion, while still using gradient descent.

  12. Estimation of the error for small-sample optimal binary filter design using prior knowledge 

    E-print Network

    Sabbagh, David L

    1999-01-01

    Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, which is usually the mean absolute error (MAE) (or, equivalently, the mean square error) for binary values.

  13. Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates

    NASA Astrophysics Data System (ADS)

    Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.

    2015-12-01

    Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. In contrast to the UFIR filter, whose MSE is minimized at an optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.

  14. Optimization of the performances of correlation filters by pre-processing the input plane

    NASA Astrophysics Data System (ADS)

    Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.

    2016-01-01

    We report findings on the optimization of the performance of correlation filters. First, we propose and validate an optimization of ROC curves adapted to the correlation technique. Our analysis then suggests that pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face recognition applications. Our technical results demonstrate that this method is remarkably efficient at increasing the performance of a VanderLugt correlator.

  15. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account: minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and the technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as a Pareto front. Subsequently, the best compromise solution is determined from the Pareto front by the TOPSIS method according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation, and the performance of the dispersion spectra of the designed filter can be optimized simultaneously.
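
    A minimal sketch of the second stage: given a Pareto front of (maximum index modulation, mean dispersion error) pairs produced by NSGA-II, TOPSIS ranks the candidate designs by their relative closeness to the ideal point. The weights and the toy front below are illustrative, not values from the paper.

      import numpy as np

      def topsis(front, weights):
          # Rank Pareto-front points (rows = candidates, columns = cost criteria, lower is better)
          # by relative closeness to the ideal solution.
          norm = front / np.linalg.norm(front, axis=0)   # vector-normalize each criterion
          v = norm * weights
          ideal = v.min(axis=0)                          # best value of each cost criterion
          anti = v.max(axis=0)                           # worst value of each cost criterion
          d_best = np.linalg.norm(v - ideal, axis=1)
          d_worst = np.linalg.norm(v - anti, axis=1)
          closeness = d_worst / (d_best + d_worst)
          return np.argsort(closeness)[::-1], closeness

      # Toy Pareto front: (max index modulation, mean dispersion error) per candidate design.
      front = np.array([[1.8e-4, 3.0], [2.2e-4, 1.5], [3.0e-4, 0.8]])
      order, score = topsis(front, weights=np.array([0.5, 0.5]))
      best_design = order[0]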

  16. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul

    2015-03-01

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  17. Aalborg Universitet Optimal Design of High-Order Passive-Damped Filters for Grid-Connected Applications

    E-print Network

    Bak, Claus Leth

    This work addresses the optimal design of high-order passive-damped filters for grid-connected applications, covering LCL filters and LCL filters with multi-tuned LC traps. The LCL filter is well accepted and widely used as an interface between renewable energy systems and the grid. Index terms: harmonic passive filters, LCL filter, resonance damping, trap filter.

  18. A Two-Step Filtering approach for detecting maize and soybean phenology with time-series MODIS data

    E-print Network

    Gitelson, Anatoly

    The crop developmental stage represents essential information. A two-step filtering approach is presented for detecting the phenological stages of maize and soybean from time-series Wide Dynamic Range Vegetation Index data derived from MODIS. Keywords: maize, soybean, MODIS, shape-model fitting.

  19. Bio-desulfurization of biogas using acidic biotrickling filter with dissolved oxygen in step feed recirculation.

    PubMed

    Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu

    2015-03-01

    Triple stage and single stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention time (EBRT 100-180 s) and liquid recirculation velocity (q 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q. But the highest q at 7.1 m/h induced large amount of liquid through the media, causing a reduction in bed porosity in S-BTF and H2S removal. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 gH2S/m(3)/h. In the subsequent continuous operation test, it was found that T-BTF could maintain higher H2S elimination capacity and removal efficiency at 175.6±41.6 gH2S/m(3)/h and 89.0±6.8% versus S-BTF at 159.9±42.8 gH2S/m(3)/h and 80.1±10.2%, respectively. Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031

  20. Optimal and unbiased FIR filtering in discrete time state space with smoothing and predictive properties

    NASA Astrophysics Data System (ADS)

    Shmaliy, Yuriy S.; Ibarra-Manzano, Oscar

    2012-12-01

    We address p-shift finite impulse response optimal (OFIR) and unbiased (UFIR) algorithms for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a discrete point n over N neighboring points. The algorithms were designed for linear time-invariant state-space signal models with white Gaussian noise. The OFIR filter self-determines the initial mean square state function by solving the discrete algebraic Riccati equation. The UFIR one, represented in both the batch and iterative Kalman-like forms, does not require the noise covariances and initial errors. An example of applications is given for smoothing and predictive filtering of a two-state polynomial model. Based upon this example, we show that exact optimality is redundant when N >> 1 and still a nice suboptimal estimate can fairly be provided with a UFIR filter at a much lower cost.
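
    To illustrate the unbiased FIR idea for the two-state polynomial (constant-velocity) case mentioned above, the batch sketch below estimates position and velocity at each time from a straight-line least-squares fit over the last N samples, using neither noise covariances nor initial conditions. This is only the batch form under a polynomial-model assumption, not the authors' iterative Kalman-like recursion, and the window length and data are illustrative.

      import numpy as np

      def ufir_two_state(y, N, dt=1.0):
          # Unbiased FIR-style estimate of position and velocity at each time n,
          # from an ordinary least-squares line fit over the most recent N samples.
          k = np.arange(N)
          H = np.column_stack([np.ones(N), (k - (N - 1)) * dt])   # state referenced to window end
          est = np.full((len(y), 2), np.nan)
          for n in range(N - 1, len(y)):
              window = y[n - N + 1:n + 1]
              est[n] = np.linalg.lstsq(H, window, rcond=None)[0]
          return est   # columns: position and velocity at time n

      # Toy usage: noisy ramp; no noise statistics or initial state are supplied.
      rng = np.random.default_rng(0)
      t = np.arange(200.0)
      y = 0.3 * t + rng.normal(0, 2.0, t.size)
      est = ufir_two_state(y, N=40)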

  1. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J. (Erie, CO); Kapteyn, Henry C. (Boulder, CO)

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  2. The optimal design of photonic crystal optical devices with step-wise linear refractive index

    NASA Astrophysics Data System (ADS)

    Ma, Ji; Wu, Xiang-Yao; Li, Hai-Bo; Li, Hong; Liu, Xiao-Jing; Zhang, Si-Qi; Chen, Wan-Jin; Wu, Yi-Heng

    2015-10-01

    In this paper, we study a one-dimensional step-wise linear photonic crystal with and without a defect layer, and analyze the effect of the defect layer position, thickness, and the real and imaginary parts of its refractive index on the transmissivity, electric field distribution and output electric field intensity. From these calculations, we obtain a set of optimal parameters with which optical devices such as optical amplifiers, attenuators and optical diodes can be optimally designed using the step-wise linear photonic crystal.

  3. Statistical Design and Optimization for Adaptive Post-silicon Tuning of MEMS Filters

    E-print Network

    Li, Xin

    A novel technique of adaptive post-silicon tuning is described for reliably designing micro-electro-mechanical system (MEMS) filters for RF (radio frequency) applications that are robust to process variations.

  4. Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization

    E-print Network

    Cho, Sung-Bae

    Constructing a fingerprint database is important for performance evaluation; this work generates environmentally realistic fingerprint images via evolutionary optimization of an image filter bank. Keywords: fingerprint image generation, evolutionary algorithm, image filters, input pressure.

  5. Optimizing The Number Of Steps In Learning Tasks For Complex Skills

    ERIC Educational Resources Information Center

    Nadolski, Rob J.; Kirschner, Paul A.; van Merrienboer, Jeroen J.G.

    2005-01-01

    Background: Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. Aim: The aim of the study is to investigate the relation between the number of…

  6. Optimization of filtering schemes for broadband astro-combs

    E-print Network

    Walsworth, Ronald L.

    To realize a broadband, large-line-spacing astro-comb from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with filter cavities, power amplifiers, and highly nonlinear optical fiber that spectrally broadens the filtered and amplified narrowband frequency comb. Even a small nonlinear phase can reduce suppression of filtered comb lines and increase the radial-velocity calibration error.

  7. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  8. Optease Vena Cava Filter Optimal Indwelling Time and Retrievability

    SciTech Connect

    Rimon, Uri; Bensaid, Paul; Golan, Gil; Garniek, Alexander; Khaitovich, Boris; Dotan, Zohar; Konen, Eli

    2011-06-15

    The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare change in retrieval policy during the years: group A, 60 patients with filter retrievals performed before December 31 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A, and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 days (range, 8-122). There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and in group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days, but more failures will be encountered with this approach.

  9. Robustness of optimal binary filters: analysis and design 

    E-print Network

    Grigoryan, Artyom M

    1999-01-01

    designed. This problem is crucial for practical application since filters will always be applied to image processes that deviate from design processes. The present work treats the general concept of robust binary filters in the Bayesian framework, derives...

  10. Optimized filtering of regional and teleseismic seismograms: results of maximizing SNR measurements from the wavelet transform and filter banks

    SciTech Connect

    Leach, R.R.; Schultz, C.; Dowla, F.

    1997-07-15

    Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas which have not been well studied or may have few available recordings. Development and testing of detection and discrimination algorithms requires a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratio (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands are highly varied as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content is different from the frequency content of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals, or scales. Each scale represents a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR are used to indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low-SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required. The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking, bearing estimation, regional characterization, and event detection. Results are shown for a set of low-SNR events as well as 92 regional and teleseismic events in the Middle East.
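
    A minimal sketch of the scale-selection idea described above, using PyWavelets: compute rms power per wavelet scale in a pre-event window and in the estimated arrival window, zero out scales whose SNR falls below a threshold, and reconstruct. The wavelet, decomposition level, windows, and threshold are illustrative choices, not the values used in the study.

      import numpy as np
      import pywt

      def snr_filtered(trace, noise_win, signal_win, wavelet="db4", level=5, snr_min=2.0):
          # Keep only wavelet scales whose rms power in the arrival window exceeds
          # snr_min times the rms power in the pre-event window, then reconstruct.
          coeffs = pywt.swt(trace, wavelet, level=level)   # stationary transform keeps time alignment
          kept = []
          for cA, cD in coeffs:
              noise_rms = np.sqrt(np.mean(cD[noise_win] ** 2))
              signal_rms = np.sqrt(np.mean(cD[signal_win] ** 2))
              keep = signal_rms / (noise_rms + 1e-12) >= snr_min
              kept.append((cA, cD if keep else np.zeros_like(cD)))
          return pywt.iswt(kept, wavelet)

      # Toy usage: 2048-sample trace, pre-event noise in the first quarter,
      # arrival assumed in samples 1024-1280 (both windows are illustrative).
      rng = np.random.default_rng(0)
      trace = rng.normal(0.0, 1.0, 2048)
      trace[1024:1280] += 3.0 * np.sin(2 * np.pi * 0.05 * np.arange(256))
      clean = snr_filtered(trace, noise_win=slice(0, 512), signal_win=slice(1024, 1280))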

  11. Hybridizing Particle Filters and Population-based Metaheuristics for Dynamic Optimization Problems

    E-print Network

    Pantrigo Fernández, Juan José

    Many real-world optimization problems are dynamic, and many dynamic problems require the estimation of the system state. This work hybridizes particle filters and population-based metaheuristics to address dynamic optimization problems.

  12. Implicit Filtering for Constrained Optimization and Applications to Problems in the Natural Gas Pipeline

    E-print Network

    Kelley, C. T. "Tim"

    This report applies implicit filtering for constrained optimization (IFFCO) to optimization problems in the natural gas pipeline industry, including the treatment of hidden constraints.

  13. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    SciTech Connect

    Sun, W Y

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  14. Optimized visualization of phase objects with semiderivative real filters

    NASA Astrophysics Data System (ADS)

    Sagan, Arkadiusz; Kowalczyk, Marek; Szoplik, Tomasz

    2004-01-01

    There is a need for a frequency-domain real filter that visualizes pure-phase objects with thickness either considerably smaller or much bigger than 2π rad and gives output image irradiance proportional to the first derivative of the object phase function for a wide range of phase gradients. We propose to construct a nonlinearly graded filter as a combination of the Foucault and square-root filters. The square-root filter in the frequency plane corresponds to the semiderivative in object space. Between the two half-planes with binary values of amplitude transmittance, a segment with nonlinearly varying transmittance is located. Within this intermediate sector the amplitude transmittance is given by a biased antisymmetrical function whose positive and negative frequency branches are proportional to the square root of the spatial frequencies contained therein. Our simulations show that the modified square-root filter visualizes both thin and thick pure-phase objects with phase gradients from 0.6π up to more than 60π rad/mm.

  15. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

    An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (~32 cm^-1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm^-1. Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

  16. Optimization of soft-morphological filters by genetic algorithms

    NASA Astrophysics Data System (ADS)

    Huttunen, Heikki; Kuosmanen, Pauli; Koskinen, Lasse; Astola, Jaakko T.

    1994-06-01

    In this work we present a new approach to robust image modeling. the proposed method is based on M-estimation algorithms. However, unlike in other M-estimator based image processing algorithms, the new algorithm takes into consideration spatial relations between picture elements. The contribution of the sample to the model depends not only on the current residual of that sample, but also on the neighboring residuals. In order to test the proposed algorithm we apply it to an image filtering problem, where images are modeled as piecewise polynomials. We show that the filter based on our algorithm has excellent detail preserving properties while suppressing additive Gaussian and impulsive noise very efficiently.

  17. On the application of optimal wavelet filter banks for ECG signal classification

    NASA Astrophysics Data System (ADS)

    Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.

    2014-03-01

    This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal consists of achieving a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as training inputs to a neural network classifier.

  18. Optimal filtering of solar images using soft morphological processing techniques

    NASA Astrophysics Data System (ADS)

    Marshall, S.; Fletcher, L.; Hough, K.

    2006-10-01

    Context: CCD images obtained in space-based astronomy and solar physics are frequently spoiled by galactic and solar cosmic rays, and by particles in the Earth's radiation belt, which produce an overlaid, often saturated, speckle. Aims: We describe the development and application of a new image-processing technique for the removal of this noise source, and apply it to SOHO/LASCO coronagraph images. Methods: We employ soft morphological filters, a branch of non-linear image processing originating from the field of mathematical morphology, which are particularly effective for noise removal. Results: The soft morphological filters result in a significant improvement in image quality, and perform significantly better than other currently existing methods based on frame comparison, thresholding, or simple morphologies. Conclusions: This is a promising and adaptable technique that should be extendable to other space-based solar and astronomy datasets.

  19. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meets the ideal frequency response characteristics as closely as possible. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
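
    As a rough illustration of the weighted least-squares fitness such a search would evaluate, the sketch below (Python, assuming numpy and scipy) scores a candidate coefficient vector against an ideal fractional-delay response. The coefficient packing, target delay, frequency weighting and stability penalty are illustrative assumptions, not the paper's exact formulation, and the cuckoo search loop itself is omitted.

      import numpy as np
      from scipy.signal import freqz

      def fd_iir_fitness(coeffs, order, delay=4.3, n_grid=256):
          """Weighted least-squares error between an IIR filter's response and an
          ideal fractional-delay response exp(-j*w*delay). `coeffs` packs the
          numerator taps followed by the denominator taps (a0 fixed to 1)."""
          b = coeffs[:order + 1]
          a = np.concatenate(([1.0], coeffs[order + 1:]))
          w, h = freqz(b, a, worN=n_grid)              # response on [0, pi)
          h_ideal = np.exp(-1j * w * delay)            # ideal all-pass fractional delay
          weights = np.where(w <= 0.9 * np.pi, 1.0, 0.1)   # de-emphasise the band edge
          err = np.sum(weights * np.abs(h - h_ideal) ** 2)
          # Penalise unstable candidates so the search steers away from them.
          if np.any(np.abs(np.roots(a)) >= 1.0):
              return 1e6 + err
          return err

      # Example: evaluate one random 5th-order candidate, as a search algorithm would.
      rng = np.random.default_rng(0)
      cand = rng.uniform(-0.5, 0.5, size=2 * 5 + 1)
      print(fd_iir_fitness(cand, order=5))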

  20. Optimal quantum control of Bose-Einstein condensates in magnetic microtraps: Consideration of filter effects

    E-print Network

    Georg Jäger; Ulrich Hohenester

    2013-09-07

    We theoretically investigate protocols based on optimal control theory (OCT) for manipulating Bose-Einstein condensates in magnetic microtraps, using the framework of the Gross-Pitaevskii equation. In our approach we explicitly account for filter functions that distort the computed optimal control, a situation inherent to many experimental OCT implementations. We apply our scheme to the shakeup process of a condensate from the ground to the first excited state, following a recent experimental and theoretical study, and demonstrate that the fidelity of OCT protocols is not significantly deteriorated by typical filters.

  1. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.

  2. Insights into HER2 signaling from step-by-step optimization of anti-HER2 antibodies.

    PubMed

    Fu, Wenyan; Wang, Yuxiao; Zhang, Yunshan; Xiong, Lijuan; Takeda, Hiroaki; Ding, Li; Xu, Qunfang; He, Lidong; Tan, Wenlong; Bethune, Augus N; Zhou, Lijun

    2014-01-01

    HER2, a ligand-free tyrosine kinase receptor of the HER family, is frequently overexpressed in breast cancer. The anti-HER2 antibody trastuzumab has shown significant clinical benefits in metastatic breast cancer; however, resistance to trastuzumab is common. The development of monoclonal antibodies that have complementary mechanisms of action results in a more comprehensive blockade of ErbB2 signaling, especially HER2/HER3 signaling. Use of such antibodies may have clinical benefits if these antibodies can become widely accepted. Here, we describe a novel anti-HER2 antibody, hHERmAb-F0178C1, which was isolated from a screen of a phage display library. A step-by-step optimization method was employed to maximize the inhibitory effect of this anti-HER2 antibody. Crystallographic analysis was used to determine the three-dimensional structure to 3.5 Å resolution, confirming that the epitope of this antibody is in domain III of HER2. Moreover, this novel anti-HER2 antibody exhibits superior efficacy in blocking HER2/HER3 heterodimerization and signaling, and its use in combination with pertuzumab has a synergistic effect. Characterization of this antibody revealed the important role of a ligand binding site within domain III of HER2. The results of this study clearly indicate the unique potential of hHERmAb-F0178C1, and its complementary inhibition effect on HER2/HER3 signaling warrants its consideration as a promising clinical treatment. PMID:24838231

  3. Agent-mediated Multi-step Optimization for Resource Allocation in Distributed Sensor Networks

    E-print Network

    Massachusetts at Amherst, University of

    Describes agent-mediated multi-step optimization for resource allocation in distributed sensor networks, applied to sensor networks in Oklahoma to observe severe weather events. Authors: Bo An {ban,lesser,westy}@cs.umass.edu; Michael Zink, Dept. of Electrical and Computer Engineering, University of Massachusetts at Amherst. Categories and Subject Descriptors: I.2.11 [Distributed ...].

  4. Optimized one-step preparation of a bioactive natural product, guaiazulene-2,9-dione

    NASA Astrophysics Data System (ADS)

    Cheng, Canling; Li, Pinglin; Wang, Wei; Shi, Xuefeng; Zhang, Gang; Zhu, Hongyan; Wu, Rongcui; Tang, Xuli; Li, Guoqiang

    2014-12-01

    We previously isolated a natural product, guaiazulene-2,9-dione, showing strong antibacterial activity against Vibrio anguillarum, from a gorgonian Muriceides collaris collected in the South China Sea. In this experiment, guaiazulene-2,9-dione was quantitatively synthesized with an optimized one-step bromine oxidation method using guaiazulene as the raw material. The key reaction conditions, including reaction time and temperature, drop rate of bromine, concentration of the aqueous THF solution, the respective molar ratios of guaiazulene to bromine and to acetic acid, and the concentration of guaiazulene in the aqueous THF solution, were investigated individually at five levels each for optimization. Combined with verification tests showing the absolute yield of each optimization step, the final optimal condition was determined as follows: when a solution of 0.025 mmol mL-1 guaiazulene in 80% aqueous THF was treated with four volumes of bromine at a drop rate of 0.1 mL min-1 and four volumes of acetic acid at -5°C for three hours, the yield of guaiazulene-2,9-dione was 23.72%. This is the first report of an optimized one-step synthesis providing a convenient method for the large-scale preparation of guaiazulene-2,9-dione.

  5. Optimal filtering for spike sorting of multi-site electrode recordings.

    PubMed

    Vollgraf, Roland; Munk, Matthias; Obermayer, Klaus

    2005-03-01

    We derive an optimal linear filter, to reduce the distortions of the peak amplitudes of action potentials in extracellular multitrode recordings, which are due to background activity and overlapping spikes. This filter is being learned very efficiently from the raw recordings in an unsupervised manner and responds to the average waveform with an impulse of minimal width. The average waveform does not have to be known in advance, but is learned together with the optimal filter. The peak amplitude of a filtered waveform is a more reliable estimate for the amplitude of an action potential than the peak of the biphasic waveform and can improve the accuracy of the event detection and clustering procedures. We demonstrate a spike-sorting application, in which events are detected using the Mahalanobis distance in the N-dimensional space of filtered recordings as a distance measure, and the event amplitudes of the filtered recordings are clustered to assign events to individual units. This method is fast and robust, and we show its performance by applying it to real tetrode recordings of spontaneous activity in the visual cortex of an anaesthetized cat and to realistic artificial data derived therefrom. PMID:16350435

  6. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method.

    PubMed

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-12-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8-2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75-150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected, detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  7. Design and optimization of stepped austempered ductile iron using characterization techniques

    SciTech Connect

    Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.

    2013-09-15

    Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. The austenitization and conventional austempering times were selected when the dilatometry curves showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and the high-carbon stabilized austenite formed during the treatments. Finally, it was found that carbide precipitation was absent during stepped austempering, in contrast to conventional austempering, in which evidence of carbides was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during the stepped treatment.

  8. Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin

    2012-06-01

    This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. A comparison of the Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product is provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results of a genetic algorithm based optimization of pipeline registers and non-output fundamental coefficients are shown. Benchmark FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.

  9. Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin

    E-print Network

    Claridge, Ela

    Spectral filters are optimized to reduce the error associated with histological parameters characterizing normal skin tissue. These parameters can be recovered from digital images of the skin using a physics-based model of skin coloration. The relationship ...

  10. Project # 3 --332: 406 Control System Design Optimal Control and Kalman Filtering for a Passenger Car

    E-print Network

    Gajic, Zoran

    Project # 3 -- 332:406 Control System Design: Optimal Control and Kalman Filtering for a Passenger Car, due Thursday, April 1, 2004. A mathematical model of a passenger car is given by (Salman ...). Excerpt: determine the closed-loop system eigenvalues; (c) design an observer for this car ...

  11. Project # 3 ---332: 406 Control System Design Optimal Control and Kalman Filtering for a Passenger Car

    E-print Network

    Gajic, Zoran

    Project # 3 --- 332:406 Control System Design: Optimal Control and Kalman Filtering for a Passenger Car, due Thursday, April 1, 2004. A mathematical model of a passenger car is given by (Salman ...). Excerpt: design an observer for this car with the observer poles (eigenvalues) being much faster than the system ...

  12. DMT Bit Rate Maximization With Optimal Time Domain Equalizer Filter Bank Architecture

    E-print Network

    Evans, Brian L.

    Discrete multi-tone (DMT) is a multicarrier modulation method in which the available bandwidth of a communication channel is divided to create nearly orthogonal subchannels. DMT has been standardized in [1, 2, 3, 4]. A similar multi-carrier ...

  13. Evaluation of Optimized 3-step Global Reaction Mechanism for CFD Simulations on Sandia Flame D

    NASA Astrophysics Data System (ADS)

    Abou-Taouk, Abdallah; Eriksson, Lars-Erik

    2011-09-01

    The aim of this paper is to evaluate a new optimized 3-step global reaction mechanism (opt) [1] for a methane-air mixture for industrial use. The global reaction mechanism consists of three reactions corresponding to the fuel oxidation into CO and H2O, and the CO-CO2 equilibrium reaction. Correction functions that depend on the local equivalence ratio are introduced into the global mechanism. The optimized 3-step global reaction scheme is incorporated into the computational fluid dynamics (CFD) analysis of a partially premixed piloted methane jet flame. The burner consists of a central nozzle (for premixed fuel/air), surrounded by a premixed pilot flame and an annular co-flow stream. Both steady-state RANS (Reynolds-Averaged Navier-Stokes) and time-averaged hybrid URANS/LES (Unsteady RANS/Large Eddy Simulation) results have been computed and compared with experimental results obtained from the Sydney burner at Sandia National Laboratories, Sandia Flame D [2]. The CFD results with the optimized 3-step global reaction mechanism show reasonable agreement with the experimental data in terms of emission, velocity and temperature profiles, while the 2-step Westbrook-Dryer (WD2) [3] global reaction mechanism shows poor agreement with the emission profiles.

  14. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.

  15. The optimization of continually-operating rotary filters for vacuum and hyperbaric filtration

    SciTech Connect

    Nicolaou, I.

    1995-12-31

    This paper demonstrates an approach for the optimization of such filters. It not only incorporates the proven and simple, recently developed equations for calculating the solids throughput, residual moisture and gas throughput, but also introduces equations for calculating the specific product costs for both vacuum and hyperbaric filtration, taking into account the influences exerted by the filter compressor and possible thermal drying. An easy-to-use method is offered that uses a special graphic representation, in which the target parameters are plotted as functions of the pressure difference, so that the optimal process pressure difference can be established independent of the specific cake permeability or product fineness. Finally, the optimization method is demonstrated with an example for a concrete application.

  16. Global localization of 3D anatomical structures by pre-filtered Hough Forests and discrete optimization

    PubMed Central

    Donner, René; Menze, Bjoern H.; Bischof, Horst; Langs, Georg

    2013-01-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates’ weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450
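
    The paper solves a general MRF over the learnt landmark topology; as a simplified illustration of that final selection step, the sketch below (Python/numpy) picks one candidate per landmark on a chain topology by exact dynamic programming, with candidate weights as unary costs and squared deviation from an expected inter-landmark distance as the pairwise term. The chain topology, costs and distances are hypothetical, not the paper's model.

      import numpy as np

      def select_candidates(cand_pos, cand_cost, expected_gap, gap_weight=1.0):
          """Pick one candidate per landmark along a chain, minimizing
          unary cost + gap_weight * (distance deviation)^2 between neighbours.
          cand_pos / cand_cost: lists of 1-D arrays, one per landmark."""
          n = len(cand_pos)
          best = [cand_cost[0].astype(float)]
          back = []
          for i in range(1, n):
              pair = gap_weight * (cand_pos[i][None, :] - cand_pos[i - 1][:, None]
                                   - expected_gap[i - 1]) ** 2
              total = best[-1][:, None] + pair          # (prev candidates, current candidates)
              back.append(np.argmin(total, axis=0))
              best.append(cand_cost[i] + np.min(total, axis=0))
          idx = [int(np.argmin(best[-1]))]               # backtrack the optimal assignment
          for b in reversed(back):
              idx.append(int(b[idx[-1]]))
          return idx[::-1]

      # Three landmarks with a few 1-D candidate positions each, expected gaps of 10.
      pos = [np.array([0.0, 5.0]), np.array([9.0, 12.0, 30.0]), np.array([19.0, 40.0])]
      cost = [np.array([0.1, 0.0]), np.array([0.0, 0.2, 0.0]), np.array([0.1, 0.0])]
      print(select_candidates(pos, cost, expected_gap=[10.0, 10.0]))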

  17. Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design

    NASA Astrophysics Data System (ADS)

    Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa

    2013-09-01

    This article presents a control strategy for a three-phase grid-interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics injected into the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. The startup transients can also be controlled by the proposed control strategy; in addition, the optimal LCL filter has lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the total harmonic distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment and the results are analyzed.

  18. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    SciTech Connect

    Singer, M A; Wang, S L; Diachin, D P

    2009-12-03

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  19. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-loss transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
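
    The published optimization is specific to the SINS/GPS error-state model; as a generic illustration of where sparsity enters, the sketch below (Python, assuming numpy and scipy) runs one Kalman predict/update cycle with a sparse state-transition matrix so that the covariance propagation never touches its zero blocks. The toy model and dimensions are invented.

      import numpy as np
      from scipy import sparse

      def kf_step(x, P, z, F, Q, H, R):
          """One predict/update cycle of a linear Kalman filter. F is a scipy.sparse
          matrix: its products skip the zero blocks, which is where the savings
          come from when the transition matrix is mostly sparse."""
          # --- time update ---
          x = F @ x
          P = F @ (F @ P).T + Q                   # F P F^T, using the symmetry of P
          # --- measurement update (H is small and dense here) ---
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # Toy 6-state example with a block-sparse transition matrix.
      n = 6
      F = sparse.block_diag([np.array([[1.0, 1.0], [0.0, 1.0]])] * 3).tocsr()
      Q, R = 0.01 * np.eye(n), 0.1 * np.eye(2)
      H = np.zeros((2, n)); H[0, 0] = H[1, 2] = 1.0
      x, P = np.zeros(n), np.eye(n)
      x, P = kf_step(x, P, np.array([0.3, -0.1]), F, Q, H, R)
      print(np.round(x, 3))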

  20. Optimization of single-step tapering amplitude and energy detuning for high-gain FELs

    NASA Astrophysics Data System (ADS)

    Li, He-Ting; Jia, Qi-Ka

    2015-01-01

    We put forward a method to optimize the single-step tapering amplitude of the undulator strength and the initial energy detuning of the electron beam so as to maximize the saturation power of high-gain free-electron lasers (FELs), based on the physics of the longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the Linac Coherent Light Source and the TESLA Test Facility.

  1. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results obtained with an AFM. In this study, a geometrical model of scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated for tips with different radii were analyzed by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of the simulated profiles, it was found that the low-frequency harmonics were the most susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548
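
    A minimal sketch of the forward simulation the abstract describes: a hemispherical tip scanned over an ideal step (grey-scale dilation of the surface by the tip shape), followed by an FFT of the simulated profile so the low-order harmonic amplitudes can be compared across tip radii. Step height, sampling and radii are made-up values (Python/numpy).

      import numpy as np

      def afm_trace(surface, dx, radius):
          """Profile recorded by a hemispherical tip of given radius scanned over
          `surface` (heights on a uniform grid with spacing dx)."""
          m = int(radius / dx)
          offsets = np.arange(-m, m + 1) * dx
          tip = np.sqrt(np.maximum(radius**2 - offsets**2, 0.0)) - radius   # apex at 0
          n = len(surface)
          trace = np.full(n, -np.inf)
          for k in range(len(offsets)):
              shift = k - m
              lo, hi = max(0, -shift), min(n, n - shift)
              trace[lo:hi] = np.maximum(trace[lo:hi],
                                        surface[lo + shift:hi + shift] + tip[k])
          return trace

      # Ideal 20 nm step sampled at 1 nm; compare low-frequency FFT harmonics for two radii.
      dx, n = 1.0, 512
      x = np.arange(n) * dx
      step = np.where(x > n * dx / 2, 20.0, 0.0)
      for r in (5.0, 20.0):
          spec = np.abs(np.fft.rfft(afm_trace(step, dx, r)))
          print(r, np.round(spec[1:4], 1))          # low-order harmonic amplitudes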

  2. Novel tools for stepping source brachytherapy treatment planning: Enhanced geometrical optimization and interactive inverse planning

    SciTech Connect

    Dinkla, Anna M. Laarse, Rob van der; Koedooder, Kees; Petra Kok, H.; Wieringen, Niek van; Pieters, Bradley R.; Bel, Arjan

    2015-01-15

    Purpose: Dose optimization for stepping source brachytherapy can nowadays be performed using automated inverse algorithms. Although automated inverse algorithms are much quicker than graphical optimization, an experienced treatment planner is required for both methods. With automated inverse algorithms, the procedure to achieve the desired dose distribution is often based on trial and error. Methods: A new approach for stepping source prostate brachytherapy treatment planning was developed as a quick and user-friendly alternative. This approach consists of the combined use of two novel tools: enhanced geometrical optimization (EGO) and interactive inverse planning (IIP). EGO is an extended version of the common geometrical optimization method and is applied to create a dose distribution as homogeneous as possible. With the second tool, IIP, this dose distribution is tailored to a specific patient anatomy by interactively changing the highest and lowest dose on the contours. Results: The combined use of EGO–IIP was evaluated on 24 prostate cancer patients, by having an inexperienced user create treatment plans compliant with clinical dose objectives. This user was able to create dose plans for the 24 patients in an average time of 4.4 min/patient. An experienced treatment planner without extensive training in EGO–IIP also created 24 plans. The resulting dose-volume histogram parameters were comparable to the clinical plans and showed high conformance to clinical standards. Conclusions: Even for an inexperienced user, treatment planning with EGO–IIP for stepping source prostate brachytherapy is feasible as an alternative to current optimization algorithms, offering speed, simplicity for the user, and local control of the dose levels.

  3. Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario

    NASA Astrophysics Data System (ADS)

    Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.

    2009-12-01

    Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.

  4. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
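
    A small sketch of the thread-decomposition idea for the decimation case (Python, assuming numpy and scipy): each retained output sample is computed as its own finite convolution, so only one of every M full-rate outputs is ever formed, and the result matches filter-then-downsample. The taps and decimation factor are hypothetical, and the FPGA mapping of threads is of course not represented.

      import numpy as np
      from scipy.signal import lfilter

      def decimate_by_threads(x, taps, M):
          """Decimating FIR filter computed output-by-output: each kept output
          sample is one independent dot product (a 'thread'), so only 1/M of the
          full-rate outputs is ever computed."""
          taps_rev = np.asarray(taps)[::-1]          # reversed for a direct dot product
          L = len(taps_rev)
          xp = np.concatenate([np.zeros(L - 1), x])  # zero history for startup
          out_idx = np.arange(0, len(x), M)          # positions of the kept outputs
          return np.array([xp[i:i + L] @ taps_rev for i in out_idx])

      rng = np.random.default_rng(1)
      x = rng.standard_normal(64)
      taps = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
      M = 4
      full_rate = lfilter(taps, 1.0, x)              # filter everything, then discard
      assert np.allclose(full_rate[::M], decimate_by_threads(x, taps, M))
      print(decimate_by_threads(x, taps, M)[:3])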

  5. On optimal filtering of GPS dual frequency observations without using orbit information

    NASA Technical Reports Server (NTRS)

    Eueler, Hans-Juergen; Goad, Clyde C.

    1991-01-01

    The concept of optimal filtering of observations collected with a dual-frequency GPS P-code receiver is investigated in comparison to an approach for C/A-code units. The filter presented here uses only data gathered between one receiver and one satellite. The estimated state vector consists of a one-way pseudorange, the ionospheric influence, and ambiguity biases. Neither orbit information nor station information is required. The independently estimated biases are used to form double differences where, in the case of a P-code receiver, the wide-lane integer ambiguities are usually recovered successfully except when elevation angles are very small. An elevation-dependent uncertainty for pseudorange measurements was discovered for different receiver types. An exponential model for the pseudorange uncertainty was used with success in the filter gain computations.

  6. Implicit application of polynomial filters in a k-step Arnoldi method

    NASA Technical Reports Server (NTRS)

    Sorensen, D. C.

    1990-01-01

    The Arnoldi process is a well known technique for approximating a few eigenvalues and corresponding eigenvectors of a general square matrix. Numerical difficulties such as loss of orthogonality and assessment of the numerical quality of the approximations as well as a potential for unbounded growth in storage have limited the applicability of the method. These issues are addressed by fixing the number of steps in the Arnoldi process at a prescribed value k and then treating the residual vector as a function of the initial Arnoldi vector. This starting vector is then updated through an iterative scheme that is designed to force convergence of the residual to zero. The iterative scheme is shown to be a truncation of the standard implicitly shifted QR-iteration for dense problems and it avoids the need to explicitly restart the Arnoldi sequence. The main emphasis of this paper is on the derivation and analysis of this scheme. However, there are obvious ways to exploit parallelism through the matrix-vector operations that comprise the majority of the work in the algorithm. Preliminary computational results are given for a few problems on some parallel and vector computers.
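
    For reference, the sketch below (Python/numpy) builds the basic k-step Arnoldi factorization A V_k = V_k H_k + f_k e_k^T that the restarting scheme operates on; the implicit QR-based restart itself is not implemented, and no breakdown check is included.

      import numpy as np

      def arnoldi(A, v0, k):
          """k-step Arnoldi factorization: returns V (n x k, orthonormal columns),
          H (k x k, upper Hessenberg) and the residual f with A V = V H + f e_k^T."""
          n = len(v0)
          V = np.zeros((n, k))
          H = np.zeros((k, k))
          V[:, 0] = v0 / np.linalg.norm(v0)
          f = None
          for j in range(k):
              w = A @ V[:, j]
              h = V[:, :j + 1].T @ w                 # orthogonalise against previous vectors
              w = w - V[:, :j + 1] @ h
              H[:j + 1, j] = h
              if j + 1 < k:
                  beta = np.linalg.norm(w)
                  H[j + 1, j] = beta
                  V[:, j + 1] = w / beta
              else:
                  f = w                              # residual of the k-step factorization
          return V, H, f

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 50))
      V, H, f = arnoldi(A, rng.standard_normal(50), k=10)
      ek = np.zeros(10); ek[-1] = 1.0
      print(np.allclose(A @ V, V @ H + np.outer(f, ek)))   # True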

  7. Treatment of domestic sewage at low temperature in a two-anaerobic step system followed by a trickling filter.

    PubMed

    Elmitwalli, T A; van Lier, J; Zeeman, G; Lettinga, G

    2003-01-01

    The treatment of domestic sewage at low temperature was studied in a two-anaerobic-step system followed by an aerobic step, consisting of an anaerobic filter (AF) + an anaerobic hybrid (AH) + a polyurethane-foam trickling filter (PTF). The AF+AH system was operated at a hydraulic retention time (HRT) of 3+6 h at a controlled temperature of 13 degrees C, while the PTF was operated without wastewater recirculation at different hydraulic loading rates (HLR) of 41, 15.4 and 2.6 m3/m2/d at ambient temperature (ca. 15-18 degrees C). The AF reactor removed the major part of the total and suspended COD, viz. 46 and 58% respectively. The AH reactor with granular sludge was efficient in the removal and conversion of the anaerobically biodegradable COD. The AF+AH system removed 63% of total COD and converted 46% of the influent total COD to methane. At a HLR of 41 m3/m2/d, the COD removal was limited in the PTF, while at HLRs of 15.4 and 2.6 m3/m2/d, a high total COD removal of 54-57% was achieved without a significant difference between the two HLRs. The PTF was mainly efficient in the removal of particles (suspended and colloidal COD removal were 75-90% and 75-83% respectively), which were not removed in the two anaerobic steps. The overall total COD removal in the AF+AH+PTF system was 85%. Decreasing the HLR from 15.4 to 2.6 m3/m2/d only increased the nitrification efficiency in the PTF from 22% to 60%. Also, at HLRs of 15.4 and 2.6 m3/m2/d, the PTF showed a similar removal of E. coli of about 2 log. Therefore, the effluent of the AF+AH+PTF system can be utilised for restricted irrigation in order to close water and nutrient cycles. Moreover, such a system represents a high-load and low-cost technology, which is a suitable solution for developing countries. PMID:14753537

  8. Optimized SU-8 UV-lithographical process for a Ka-band filter fabrication

    NASA Astrophysics Data System (ADS)

    Jin, Peng; Jiang, Kyle; Tan, Jiubin; Lancaster, M. J.

    2005-04-01

    The rapid expansion of millimeter-wave communication has brought increasing attention to Ka-band filter fabrication. Described in this paper is a high-quality UV-lithographic process for making high-aspect-ratio parts of a coaxial Ka-band dual-mode filter using an ultra-thick SU-8 photoresist layer, with potential application in LMDS systems. Because of the strict requirements on the perpendicular geometry of the filter parts, the microfabrication work concentrated on modifying the SU-8 UV-lithographic process to improve the verticality of the sidewalls and the aspect ratio. Based on a study of the photoactive properties of ultra-thick SU-8 layers, an optimized prebake time was found that minimizes UV absorption by the SU-8. The optimization principle was tested in a series of UV-lithography experiments with different prebake times and proved effective. An optimized SU-8 UV-lithographic process has been developed for the fabrication of thick-layer filter structures. During test fabrication, microstructures with aspect ratios as high as 40 were produced in 1000 μm ultra-thick SU-8 layers using standard UV-lithography equipment. The sidewall angles are controlled between 85 and 90 degrees. The high-quality SU-8 structures will then be used as positive moulds for producing copper structures by electroforming. The microfabrication process presented in this paper suits the proposed filter well. It also shows good potential for volume production of high-quality RF devices.

  9. Optimized particle-mesh Ewald/multiple-time step integration for molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Batcho, Paul F.; Case, David A.; Schlick, Tamar

    2001-09-01

    We develop an efficient multiple time step (MTS) force splitting scheme for biological applications in the AMBER program in the context of the particle-mesh Ewald (PME) algorithm. Our method applies a symmetric Trotter factorization of the Liouville operator based on the position-Verlet scheme to Newtonian and Langevin dynamics. Following a brief review of the MTS and PME algorithms, we discuss performance speedup and the force balancing involved to maximize accuracy, maintain long-time stability, and accelerate computational times. Compared to prior MTS efforts in the context of the AMBER program, advances are possible by optimizing PME parameters for MTS applications and by using the position-Verlet, rather than velocity-Verlet, scheme for the inner loop. Moreover, ideas from the Langevin/MTS algorithm LN are applied to Newtonian formulations here. The algorithm's performance is optimized and tested on water, solvated DNA, and solvated protein systems. We find CPU speedup ratios of over 3 for Newtonian formulations when compared to a 1 fs single-step Verlet algorithm using outer time steps of 6 fs in a three-class splitting scheme; accurate conservation of energies is demonstrated over simulations of length several hundred ps. With modest Langevin forces, we obtain stable trajectories for outer time steps up to 12 fs and corresponding speedup ratios approaching 5. We end by suggesting that modified Ewald formulations, using tailored alternatives to the Gaussian screening functions for the Coulombic terms, may allow larger time steps and thus further speedups for both Newtonian and Langevin protocols; such developments are reported separately.
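
    A minimal sketch of a two-class impulse (r-RESPA-style) splitting with a position-Verlet inner loop, applied to a toy one-dimensional particle (Python/numpy). The force split, step sizes and system are illustrative assumptions; the AMBER/PME force classes and the Langevin variants discussed in the abstract are not represented.

      import numpy as np

      def mts_position_verlet(x, v, m, f_fast, f_slow, dt_outer, n_inner, n_outer):
          """Two-class impulse multiple-time-step integrator: the slow force is applied
          as half-kicks at the outer step boundaries, and the fast force is integrated
          with a position-Verlet inner loop of step dt_outer / n_inner."""
          dt_in = dt_outer / n_inner
          traj = []
          for _ in range(n_outer):
              v += 0.5 * dt_outer * f_slow(x) / m          # slow half-kick
              for _ in range(n_inner):                     # fast part, position-Verlet
                  x += 0.5 * dt_in * v
                  v += dt_in * f_fast(x) / m
                  x += 0.5 * dt_in * v
              v += 0.5 * dt_outer * f_slow(x) / m          # slow half-kick
              traj.append((x, v))
          return np.array(traj)

      # Toy system: stiff bond (fast) plus a weak background spring (slow).
      k_fast, k_slow, m = 100.0, 1.0, 1.0
      traj = mts_position_verlet(1.0, 0.0, m,
                                 f_fast=lambda x: -k_fast * x,
                                 f_slow=lambda x: -k_slow * x,
                                 dt_outer=0.05, n_inner=10, n_outer=200)
      energy = 0.5 * m * traj[:, 1]**2 + 0.5 * (k_fast + k_slow) * traj[:, 0]**2
      print(energy.min().round(4), energy.max().round(4))  # should stay nearly constant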

  10. A Two-Step Double Filter Method to Extract Open Water Surfaces from Landsat ETM+ Imagery

    NASA Astrophysics Data System (ADS)

    Wang, Haijing; Kinzelbach, Wolfgang

    2010-05-01

    In arid and semi-arid areas, lakes and temporal ponds play a significant role in agriculture and the livelihood of local communities as well as in ecology. Monitoring the changes of these open water bodies allows conclusions to be drawn on water use as well as climatic impacts and can assist in the formulation of a sustainable resource management strategy. The simultaneous monitoring of larger numbers of water bodies with respect to their stage and area is feasible with the aid of remote sensing. Here the monitoring of lake surface areas is discussed. Landsat TM and ETM+ images provide a medium resolution of 30 m and offer an easily available data source to monitor the long-term changes of water surfaces in arid and semi-arid regions. In the past, great effort was put into developing simple indices to extract water surfaces from satellite images. However, there is a common problem in achieving accurate results with these indices: how to select a threshold value for water pixels without introducing excessive subjective judgment. The threshold value would also have to vary with location, land features and seasons, allowing for inherent uncertainty. A new method was developed using Landsat ETM+ imagery (30 meter resolution) to extract open water surfaces. This method uses the Normalized Difference Vegetation Index (NDVI) as the basis for an objective way of selecting threshold values of the Modified Normalized Difference Water Index (MNDWI) and Stress Degree Days (SDD), which were used as a combined filter to extract open water surfaces. We chose two study areas to verify the method. One study area is in Northeast China, where bigger lakes, smaller muddy ponds and wetlands are interspersed with agricultural land and salt crusts. The other one is the Kafue Flats in Zambia, where seasonal floods of the Zambezi River create seasonal wetlands in addition to the more permanent water ponds and river channels. For both sites, DigitalGlobe images of 0.5 meter resolution are available, which were taken within a few days of the Landsat passing dates and which serve here as ground truth information. On their basis the new method was compared to other available methods for extracting water pixels. Compared to the other methods, the new method can extract water surfaces not only from deep lakes/reservoirs and wetlands but also from small mud ponds in alkali flats and irrigation ponds in the fields. For the big and deep lakes, the extracted boundary of the lakes fits the observed boundary accurately. Five test sites in the study area in Northeast China with only shallow water surfaces were chosen and tested. The extracted water surfaces were compared with each site's DigitalGlobe maps to determine the accuracy of the method. The comparison shows that the method could extract all completely wet pixels (water area covering 100% of the pixel area) in all test sites. For partially wet pixels (50-100% of pixel area), the model can detect 91% of all pixels. No dry pixels were mistaken by the model as water pixels. Keywords: Remote sensing, Landsat ETM+ imagery, water surface, NDVI, MNDWI, SDD
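
    To make the two-step idea concrete, the sketch below (Python/numpy) combines an NDVI screen with an MNDWI confirmation on synthetic band values. The record selects the MNDWI and SDD thresholds objectively from NDVI; here both thresholds are fixed placeholders, and the SDD component, which needs thermal and air-temperature data, is omitted.

      import numpy as np

      def water_mask(green, red, nir, swir1, ndvi_water_max=0.0, mndwi_min=0.2):
          """Two-step filter sketch: (1) NDVI screens out vegetated/dry pixels,
          (2) MNDWI confirms open water. Thresholds are illustrative placeholders,
          not the values derived in the study."""
          eps = 1e-9
          ndvi = (nir - red) / (nir + red + eps)
          mndwi = (green - swir1) / (green + swir1 + eps)
          step1 = ndvi < ndvi_water_max          # water reflects little NIR
          step2 = mndwi > mndwi_min              # water is bright in green vs SWIR
          return step1 & step2

      # Tiny synthetic 2x2 scene: one wet pixel, three dry/vegetated ones.
      green = np.array([[0.10, 0.08], [0.09, 0.07]])
      red   = np.array([[0.08, 0.10], [0.12, 0.09]])
      nir   = np.array([[0.04, 0.35], [0.40, 0.30]])
      swir1 = np.array([[0.03, 0.25], [0.30, 0.22]])
      print(water_mask(green, red, nir, swir1))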

  11. Optimal design of a bank of spatio-temporal filters for EEG signal classification.

    PubMed

    Higashi, Hiroshi; Tanaka, Toshihisa

    2011-01-01

    The spatial weights for electrodes, called the common spatial pattern (CSP), are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed. To this end, several methods for designing the filter have been proposed. However, the existing methods cannot take into account multiple brain activities described by different frequency bands and different spatial patterns, such as activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design multiple filters and associated spatial weights which extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function which is a natural extension of CSP. Moreover, we show through a classification experiment that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery. PMID:22255731
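
    The proposal extends CSP to jointly designed FIR filters and spatial weights; the sketch below (Python, assuming numpy and scipy) shows only the standard CSP building block being extended: spatial filters obtained from a generalized eigendecomposition of the two class covariance matrices. Trial shapes and the toy data are invented.

      import numpy as np
      from scipy.linalg import eigh

      def csp_weights(trials_a, trials_b, n_pairs=2):
          """Common spatial patterns: spatial filters maximizing variance for one
          class while minimizing it for the other. trials_* have shape
          (n_trials, n_channels, n_samples)."""
          def mean_cov(trials):
              covs = [x @ x.T / x.shape[1] for x in trials]
              covs = [c / np.trace(c) for c in covs]      # per-trial normalization
              return np.mean(covs, axis=0)
          Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
          vals, vecs = eigh(Ca, Ca + Cb)                  # Ca w = lambda (Ca + Cb) w
          order = np.argsort(vals)
          keep = np.r_[order[:n_pairs], order[-n_pairs:]] # both extremes of the spectrum
          return vecs[:, keep].T                          # (2*n_pairs, n_channels)

      rng = np.random.default_rng(0)
      a = rng.standard_normal((30, 8, 200)) * np.r_[2, np.ones(7)][None, :, None]
      b = rng.standard_normal((30, 8, 200)) * np.r_[np.ones(7), 2][None, :, None]
      W = csp_weights(a, b)
      print(W.shape)        # (4, 8)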

  12. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have been studied extensively because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
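
    For context, the sketch below (Python/numpy) shows the BPOF special case and the PCE metric mentioned in the abstract: a filter binarized from the sign of the real part of the reference spectrum (one common convention) and the peak-to-correlation-energy of the resulting correlation plane. The BPAF generalization via the analyzer rotation angle is not implemented.

      import numpy as np

      def bpof(reference):
          """Binary phase-only filter: +1/-1 according to the sign of the real part
          of the reference spectrum (one common binarization convention)."""
          return np.where(np.real(np.fft.fft2(reference)) >= 0, 1.0, -1.0)

      def pce(scene, filt):
          """Correlate the scene with the filter and return peak-to-correlation energy."""
          corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(filt))
          c = np.abs(corr) ** 2
          return c.max() / c.sum()

      rng = np.random.default_rng(0)
      target = rng.random((32, 32))
      H = bpof(target)
      print(pce(target, H))                        # in-class scene
      print(pce(rng.random((32, 32)), H))          # out-of-class scene, typically lower PCE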

  13. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.

  14. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.

  15. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy.

    PubMed

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration. PMID:25950644

  16. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.

  17. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear-phase quadrature mirror filter (QMF) bank design is introduced, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints. For this work, recently proposed nature-inspired optimization techniques, namely cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO), are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband and stopband and of the transition-band error at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) with nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for the design problem. Performance of the proposed method is evaluated by the passband error, stopband error, transition-band error, peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best result in terms of peak reconstruction error and transition-band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
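
    A small sketch of the kind of frequency-domain objective the record describes, for a linear-phase prototype filter (Python, assuming numpy and scipy): L2 passband and stopband errors plus the deviation of the squared magnitude from 1/2 at the quadrature frequency. Band edges, grid and the example prototype are assumptions; the hybrid Lagrange-multiplier/nature-inspired optimizer itself is not shown.

      import numpy as np
      from scipy.signal import freqz

      def qmf_objective(h, wp=0.4 * np.pi, ws=0.6 * np.pi, n_grid=512):
          """Sum of L2 errors for a 2-channel QMF prototype: deviation from unity in
          the passband, residual energy in the stopband, and deviation of |H(pi/2)|^2
          from 1/2 at the quadrature frequency."""
          w, H = freqz(h, worN=n_grid)
          mag = np.abs(H)
          pass_err = np.mean((mag[w <= wp] - 1.0) ** 2)
          stop_err = np.mean(mag[w >= ws] ** 2)
          _, Hq = freqz(h, worN=[np.pi / 2])           # response at the quadrature frequency
          trans_err = (np.abs(Hq[0]) ** 2 - 0.5) ** 2
          return pass_err + stop_err + trans_err

      # Evaluate a crude half-band-like prototype; an optimizer would minimize this value.
      h0 = np.array([-0.01, 0.0, 0.07, 0.0, -0.12, 0.0, 0.6, 1.0,
                     0.6, 0.0, -0.12, 0.0, 0.07, 0.0, -0.01]) / 2
      print(qmf_objective(h0))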

  18. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance to combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually detected in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920

  19. A one-step screening process for optimal alignment of (soft) colloidal particles

    NASA Astrophysics Data System (ADS)

    Hiltl, Stephanie; Oltmanns, Jens; Böker, Alexander

    2012-11-01

    We developed nanostructured gradient wrinkle surfaces to establish a one-step screening process towards optimal assembly of soft and hard colloidal particles (microgel systems and silica particles). Thereby, we simplify studies on the influence of wrinkle dimensions (wavelength, amplitude) on particle properties and their alignment. In a combinatorial experiment, we optimize particle assembly regarding the ratio of particle diameter vs. wrinkle wavelength and packing density and point out differences between soft and hard particles. The preparation of wrinkle gradients in oxidized top layers on elastic poly(dimethylsiloxane) (PDMS) substrates is based on a controlled wrinkling approach. Partial shielding of the substrate during plasma oxidation is crucial to obtain two-dimensional gradients with amplitudes ranging from 7 to 230 nm and wavelengths between 250 and 900 nm.

  20. Optimizing performance of ceramic pot filters in Northern Ghana and modeling flow through paraboloid-shaped filters/

    E-print Network

    Miller, Travis Reed

    2010-01-01

    This work aimed to inform the design of ceramic pot filters to be manufactured by the organization Pure Home Water (PHW) in Northern Ghana, and to model the flow through an innovative paraboloid-shaped ceramic pot filter. ...

  1. Development of a reliable alkaline wastewater treatment process: optimization of the pre-treatment step.

    PubMed

    Prisciandaro, M; Mazziotti di Celso, G; Vegliò, F

    2005-12-01

    Alkaline waters produced by caprolactam plants polymerizing the fibres of nylon-6 are characterized by very high alkalinity, salinity and COD values, in addition to the presence of recalcitrant organic molecules. These characteristics make alkaline wastewaters very difficult to treat, so the development of a suitable sequence of steps for the depuration process is of great interest. The proposed general process consists of three main steps: first, a pre-treatment for the acidification of the polluted stream; second, a successive extraction of the bio-recalcitrant compound (noted as cyclohexanecarboxysulphonic acid, CECS); and finally a biological treatment. In particular, this paper deals with the pre-treatment step: it consists of an acidification process by means of sulphuric acid with the concomitant precipitation of black slurries in the presence of different substances, such as solvents, CaCl2, bentonite, and several flocculants and coagulants. The aim of this study is to establish an experimental procedure which could minimize fouling problems during sludge filtration. The use of additives like bentonite seems to give the best results, because it allows good COD reductions and a filterable precipitate, which avoids excessive fouling of the experimental apparatus. PMID:16293280

  2. Optimal hydrograph separation filter to evaluate transport routines of hydrological models

    NASA Astrophysics Data System (ADS)

    Rimmer, Alon; Hartmann, Andreas

    2014-05-01

    Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is an attempt to minimize the subjective influence that a user of the filter technique exerts on the results by the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances which were selected. The OHS parameters were then used to demonstrate how to develop benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than only using traditional measures like r2 or the Nash-Sutcliffe efficiency.

  3. Optimization of a preparative multimodal ion exchange step for purification of a potential malaria vaccine.

    PubMed

    Paul, Jessica; Jensen, Sonja; Dukart, Arthur; Cornelissen, Gesine

    2014-10-31

    In 2000 the implementation of quality by design (QbD) was introduced by the Food and Drug Administration (FDA) and described in the ICH Q8, Q9 and Q10 guidelines. Since that time, systematic optimization strategies for purification of biopharmaceuticals have gained a more important role in industrial process development. In this investigation, the optimization strategy was carried out by adopting design of experiments (DoE) in small scale experiments. A combination method comprising a desalting and a multimodal ion exchange step was used for the experimental runs via the chromatographic system ÄKTA™ avant. The multimodal resin Capto™ adhere was investigated as an alternative to conventional ion exchange and hydrophobic interaction resins for the intermediate purification of the potential malaria vaccine D1M1. The ligands, used in multimodal chromatography, interact with the target molecule in different ways. The multimodal functionality includes the binding of proteins in spite of the ionic strength of the loading material. The target protein binds at specific salt conditions and can be eluted by a step gradient decreasing the pH value and reducing the ionic strength. It is possible to achieve a maximized purity and recovery of the product because degradation products and other contaminants do not bind at specific salt concentrations at which the product still binds to the ligands. PMID:25271026

  4. A split-step particle swarm optimization algorithm in river stage forecasting

    NASA Astrophysics Data System (ADS)

    Chau, K. W.

    2007-11-01

    An accurate forecast of river stage is very significant so that there is ample time for the pertinent authority to issue a forewarning of the impending flood and to implement early evacuation measures as required. Since a variety of existing process-based hydrological models involve exogenous input and different assumptions, artificial neural networks have the potential to be a cost-effective solution. In this paper, a split-step particle swarm optimization (PSO) model is developed and applied to train multi-layer perceptrons for forecasting real-time water levels at Fo Tan in Shing Mun River of Hong Kong with different lead times on the basis of the upstream gauging station (Tin Sum) or at Fo Tan. This paradigm is able to combine the advantages of the global search capability of the PSO algorithm in the first step and the local fast convergence of the Levenberg-Marquardt algorithm in the second step. The results demonstrate that it is able to attain a higher accuracy in a much shorter time when compared with the benchmarking backward propagation algorithm as well as the standard PSO algorithm.
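
    A compact sketch of the split-step idea follows: a plain global-best PSO provides a coarse global solution, which is then refined by a Levenberg-Marquardt least-squares step. The toy residual function, swarm settings, and use of scipy's solver are illustrative assumptions; the paper optimizes multi-layer perceptron weights instead.

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(x):                          # toy stand-in for network training residuals
          return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])

      def cost(x):
          return np.sum(residuals(x) ** 2)

      rng = np.random.default_rng(0)
      n_particles, dim = 20, 2
      pos = rng.uniform(-5, 5, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)]

      # Step 1: global search with standard global-best PSO.
      for _ in range(100):
          r1, r2 = rng.random((2, n_particles, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([cost(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[np.argmin(pbest_val)]

      # Step 2: fast local convergence with Levenberg-Marquardt, started from the PSO result.
      refined = least_squares(residuals, gbest, method="lm")
      print(gbest, refined.x)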

  5. Reducing nonlinear waveform distortion in IM/DD systems by optimized receiver filtering

    NASA Astrophysics Data System (ADS)

    Zhou, Y. R.; Watkins, L. R.

    1994-09-01

    Nonlinear waveform distortion caused by the combined effect of fiber chromatic dispersion, self-phase modulation, and amplifier noise limits the attainable performance of high bit-rate, long haul optically repeatered systems. Signal processing in the receiver is investigated and found to be effective in reducing the penalty caused by this distortion. Third order low pass filters, with and without a tapped delay line equalizer are considered. The pole locations or the tap weights are optimized with respect to a minimum bit error rate criterion which accommodates distortion, pattern effects, decision time, threshold setting and noise contributions. The combination of a third order Butterworth filter and a five-tap, fractionally spaced equalizer offers more than 4 dB benefit at 4000 km compared with conventional signal processing designs.

  6. Ultra-Compact Broadband High-Spurious Suppression Bandpass Filter Using Double Split-end Stepped Impedance Resonators

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Ed; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an ultra compact single-layer spurious suppression band pass filter design which has the following benefits: 1) the effective coupling area can be increased with no fabrication limitation and no effect on the spurious response; 2) two fundamental poles are introduced to suppress spurs; 3) the filter can be designed with up to 30% bandwidth; 4) the filter length is reduced by at least 100% when compared to the conventional filter; 5) spurious modes are suppressed up to seven times the fundamental frequency; and 6) it uses only one layer of metallization, which minimizes the fabrication cost.

  7. Mitochondrial Swelling Measurement In Situ by Optimized Spatial Filtering: Astrocyte-Neuron Differences

    PubMed Central

    Gerencser, Akos A.; Doczi, Judit; Töröcsik, Beata; Bossy-Wetzel, Ella; Adam-Vizi, Vera

    2008-01-01

    Mitochondrial swelling is a hallmark of mitochondrial dysfunction, and is an indicator of the opening of the mitochondrial permeability transition pore. We introduce here a novel quantitative in situ single-cell assay of mitochondrial swelling based on standard wide-field or confocal fluorescence microscopy. This morphometric technique quantifies the relative diameter of mitochondria labeled by targeted fluorescent proteins. Fluorescence micrographs are spatial bandpass filtered transmitting either high or low spatial frequencies. Mitochondrial swelling is measured by the fluorescence intensity ratio of the high- to low-frequency filtered copy of the same image. We have termed this fraction the “thinness ratio”. The filters are designed by numeric optimization for sensitivity. We characterized the thinness ratio technique by modeling microscopic image formation and by experimentation in cultured cortical neurons and astrocytes. The frequency domain image processing endows robustness and subresolution sensitivity to the thinness ratio technique, overcoming the limitations of shape measurement approaches. The thinness ratio proved to be highly sensitive to mitochondrial swelling, but insensitive to fission or fusion of mitochondria. We found that in situ astrocytic mitochondria swell upon short-term uncoupling or inhibition of oxidative phosphorylation, whereas such responses are absent in cultured cortical neurons. PMID:18424491
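
    A minimal sketch of the thinness-ratio idea is shown below, using generic difference-of-Gaussians band-pass filters in place of the authors' numerically optimized kernels; the filter scales are arbitrary assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def bandpass(img, s_low, s_high):
          # Difference-of-Gaussians band-pass: passes structure between the two scales.
          return gaussian_filter(img, s_low) - gaussian_filter(img, s_high)

      def thinness_ratio(img):
          hi = bandpass(img, 0.5, 1.5)   # high-spatial-frequency copy (thin mitochondria)
          lo = bandpass(img, 1.5, 4.0)   # low-spatial-frequency copy (swollen mitochondria)
          # Intensity ratio of the two filtered copies; a drop in this ratio indicates swelling.
          return np.sum(np.abs(hi)) / np.sum(np.abs(lo))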

  8. ARTcrystal process for industrial nanocrystal production--optimization of the ART MICCRA pre-milling step.

    PubMed

    Scholz, Patrik; Arntjen, Anja; Müller, Rainer H; Keck, Cornelia M

    2014-04-25

    The ARTcrystal process is a new approach for the production of drug nanocrystals. It is a combination of a special pre-treatment step with subsequent high pressure homogenization (HPH) at low pressures. In the pre-treatment step the particle size is already reduced to the nanometer range by use of the newly developed ART MICCRA rotor-stator system. In this study, the running parameters for the ART MICCRA system are systematically studied, i.e. temperature, stirring speed, flow rate, foaming effects, size of starting material, and valve position from 0° to 45°. The antioxidant rutin was used as model drug. Applying optimized parameters, the pre-milling already yielded a nanosuspension with a photon correlation spectroscopy (PCS) diameter of about 650 nm. On the lab scale, production time was 5 min for 1 L of nanosuspension (5% rutin content), i.e. the capacity of the setup is also suitable for medium industrial scale production. Compared to other nanocrystal production methods (bead milling, HPH, etc.), similar sizes are achievable, but the process is more cost-effective, faster and easily scalable, thus being an interesting novel process for nanocrystal production on lab and industrial scale. PMID:24556175

  9. Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing

    NASA Astrophysics Data System (ADS)

    Cox, Mitchell A.

    2015-10-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several ARM Systems on Chip in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long established x86 architecture. This PU could be used for a variety of high-level algorithms on the high data throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
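
    As a reference for what the Optimal Filtering step computes, one common amplitude-only formulation builds weights from the known pulse shape and the sample noise covariance, then reconstructs the amplitude as a weighted sum of the digitized samples. The pulse shape and covariance below are placeholders, not ATLAS values.

      import numpy as np

      def of_weights(g, C):
          # Weights a = C^-1 g / (g^T C^-1 g): the estimate A = a . s is unbiased and has minimum
          # noise variance for samples s = A * g + noise with covariance C.
          Cinv_g = np.linalg.solve(C, g)
          return Cinv_g / (g @ Cinv_g)

      g = np.array([0.1, 0.6, 1.0, 0.7, 0.3])     # assumed normalized pulse shape at sample times
      C = np.eye(5)                                # assumed white noise covariance
      a = of_weights(g, C)

      samples = 52.0 * g + np.random.default_rng(1).normal(0, 0.5, 5)   # simulated digitized pulse
      print(a @ samples)                           # reconstructed amplitude (energy estimate)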

  10. One step memory of group reputation is optimal to promote cooperation in public goods games

    NASA Astrophysics Data System (ADS)

    Li, Aming; Wu, Te; Cong, Rui; Wang, Long

    2013-08-01

    Individuals' change of social ties has been observed to promote cooperation under specific mechanisms, such as success-driven or expectation-driven migration. However, there is no clear criterion or information from players' instinctive memory or experience for them to consult when they wish to change their social ties. For the first time we define the reputation of a group based on an individual's memory law. A model is proposed, where all players are endowed with the capacity to adjust the interaction ambience involved if the reputation of their environment fails to satisfy their expectations. Simulation results show that cooperation decays as players' memory depth increases and that one-step memory is optimal for promoting cooperation, which provides a potential interpretation for why most species memorize their reciprocators over very short time scales. Intriguingly, cooperation can be greatly improved at an optimal interval of moderate expectation. Moreover, cooperation can be established and stabilized within a wide range of model parameters even when players choose their new partners randomly under the combination of reputation and group switching mechanisms. Our work validates the fact that individuals' short memory or experience within a multi-player group acts as an effective ingredient to boost cooperation.

  11. Analysis of the rate-limiting step of an anaerobic biotrickling filter

    E-print Network

    ... of gaseous pollutants in biotrickling filters involves a series of complex physico-chemical and biological processes. ... The principle of biotrickling filters is simple: pollutant-degrading microorganisms are attached to an inert packing material or support and convert pollutants to benign products. An aqueous phase is continuously or intermittently ...

  12. Optimal hydrograph separation filter to evaluate transport routines of hydrological models

    NASA Astrophysics Data System (ADS)

    Rimmer, Alon; Hartmann, Andreas

    2014-06-01

    Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results by the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to benchmark the performance of process-based hydro-geochemical (HG) models. The new HG routine can be used to quantify the degree of knowledge that the stream flow time series itself contributes to the HG analysis, using a newly developed benchmark geochemistry efficiency (BGE). Results of the OHS show that the two HS fractions (“rapid” and “slow”) differ according to the HG substances which were selected. The BFImax parameter (long-term ratio of baseflow to total streamflow) ranged from 0.26 to 0.94 for SO4-2 and total suspended solids, TSS, respectively. Then, predictions of SO4-2 transport from a process-based hydrological model were benchmarked with the proposed HG routine, in order to evaluate the significance of the HG routines in the process-based model. This comparison provides a valuable quality test that would not be obvious when using traditional measures like r2 or the NSE (Nash-Sutcliffe efficiency). The process-based model resulted in r2 = 0.65 and NSE = 0.65, while the benchmark routine results were slightly lower with r2 = 0.61 and NSE = 0.58. However, the comparison between the two models resulted in an obvious advantage for the process-based model with BGE = 0.15.
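
    The abstract does not spell out the filter equation; a widely used member of this recursive digital filter family is the two-parameter Eckhardt filter, sketched below with the filter parameter alpha and BFImax as the quantities that an OHS-style calibration against stream hydrochemistry would adjust. The discharge series and parameter values are illustrative only.

      import numpy as np

      def baseflow_filter(q, alpha=0.98, bfi_max=0.80):
          # Recursive digital baseflow filter (Eckhardt-type two-parameter form).
          b = np.zeros_like(q, dtype=float)
          b[0] = bfi_max * q[0]
          for t in range(1, len(q)):
              b[t] = ((1 - bfi_max) * alpha * b[t - 1]
                      + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
              b[t] = min(b[t], q[t])     # the slow component cannot exceed total streamflow
          return b

      q = np.array([2.0, 2.1, 5.0, 9.0, 6.0, 4.0, 3.0, 2.5])   # illustrative daily discharge
      slow = baseflow_filter(q)            # "slow" component (interflow + groundwater)
      rapid = q - slow                     # "rapid" component (surface runoff)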

  13. Real-time defect detection of steel wire rods using wavelet filters optimized by univariate dynamic encoding algorithm for searches.

    PubMed

    Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo

    2012-05-01

    We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and the flexibility of wavelet design, we propose the use of the undecimated discrete wavelet transform, and separate design of column and row wavelet filters but with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches the minimum value of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939

  14. Geometric optimization of a step bearing for a hydrodynamically levitated centrifugal blood pump for the reduction of hemolysis.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2013-09-01

    A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis. PMID:23834855

  15. Optimization of hydrolysis and volatile fatty acids production from sugarcane filter cake: Effects of urea supplementation and sodium hydroxide pretreatment.

    PubMed

    Janke, Leandro; Leite, Athaydes; Batista, Karla; Weinrich, Sören; Sträuber, Heike; Nikolausz, Marcell; Nelles, Michael; Stinner, Walter

    2016-01-01

    Different methods for optimizing the anaerobic digestion (AD) of sugarcane filter cake (FC), with a special focus on volatile fatty acids (VFA) production, were studied. Sodium hydroxide (NaOH) pretreatment at different concentrations was investigated in batch experiments and the cumulative methane yields were fitted to a dual-pool two-step model to provide an initial assessment of AD. The effects of nitrogen supplementation in the form of urea and NaOH pretreatment for improved VFA production were evaluated in a semi-continuously operated reactor as well. The results indicated that higher NaOH concentrations during pretreatment accelerated the AD process and increased methane production in batch experiments. Nitrogen supplementation resulted in a VFA loss due to methane formation by buffering the pH value at nearly neutral conditions (~6.7). However, the alkaline pretreatment with 6 g NaOH/100 g FCFM improved both the COD solubilization and the VFA yield by 37%, mainly consisting of n-butyric and acetic acids. PMID:26278994

  16. Optimal real-time Q-ball imaging using regularized Kalman filtering with incremental orientation sets.

    PubMed

    Deriche, Rachid; Calder, Jeff; Descoteaux, Maxime

    2009-08-01

    Diffusion MRI has become an established research tool for the investigation of tissue structure and orientation. Since its inception, Diffusion MRI has expanded considerably to include a number of variations such as diffusion tensor imaging (DTI), diffusion spectrum imaging (DSI) and Q-ball imaging (QBI). The acquisition and analysis of such data is very challenging due to its complexity. Recently, an exciting new Kalman filtering framework has been proposed for DTI and QBI reconstructions in real-time during the repetition time (TR) of the acquisition sequence. In this article, we first revisit and thoroughly analyze this approach and show it is actually sub-optimal and not recursively minimizing the intended criterion due to the Laplace-Beltrami regularization term. Then, we propose a new approach that implements the QBI reconstruction algorithm in real-time using a fast and robust Laplace-Beltrami regularization without sacrificing the optimality of the Kalman filter. We demonstrate that our method solves the correct minimization problem at each iteration and recursively provides the optimal QBI solution. We validate with real QBI data that our proposed real-time method is equivalent in terms of QBI estimation accuracy to the standard offline processing techniques and outperforms the existing solution. Last, we propose a fast algorithm to recursively compute gradient orientation sets whose partial subsets are almost uniform and show that it can also be applied to the problem of efficiently ordering an existing point-set of any size. This work enables a clinician to start an acquisition with just the minimum number of gradient directions and an initial estimate of the orientation distribution functions (ODF) and then the next gradient directions and ODF estimates can be recursively and optimally determined, allowing the acquisition to be stopped as soon as desired or at any iteration with the optimal ODF estimates. This opens new and interesting opportunities for real-time feedback for clinicians during an acquisition and also for researchers investigating into optimal diffusion orientation sets and real-time fiber tracking and connectivity mapping. PMID:19586794
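
    Because the reconstruction is linear in the spherical-harmonic coefficients, each new diffusion measurement can be absorbed with a scalar Kalman (recursive least-squares) update. The generic sketch below illustrates that recursion; the regularization matrix, design vectors, and noise level are placeholders rather than the authors' exact operators.

      import numpy as np

      def kalman_init(n_coef, L, lam=0.006):
          # Prior covariance encoding the regularization (here simply (I + lam * L)^-1).
          return np.zeros(n_coef), np.linalg.inv(np.eye(n_coef) + lam * L)

      def kalman_update(c, P, h, y, sigma2=1.0):
          # Absorb one measurement y = h . c + noise with variance sigma2.
          S = h @ P @ h + sigma2           # innovation variance (scalar)
          K = (P @ h) / S                  # Kalman gain
          c = c + K * (y - h @ c)          # updated coefficient estimate
          P = P - np.outer(K, h @ P)       # updated covariance
          return c, P

      # Usage: initialize from the regularized prior, then call kalman_update once per newly
      # acquired gradient direction; the running c is the current ODF coefficient estimate.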

  17. Rod-filter-field optimization of the J-PARC RF-driven H- ion source

    NASA Astrophysics Data System (ADS)

    Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-01

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter-field (RFF) is indispensable and is one of the parameters that most strongly governs beam performance for the RF-driven H- ion source with the internal antenna, the procedure to optimize it is not established. In order to optimize the RFF and establish the procedure, the beam performances of the J-PARC source with various types of rod-filter-magnets (RFMs) were measured. By changing the RFM's gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM's cross-section (magnetomotive force) was indispensable for easy operation with the temperature (TPE) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. The 5% reduction of the RFM's cross-section decreased the time-constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for TPE around 60°C.

  18. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  19. Reliably Detecting Clinically Important Variants Requires Both Combined Variant Calls and Optimized Filtering Strategies

    PubMed Central

    Field, Matthew A.; Cho, Vicky

    2015-01-01

    A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable, however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high quality ‘genome in a bottle’ reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, variant caller filter optimization, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
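
    The combination step itself is plain set algebra over the per-tool call sets: intersecting favors specificity, taking the union favors sensitivity. A minimal sketch with made-up variant keys:

      # Variants keyed by (chromosome, position, ref, alt); all values below are made up.
      calls_a = {("chr1", 1000, "A", "T"), ("chr2", 5000, "G", "C")}
      calls_b = {("chr1", 1000, "A", "T")}
      calls_c = {("chr1", 1000, "A", "T"), ("chr3", 750, "C", "G")}

      call_sets = [calls_a, calls_b, calls_c]
      high_specificity = set.intersection(*call_sets)   # kept only if every caller agrees
      high_sensitivity = set.union(*call_sets)          # kept if any caller reports it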

  20. An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning

    2015-08-01

    An optimized strain demodulation method based on a piezoelectric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated, and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the test values after compensation and the theoretical values is less than 1 pm/με.

  1. Optimal design of bandpass filters to reduce emission from photovoltaic cells under monochromatic illumination

    NASA Astrophysics Data System (ADS)

    Takeda, Yasuhiko; Iizuka, Hideo; Ito, Tadashi; Mizuno, Shintaro; Hasegawa, Kazuo; Ichikawa, Tadashi; Ito, Hiroshi; Kajino, Tsutomu; Higuchi, Kazuo; Ichiki, Akihisa; Motohiro, Tomoyoshi

    2015-08-01

    We have theoretically investigated photovoltaic cells used under the illumination condition of monochromatic light incident from a particular direction, which is very different from that for solar cells under natural sunlight, using detailed balance modeling. A multilayer bandpass filter formed on the surface of the cell has been found to trap the light generated by radiative recombination inside the cell, reduce emission from the cell, and consequently improve conversion efficiency. The light trapping mechanism is interpreted in terms of a one-dimensional photonic crystal, and the design guide to optimize the multilayer structure has been clarified. For obliquely incident illumination, as well as normal incidence, a significant light trapping effect has been achieved, although the emission patterns are extremely different from each other depending on the incident directions.

  2. Optimization of a Multi-Step Procedure for Isolation of Chicken Bone Collagen

    PubMed Central

    2015-01-01

    Chicken bone is not adequately utilized despite its high nutritional value and protein content. Although not a common raw material, chicken bone can be used in many different ways besides the manufacture of collagen products. In this study, a multi-step procedure was optimized to isolate chicken bone collagen with higher yield and quality for the manufacture of collagen products. The chemical composition of chicken bone was 2.9% nitrogen, corresponding to about 15.6% protein, 9.5% fat, 14.7% mineral and 57.5% moisture. The aim was to minimize protein loss while separating the highest possible amount of visible impurities, non-collagen proteins, minerals and fats. Treatments under optimum conditions removed 57.1% of fats and 87.5% of minerals with respect to their initial concentrations. Meanwhile, 18.6% of protein and 14.9% of hydroxyproline were lost, suggesting that a selective separation of non-collagen components and isolation of collagen were achieved. A significant part of the impurities was selectively removed and over 80% of the original collagen was preserved during the treatments.

  3. Continuous-Time Filter Design Optimized for Reduced Die Area

    E-print Network

    Moon, Un-Ku

    ... for distributing capacitor and resistor area to optimally reduce die area in a given continuous-time filter design. (IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 51, no. 3, March 2004.)

  4. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.

  5. Optimization of synthesis and peptization steps to obtain iron oxide nanoparticles with high energy dissipation rates

    NASA Astrophysics Data System (ADS)

    Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos

    2015-11-01

    Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, preferable due to their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP=2.3 nH m2 kg-1) were obtained, which represent one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP=1.6 nH m2 kg-1) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.
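
    The intrinsic loss power (ILP) values quoted above follow from the specific absorption rate, field amplitude, and frequency via ILP = SAR / (f H^2); a quick arithmetic check of the reported numbers (unit conversions are handled inside the function):

      def ilp(sar_w_per_g_fe, f_hz, h_a_per_m):
          # Returns ILP in nH m^2 / kg for SAR in W/gFe, frequency in Hz and field amplitude in A/m.
          return (sar_w_per_g_fe * 1e3) / (f_hz * h_a_per_m ** 2) * 1e9

      print(ilp(1048, 341e3, 36.5e3))   # ~2.3, matching the reported 2.3 nH m^2 kg^-1
      print(ilp(719, 341e3, 36.5e3))    # ~1.6, matching the reported 1.6 nH m^2 kg^-1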

  6. Optimization by decomposition: A step from hierarchic to non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

  7. Optimization of leaf margins for lung stereotactic body radiotherapy using a flattening filter-free beam

    SciTech Connect

    Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi

    2015-05-15

    Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (-3 to 3 mm). Forty grays in four fractions were prescribed as the PTV D95. For the PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased (“V” shape). The optimal leaf margins for conformity index and modified GI were -1.1 ± 0.3 mm (mean ± 1 SD) and -0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to -1.0 ± 0.4 and -0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were -0.9 ± 0.6, -1.1 ± 0.8, and -2.1 ± 1.2 mm, respectively, for 7 MV FFF compared to -0.9 ± 0.6, -1.1 ± 0.8, and -2.2 ± 1.3 mm, respectively, for 6 MV FF. With the heart inside the radiation field, the mean heart dose showed a V-shaped relationship with leaf margins. The optimal leaf margins were -1.0 ± 0.6 mm for both beams. Dmax to the spinal cord showed no clear trend for changes in leaf margin. Conclusions: The differences in doses to OARs between FFF and FF beams were negligible. Conformity index, modified GI, MLD, lung V20 Gy, lung V5 Gy, and mean heart dose showed a V-shaped relationship with leaf margins. There were no significant differences in optimal leaf margins to minimize these parameters between both FFF and FF beams. The authors’ results suggest that a leaf margin of -1 mm achieves high conformity and minimizes doses to OARs for both FFF and FF beams.

  8. The Touro 12-Step: A Systematic Guide to Optimizing Survey Research with Online Discussion Boards

    PubMed Central

    Ip, Eric J; Tenerowicz, Michael J; Perry, Paul J

    2010-01-01

    The Internet, in particular discussion boards, can provide a unique opportunity for recruiting participants in online research surveys. Despite its outreach potential, there are significant barriers which can limit its success. Trust, participation, and visibility issues can all hinder the recruitment process; the Touro 12-Step was developed to address these potential hurdles. By following this step-by-step approach, researchers will be able to minimize these pitfalls and maximize their recruitment potential via online discussion boards. PMID:20507843

  9. Improvement of hemocompatibility for hydrodynamic levitation centrifugal pump by optimizing step bearings.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-01-01

    We have developed a hydrodynamic levitation centrifugal blood pump with a semi-open impeller for mechanical circulatory assist. The impeller is levitated by original hydrodynamic bearings without any complicated control or sensors. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of this study is to investigate the geometric configuration of the hydrodynamic step bearing that minimizes hemolysis by expansion of the bearing gap. Firstly, we performed a numerical analysis of the step bearing based on the Reynolds equation, and measured the actual hydrodynamic force of the step bearing. Secondly, a bearing gap measurement test and a hemolysis test were performed on blood pumps whose step lengths were 0%, 33% and 67% of the vane length, respectively. As a result, in the numerical analysis, the hydrodynamic force was largest when the step length was around 70%. In the actual evaluation tests, the blood pump with a 67% step length achieved the maximum bearing gap and reduced hemolysis compared with those having step lengths of 0% and 33%. We confirmed that the numerical analysis of the step bearing worked effectively, and that the blood pump with a 67% step length was a suitable configuration for minimizing hemolysis, because it realized the largest bearing gap. PMID:22254562

  10. Optimization by decomposition: A step from hierarchic to non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

  11. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90 % of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs, by avoiding redundancy in data acquisition. PMID:26681183

  12. Optimal Nonnegative Color Scanning Filters

    E-print Network

    Sharma, Gaurav

    In this correspondence, the problem of designing color scanning filters for multi-illuminants can be posed as a color-filter design problem. Reference [1] described a method of computing transmittances of filters that minimized the minimum-mean-squared tristimulus error. The design of color ...

  13. Matched filter optimization of kSZ measurements with a reconstructed cosmological flow field

    NASA Astrophysics Data System (ADS)

    Li, Ming; Angulo, R. E.; White, S. D. M.; Jasche, J.

    2014-09-01

    We develop and test a new statistical method to measure the kinematic Sunyaev-Zel'dovich (kSZ) effect. A sample of independently detected clusters is combined with the cosmic flow field predicted from a galaxy redshift survey in order to derive a matched filter that optimally weights the kSZ signal for the sample as a whole given the noise involved in the problem. We apply this formalism to realistic mock microwave skies based on cosmological N-body simulations, and demonstrate its robustness and performance. In particular, we carefully assess the various sources of uncertainty, cosmic microwave background primary fluctuations, instrumental noise, uncertainties in the determination of the velocity field, and effects introduced by miscentring of clusters and by uncertainties of the mass-observable relation (normalization and scatter). We show that available data (Planck maps and the MaxBCG catalogue) should deliver a 7.7σ detection of the kSZ. A similar cluster catalogue with broader sky coverage should increase the detection significance to ~13σ. We point out that such measurements could be binned in order to study the properties of the cosmic gas and velocity fields, or combined into a single measurement to constrain cosmological parameters or deviations of the law of gravity from General Relativity.
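
    In its simplest single-template form, a matched filter weights the data by the expected kSZ template and the inverse noise covariance, giving the minimum-variance unbiased amplitude estimate. The sketch below states that estimator; the data vector, template, and covariance are hypothetical stand-ins for the map values, velocity-field prediction, and CMB-plus-instrument noise.

      import numpy as np

      def matched_filter_amplitude(d, t, N):
          # A_hat = (t^T N^-1 d) / (t^T N^-1 t), with variance 1 / (t^T N^-1 t).
          Ninv_t = np.linalg.solve(N, t)
          norm = t @ Ninv_t
          return (d @ Ninv_t) / norm, 1.0 / norm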

  14. Optimal ensemble size of ensemble Kalman filter in sequential soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Yin, Jifu; Zhan, Xiwu; Zheng, Youfei; Hain, Christopher R.; Liu, Jicheng; Fang, Li

    2015-08-01

    The ensemble Kalman filter (EnKF) has been extensively applied in sequential soil moisture data assimilation to improve the land surface model performance and in turn weather forecast capability. Usually, the ensemble size of EnKF is determined with limited sensitivity experiments. Thus, the optimal ensemble size may have never been reached. In this work, based on a series of mathematical derivations, we demonstrate that the maximum efficiency of the EnKF for assimilating observations into the models could be reached when the ensemble size is set to 12. Simulation experiments are designed in this study under ensemble size cases 2, 5, 12, 30, 50, 100, and 300 to support the mathematical derivations. All the simulations are conducted from 1 June to 30 September 2012 over southeast USA (from -90°W, 30°N to -80°W, 40°N) at 25 km resolution. We found that the simulations are perfectly consistent with the mathematical derivation. This optimal ensemble size may have theoretical implications on the implementation of EnKF in other sequential data assimilation problems.
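
    For reference, the stochastic EnKF analysis step that such ensemble-size experiments exercise can be written in a few lines; the one-dimensional soil-moisture state, observation operator, and error levels below are arbitrary illustrations, not the study's configuration.

      import numpy as np

      def enkf_analysis(X, y, H, R, rng):
          # X: (n_state, n_ens) forecast ensemble; y: observations; H: obs operator; R: obs error cov.
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)               # state anomalies
          HX = H @ X
          HA = HX - HX.mean(axis=1, keepdims=True)            # predicted-observation anomalies
          K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)   # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
          return X + K @ (Y - HX)

      rng = np.random.default_rng(0)
      X = rng.normal(0.30, 0.05, (1, 12))           # 12-member ensemble of one soil-moisture state
      Xa = enkf_analysis(X, np.array([0.25]), np.array([[1.0]]), np.array([[0.02 ** 2]]), rng)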

  15. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
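
    The core linear-algebra step can be sketched as follows: take the singular value decomposition of the matrix describing how the many health parameters influence the outputs, and keep only as many directions as the Kalman filter can estimate. The influence matrix and dimensions below are random stand-ins, not engine data.

      import numpy as np

      rng = np.random.default_rng(0)
      n_outputs, n_health, n_tuners = 6, 10, 4        # more health parameters than sensed outputs

      G = rng.normal(size=(n_outputs, n_health))      # stand-in health-parameter influence matrix

      U, s, Vt = np.linalg.svd(G, full_matrices=False)
      V_star = Vt[:n_tuners, :]                       # low-dimensional tuning directions

      # A Kalman filter can estimate q = V_star @ h (dimension n_tuners) instead of the full,
      # unobservable health vector h; the low-rank reconstruction G @ V_star.T @ q approximates
      # the effect of h on the outputs in a least-squares sense.
      h_true = rng.normal(size=n_health)
      q = V_star @ h_true
      print(np.linalg.norm(G @ h_true - G @ V_star.T @ q))   # residual of the approximation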

  16. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  18. Stochastic global optimization as a filtering problem (arXiv:0912.4072v1 [math.NA], 21 Dec 2009)

    E-print Network

    Del Moral, Pierre

    This e-print presents a reformulation of stochastic global optimization as a filtering problem. The motivation is that many stochastic optimization algorithms behave like stochastic maps, so that naive global optimization amounts to evolving a collection of samples under such maps.

  19. Optimizing the anode-filter combination in the sense of image quality and average glandular dose in digital mammography

    NASA Astrophysics Data System (ADS)

    Varjonen, Mari; Strömmer, Pekka

    2008-03-01

    This paper presents the optimized image quality and average glandular dose in digital mammography, and provides recommendations concerning anode-filter combinations for digital mammography based on amorphous selenium (a-Se) detector technology. The full-field digital mammography (FFDM) system based on a-Se technology, which also serves as the platform of a tomosynthesis prototype, was used in this study. The x-ray tube anode-filter combinations studied were tungsten (W)-rhodium (Rh) and tungsten (W)-silver (Ag). Anatomically adaptable fully automatic exposure control (AAEC) was used. The average glandular doses (AGD) were calculated using a specific program developed by Planmed, which automates the method described by Dance et al. Image quality was evaluated in two different ways: a subjective image quality evaluation, and contrast and noise analysis. Using W-Rh and W-Ag anode-filter combinations, a significantly lower average glandular dose can be achieved compared with molybdenum (Mo)-molybdenum (Mo) or Mo-Rh; the average glandular dose reduction achieved ranged from 25% to 60%. Future evaluation will concentrate on studying more filter combinations and the effect of higher kV (>35 kV) values, which seem to be useful when optimizing the dose in digital mammography.

  20. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity--in this case, mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components - acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  1. Particle Swarm Optimization and Varying Chemotactic Step-Size Bacterial Foraging Optimization Algorithms Based Dynamic Economic Dispatch with Non-smooth Fuel Cost Functions

    NASA Astrophysics Data System (ADS)

    Praveena, P.; Vaisakh, K.; Rama Mohana Rao, S.

    The dynamic economic dispatch (DED) problem is an optimization problem with the objective of determining the optimal combination of power outputs for all generating units over a certain period of time in order to minimize the total fuel cost while satisfying dynamic operational constraints and the load demand in each interval. Recently, the social foraging behavior of Escherichia coli bacteria has been explored to develop a novel algorithm for distributed optimization and control. The Bacterial Foraging Optimization Algorithm (BFOA) is currently gaining popularity in the research community for its effectiveness in solving certain difficult real-world optimization problems. This article presents a hybrid approach involving Particle Swarm Optimization (PSO) and BFO algorithms with varying chemotactic step size for solving the DED problem of generating units considering valve-point effects. The proposed hybrid algorithm has been extensively compared with methods reported in the literature. The new method is shown to be statistically significantly better on two test systems consisting of five and ten generating units.

  2. Optimal Scaling of Filtered GRACE dS/dt Anomalies over Sacramento and San Joaquin River Basins, California

    NASA Astrophysics Data System (ADS)

    Ukasha, M.; Ramirez, J. A.

    2014-12-01

    Signals from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission, which maps the time-variable Earth gravity field, are degraded by measurement and leakage errors. Dampening these errors with different filters also modifies the true geophysical signals, so a scale factor is suggested to recover the modified signals. For basin-averaged dS/dt anomalies computed from data available at the University of Colorado GRACE data analysis website (http://geoid.colorado.edu/grace/), optimal time-invariant and time-variant scale factors for the Sacramento and San Joaquin river basins, California, are derived using observed precipitation (P), runoff (Q) and evapotranspiration (ET). Applying the derived optimal scale factor to GRACE data filtered with a 300 km-wide Gaussian filter yields scaled GRACE dS/dt anomalies that match the observed dS/dt anomalies (P-ET-Q) better than the GRACE dS/dt anomalies computed from the scaled GRACE product at the University of Colorado GRACE data analysis website. This paper will present the procedure, the optimal values, and the statistical analysis of the results.
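
    As a rough illustration of how a single time-invariant scale factor can be derived (a sketch with hypothetical numbers and variable names, not the authors' processing chain), the factor minimizing the squared misfit between filtered GRACE dS/dt and the water-balance estimate P-ET-Q has a closed-form least-squares solution:

      import numpy as np

      def optimal_scale_factor(filtered, observed):
          """Least-squares scale factor k minimizing ||observed - k * filtered||^2."""
          filtered = np.asarray(filtered, dtype=float)
          observed = np.asarray(observed, dtype=float)
          return np.dot(filtered, observed) / np.dot(filtered, filtered)

      # Hypothetical monthly dS/dt series (water-equivalent units): P-ET-Q vs. filtered GRACE.
      observed = np.array([1.2, 0.8, -0.5, -1.1, -0.3, 0.9])
      filtered = np.array([0.7, 0.5, -0.3, -0.6, -0.2, 0.5])
      print("optimal scale factor:", optimal_scale_factor(filtered, observed))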

  3. Optimization of a Prism-Mirror Imaging Energy Filter for High-Resolution Microanalysis in Electron Microscopy.

    NASA Astrophysics Data System (ADS)

    Jiang, Xun-Gao

    1995-01-01

    The energy resolution of a prism-mirror-prism (PMP) imaging energy filter, used for electron energy loss microanalysis, is limited by the aperture aberrations of its magnetic prism. The aberrations can be minimized by appropriately curving the pole-faces of the prism. In this thesis a computer-aided design procedure is described for optimizing the curvatures. The procedure accurately takes into account the influence of fringing fields on the optical properties of the prism and allows a realistic performance evaluation. An optimized PMP filter with an improved resolution has been developed in this way. For example, at an incident electron energy of 80 keV and an acceptance half-angle of 10 mradian, the filter has a resolution of 1.3 eV, a factor of 18 better than that of an equivalent system with a straight-face prism. The validity of the filter design depends on the correct determination of fringing magnetic fields. To verify the theoretical field calculations, an oscillating-loop magnetometer has been built. The device has a linear spatial resolution of 0.1 mm, and is well suited for measuring rapidly decreasing fringing fields. The measured fringing field distribution is in good agreement with the theoretical calculations within a maximum discrepancy of +/- 1% B_0, with B_0 being the uniform flux density inside the prism. The new PMP filter has been constructed and installed on a Siemens EM-102 microscope in our laboratory. Under the experimental conditions of an operating voltage of 60 kV and an acceptance half-angle of 8.5 mradian, the resolution of the filter is 0.5 eV, defined as the measured full-width-at-half-maximum of the intensity distribution of the aberration figure on the energy selecting plane. The much improved energy resolution of the optimized PMP imaging filter has made it possible to explore an exciting area of electron energy loss microanalysis, the detection and localization of molecular compounds by their characteristic excitations. A preliminary study, using embedded hematin (a chromophore) crystals as test specimens, has clearly demonstrated the feasibility of this technique in the presence of beam-induced radiation damage.

  4. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, among the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the Dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. It consists of finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All the optimal policies consist of applying insecticides just at the beginning of the season and then keeping the mosquitoes at an acceptable level by spreading a small amount of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in achieving the same effect as variations in the cost parameters. Using the proposed methodology, it is thus possible to find, in a single run, for a given decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the tradeoff between using more insecticide with fewer transmission mosquitoes or more sterile males with more transmission mosquitoes. PMID:26362231

  5. Characterization and optimization of acoustic filter performance by experimental design methodology.

    PubMed

    Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M

    2005-06-20

    Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for up to 200 L/d perfusion cultures. However, the operation of cell retention systems depends on several settings that should be adjusted depending on the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency performance of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time and recirculation ratio were studied using a fractional factorial 2(5-1) design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency was the primary response and it was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operation ranges and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At power input of 5 W the broad range of robust high SE performance (95% or higher) was raised to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model developed to long-term perfusion cultures. PMID:15858795

  6. Toward an Optimal Position for IVC Filters: Computational Modeling of the Impact of Renal Vein Inflow

    SciTech Connect

    Wang, S L; Singer, M A

    2009-07-13

    The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

  7. Numerical experiment optimization to obtain the characteristics of the centrifugal pump steps package

    NASA Astrophysics Data System (ADS)

    Boldyrev, S. V.; Boldyrev, A. V.

    2014-12-01

    A method for the numerical simulation of turbulent flow in the flow passage of a centrifugal pump working stage using periodicity conditions has been formulated. The proposed method allows the characteristic indices of one pump stage to be calculated at a lower computational cost. The calculated pump characteristics have been compared with experimental (pilot) data.

  8. Layout Optimization Method for Magnetic Circuit using Multi-step Utilization of Genetic Algorithm Combined with Design Space Reduction

    NASA Astrophysics Data System (ADS)

    Okamoto, Yoshifumi; Tominaga, Yusuke; Sato, Shuji

    The layout optimization with ON-OFF information of magnetic material in finite elements is one of the most attractive tools in the initial conceptual and practical design of electrical machinery. Heuristic algorithms based on random search allow engineers to define general-purpose objective functions; however, they require many iterations of finite element analysis, and it is difficult to reach a practical solution free of island and void distributions using direct search methods such as simulated annealing (SA) or genetic algorithms (GA). This paper presents a layout optimization method based on GA. The proposed method can arrive at a practical solution by means of multi-step utilization of GA, and the convergence speed is considerably improved by combining it with a design-space reduction process.

  9. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a complex problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible and preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  10. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    SciTech Connect

    Gill, K; Aldoohan, S; Collier, J

    2014-06-01

    Purpose: To study image optimization and radiation dose reduction in a pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom and the radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel and copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVps and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under axial as well as helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filter shapes the energy spectrum, which reduces the dose by 27%, with no noticeable change in low-contrast detectability. Conclusion: The effective dose depends strongly on CTDIvol, which in turn depends strongly on the beam-hardening filter. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in CT pediatric shunt scanning protocols without compromising the diagnostic value of the image quality.

  11. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography

    NASA Astrophysics Data System (ADS)

    Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to the fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filter performance on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebyshev II-type filters with zero phase and stable impulse response; the most commonly used band-pass filters were implemented by combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals were evaluated. The study involved 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter settings may be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz.
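
    A hedged sketch of the kind of filtering described (not the authors' implementation; the order, stop-band attenuation and 1.0-100 Hz band are assumptions): a zero-phase band-pass built from Chebyshev type II high-pass and low-pass sections, each applied forward and backward.

      import numpy as np
      from scipy import signal

      def fmcg_bandpass(x, fs, f_hp=1.0, f_lp=100.0, order=4, rs=40.0):
          """Zero-phase Chebyshev II band-pass: high-pass then low-pass, each run with sosfiltfilt."""
          sos_hp = signal.cheby2(order, rs, f_hp, btype='highpass', fs=fs, output='sos')
          sos_lp = signal.cheby2(order, rs, f_lp, btype='lowpass', fs=fs, output='sos')
          y = signal.sosfiltfilt(sos_hp, x)     # forward-backward pass -> zero phase distortion
          return signal.sosfiltfilt(sos_lp, y)

      # Synthetic test: 1 kHz sampling, a 25 Hz burst plus slow drift and 300 Hz noise.
      fs = 1000.0
      t = np.arange(0.0, 5.0, 1.0 / fs)
      x = np.sin(2 * np.pi * 25 * t) * np.exp(-(t - 2.5) ** 2) + 0.5 * t + 0.2 * np.sin(2 * np.pi * 300 * t)
      y = fmcg_bandpass(x, fs)                  # drift and the 300 Hz component are strongly attenuated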

  12. Optimizing Data Measurements at Test Beds Using Multi-Step Genetic Algorithms

    E-print Network

    Zell, Andreas

    Statistical Design of Experiments (DOE) reduces the set of measuring points needed to operate the engine in an optimal way. Changing parameters such as the relative air mass flow (ramf) results in oscillations of the total engine system; the goal is therefore to minimize these oscillations.

  13. Optimization of 3D laser scanning speed by use of combined variable step

    NASA Astrophysics Data System (ADS)

    Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.

    2014-03-01

    The presented research addresses the slow operation of a 3D TVS caused by a constant small scanning step; the solution is the application of a combined scanning step for the fast search of n obstacles in unknown surroundings. Such a problem is of keynote importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. Hence, the related technical task of scanning with variable speed, with precise digital mapping only for selected spatial sectors, is under consideration. A wide range of simulations in MATLAB 7.12.0 is provided for several variants of hypothetical scenes with a variable number n of obstacles in each scene (including variation of shapes and sizes) and scanning with incremented angle values (0.6° up to 15°). The aim of the simulations was to detect which angular step values still permit obtaining maximal information about obstacles without undesired time losses. Three such local maxima were obtained in the simulations and then refined by application of a neural network formalism (Levenberg-Marquardt algorithm). The obtained results were in turn applied to the design of a MET (Micro-Electro-mechanical Transmission) for practical realization of variable combined-step scanning on an experimental prototype of our previously known laser scanner.

  14. Determining the optimal system-specific cut-off frequencies for filtering in-vitro upper extremity impact force and acceleration data by residual analysis.

    PubMed

    Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M

    2011-10-13

    The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. PMID:21903214
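
    A simplified residual-analysis sketch in the spirit of the paper (assumed helper names and a crude selection rule, not the authors' exact procedure): filter the signal with a dual-pass 4th-order Butterworth filter over a range of cut-offs, compute the RMS residual between raw and filtered signals, and pick the cut-off where the residual first drops to the level extrapolated from the noise-dominated end of the curve.

      import numpy as np
      from scipy import signal

      def residual_analysis(x, fs, cutoffs, order=4):
          """RMS residual between the raw and dual-pass Butterworth-filtered signal for each cut-off."""
          residuals = []
          for fc in cutoffs:
              sos = signal.butter(order, fc, btype='lowpass', fs=fs, output='sos')
              xf = signal.sosfiltfilt(sos, x)           # dual (forward-backward) pass
              residuals.append(np.sqrt(np.mean((x - xf) ** 2)))
          return np.asarray(residuals)

      def pick_cutoff(cutoffs, residuals, tail_fraction=0.25):
          """Crude selection: extrapolate the flat, noise-dominated tail back to 0 Hz and
          choose the lowest cut-off whose residual falls below that intercept."""
          n_tail = max(2, int(len(cutoffs) * tail_fraction))
          slope, intercept = np.polyfit(cutoffs[-n_tail:], residuals[-n_tail:], 1)
          return cutoffs[np.argmax(residuals <= intercept)]

      # Hypothetical impact signal at 20 kHz: decaying 400 Hz transient plus broadband noise.
      fs = 20000.0
      t = np.arange(0.0, 0.05, 1.0 / fs)
      x = np.exp(-60 * t) * np.sin(2 * np.pi * 400 * t) + 0.02 * np.random.default_rng(1).normal(size=t.size)
      cutoffs = np.linspace(60.0, 1500.0, 15)
      print("selected cut-off (Hz):", pick_cutoff(cutoffs, residual_analysis(x, fs, cutoffs)))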

  15. Optimization of a femtosecond Ti:sapphire amplifier using an acousto-optic programmable dispersive filter and a genetic algorithm.

    SciTech Connect

    Korovyanko, O. J.; Rey-de-Castro, R.; Elles, C. G.; Crowell, R. A.; Li, Y.

    2006-01-01

    The temporal output of a Ti:Sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that a severe mismatch between the compressor and stretcher can be compensated for in a short period of time.

  16. Metrics For Comparing Plasma Mass Filters

    SciTech Connect

    Abraham J. Fetterman and Nathaniel J. Fisch

    2012-08-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  17. A comparison of reanalysis techniques: applying optimal interpolation and Ensemble Kalman Filtering to improve air quality monitoring at mesoscale.

    PubMed

    Candiani, Gabriele; Carnevale, Claudio; Finzi, Giovanna; Pisoni, Enrico; Volta, Marialuisa

    2013-08-01

    To fulfill the requirements of the 2008/50 Directive, which allows member states and regional authorities to use a combination of measurement and modeling to monitor air pollution concentrations, a key approach to be properly developed and tested is data assimilation. In this paper, with a focus on regional domains, a comparison between optimal interpolation and the Ensemble Kalman Filter is shown, to highlight the pros and drawbacks of the two techniques. These approaches can be used to implement a more accurate monitoring of long-term pollution trends over a geographical domain, through an optimal combination of all the available sources of data. The two approaches are formalized and applied to a regional domain located in Northern Italy, where measured PM10 levels are often higher than EU standard limits. PMID:23639906
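
    For orientation, a minimal stochastic-EnKF analysis step is sketched below (generic textbook form with assumed array shapes and a toy PM10 example; it does not reproduce the chemical-transport setup of the paper): each ensemble member is nudged toward perturbed observations with a gain built from the ensemble covariance.

      import numpy as np

      def enkf_analysis(X, y, H, R, rng):
          """Stochastic EnKF update.
          X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
          H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) observation-error covariance."""
          n_obs, n_ens = y.size, X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
          P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
          return X + K @ (Y - H @ X)                     # analysis ensemble

      # Toy example: three grid cells of PM10, one monitoring station observing cell 0.
      rng = np.random.default_rng(0)
      X = rng.normal(30.0, 5.0, size=(3, 20))            # 20-member forecast ensemble
      H = np.array([[1.0, 0.0, 0.0]])
      R = np.array([[4.0]])
      Xa = enkf_analysis(X, np.array([38.0]), H, R, rng)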

  18. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
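
    For reference, a compact CPU sketch of the 3D bilateral filter being ported is given below (a naive NumPy version with assumed parameter names; the report's GPU kernels, memory layouts and thread-block tuning are not reproduced here).

      import numpy as np

      def bilateral_filter_3d(vol, radius=2, sigma_s=1.5, sigma_r=30.0):
          """Naive 3D bilateral filter: edge-preserving smoothing, CPU reference only."""
          vol = np.asarray(vol, dtype=np.float64)
          pad = np.pad(vol, radius, mode='edge')
          out = np.zeros_like(vol)
          norm = np.zeros_like(vol)
          # Precompute spatial Gaussian weights over the (2r+1)^3 neighborhood.
          ax = np.arange(-radius, radius + 1)
          dz, dy, dx = np.meshgrid(ax, ax, ax, indexing='ij')
          w_s = np.exp(-(dz ** 2 + dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))
          nz, ny, nx = vol.shape
          for k in range(2 * radius + 1):
              for j in range(2 * radius + 1):
                  for i in range(2 * radius + 1):
                      shifted = pad[k:k + nz, j:j + ny, i:i + nx]
                      # Range weight suppresses contributions across strong intensity edges.
                      w = w_s[k, j, i] * np.exp(-(shifted - vol) ** 2 / (2 * sigma_r ** 2))
                      out += w * shifted
                      norm += w
          return out / norm

      smoothed = bilateral_filter_3d(np.random.default_rng(0).normal(100.0, 10.0, size=(32, 32, 32)))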

  19. Influence of simulation time-step (temporal-scale) on optimal parameter estimation and runoff prediction performance in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2015-04-01

    Nowadays, most hydrological catchment models are designed to allow their use for streamflow simulation at different time-scales. While this permits models to be applied for broader purposes, it can also be a source of error in the simulation of hydrological processes at catchment scale. Those errors seem not to affect simple conceptual models significantly, but this flexibility may lead to large behavioral errors in physically based models. Equations used in processes such as soil moisture time-variation are usually representative at certain time-scales, but they may not properly characterize water transfer in soil layers at larger scales. This effect is especially relevant as we move from a detailed hourly scale to a daily time-step, both common time-scales for catchment streamflow simulation in research and management practice. This study aims to provide an objective methodology to identify the degree of similarity of optimal parameter values when hydrological catchment model calibration is performed at different time-scales, thus providing information for an informed discussion of the physical significance of parameters in hydrological models. In this research, we analyze the influence of simulation time-scale on: 1) the optimal values of six highly sensitive parameters of the TOPLATS model and 2) streamflow simulation efficiency, when optimization is carried out at different time-scales. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) has been applied in its lumped version to three catchments of varying size located in northern Spain. The model is based on shallow groundwater gradients (related to local topography) that set up spatial patterns of soil moisture and are assumed to control infiltration and runoff during storm events and evaporation and drainage between storm events. The model calculates the saturated portion of the catchment at each time step based on Topographic Index (TI) intervals. Surface runoff is then calculated during rainfall events in proportion to the saturation degree of the catchment. Separately, baseflow is calculated based on the distance between the catchment-average water table depth and the specific depth at each TI interval. This study focuses on the comparison of hourly and daily simulations for the 2000-2007 period. An optimization algorithm has been applied to identify the optimal values of the following four soil properties: 1) the Brooks-Corey pore size distribution index (λ), 2) the bubbling pressure (ψc), 3) the saturated soil moisture (θs), and 4) the surface saturated hydraulic conductivity (Ks), and of two subsurface-flow controlling parameters: 1) the subsurface flow at complete saturation (Q0) and 2) the exponential coefficient of the TOPMODEL baseflow equation (f). The algorithm was set up to maximize the Nash-Sutcliffe Efficiency (NSE) at the catchment outlet. Results presented include the optimal values of each parameter at both hourly and daily time-scales. These values provide valuable information to discuss the relative importance of each soil-related model parameter for enhanced streamflow simulation and adequate model response in both surface runoff and baseflow simulation. Catchment baseflow magnitude (Q0) and decay behavior (f) are also shown to require detailed analysis depending on the selected hydrological modeling purpose and the corresponding time-step. The results showed that simulations at different time-scales may require different parameter values for soil properties and catchment behavior characterization in order to properly simulate streamflow at catchment scale. Although the calibrated parameters were soil properties and water flow quantities with physical meaning and defined units, optimum values differed with time-scale and were not always similar to field observations.

  20. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratio in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR state are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with length of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the numbers of multipliers need to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem to statically schedule the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
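
    To make the polyphase baseline concrete (a generic NumPy sketch with arbitrary taps and decimation factor, not the FPGA/TD-MRFIR implementation), the decimator can be written as M sub-filters running at the low rate, and its output matches naive filter-then-downsample:

      import numpy as np

      def polyphase_decimate(x, h, M):
          """Decimate-by-M FIR via polyphase decomposition: only every M-th output is computed."""
          L = int(np.ceil(len(h) / M)) * M
          h = np.pad(h, (0, L - len(h)))                  # pad taps to a multiple of M
          n_out = len(x) // M
          y = np.zeros(n_out)
          for p in range(M):
              hp = h[p::M]                                # sub-filter p: h[p], h[p+M], ...
              xp = np.concatenate((np.zeros(p), x))[::M]  # x[mM - p], zeros for negative indices
              y += np.convolve(xp, hp)[:n_out]
          return y

      rng = np.random.default_rng(0)
      x = rng.normal(size=1000)
      h = np.hanning(48)
      h /= h.sum()
      M = 4
      naive = np.convolve(x, h)[::M][:len(x) // M]        # full-rate FIR, then downsample
      assert np.allclose(polyphase_decimate(x, h, M), naive)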

  1. Optimization of conditions for the single step IMAC purification of miraculin from Synsepalum dulcificum.

    PubMed

    He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B

    2015-08-15

    In this study, the methods for extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate buffer saline, Tris-HCl and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH and imidazole concentration in elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5M NaCl as compared to Tris-HCl and phosphate buffer saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7 and the use of 300 mM imidazole as elution buffer, which gave the overall yield of 80.3% and purity of 97.5%. IMAC with nickel-NTA was successfully used as a single step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715

  2. Development of a Transcatheter Tricuspid Valve Prosthesis Through Steps of Iterative Optimization and Finite Element Analysis.

    PubMed

    Pott, Desiree; Kütting, Maximilian; Zhong, Zhaoyang; Amerini, Andrea; Spillner, Jan; Autschbach, Rüdiger; Steinseifer, Ulrich

    2015-10-01

    The development of a transcatheter tricuspid valve prosthesis for the treatment of tricuspid regurgitation (TR) is presented. The design process involves an iterative development method based on computed tomography data and different steps of finite element analysis (FEA). The enhanced design consists of two self-expandable stents: one is placed inside the superior vena cava (SVC) for primary device anchoring, and the second lies inside the tricuspid valve annulus (TVA). Both stents are connected by flexible connecting struts (CS) to anchor the TVA-stent in the orthotopic position. The iterative development method includes the expansion and crimping of the stents and CS with FEA. Leaflet performance and leaflet-stent interaction were studied by applying the physiologic pressure cycle of the right heart onto the leaflet surfaces. A previously implemented nitinol material model and a new porcine pericardium material model derived from uniaxial tensile tests were used. Maximum strains/stresses were approximately 6.8% for the nitinol parts and 2.9 MPa for the leaflets. Stent displacement because of leaflet movement was approximately 1.8 mm at the commissures, and the coaptation height was 1.6-3 mm. This led to an overall good performance of the prosthesis. An anatomic study showed a good anatomic fit of the device inside the human right heart. PMID:26378868

  3. Optimal optical filters of fluorescence excitation and emission for poultry fecal detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Purpose: An analytic method to design excitation and emission filters of a multispectral fluorescence imaging system is proposed and was demonstrated in an application to poultry fecal inspection. Methods: A mathematical model of a multispectral imaging system is proposed and its system parameters, ...

  4. Rod-filter-field optimization of the J-PARC RF-driven H⁻ ion source

    SciTech Connect

    Ueno, A. Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-08

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H⁻ ion beam of 60 mA within normalized emittances of 1.5 π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H⁻ ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for the RF-driven H⁻ ion source with the internal antenna, the procedure to optimize it has not been established. In order to optimize the RFF and establish the procedure, the beam performances of the J-PARC source with various types of rod-filter magnets (RFMs) were measured. By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H⁻ ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the temperature (T_PE) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. A 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation of the PE from several tens of minutes to several minutes for T_PE around 60°C.

  5. Design of FIR digital filters for pulse shaping and channel equalization using time-domain optimization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Vaughn, G. L.

    1974-01-01

    Three algorithms are developed for designing finite impulse response digital filters to be used for pulse shaping and channel equalization. The first is the Minimax algorithm which uses linear programming to design a frequency-sampling filter with a pulse shape that approximates the specification in a minimax sense. Design examples are included which accurately approximate a specified impulse response with a maximum error of 0.03 using only six resonators. The second algorithm is an extension of the Minimax algorithm to design preset equalizers for channels with known impulse responses. Both transversal and frequency-sampling equalizer structures are designed to produce a minimax approximation of a specified channel output waveform. Examples of these designs are compared as to the accuracy of the approximation, the resultant intersymbol interference (ISI), and the required transmitted energy. While the transversal designs are slightly more accurate, the frequency-sampling designs using six resonators have smaller ISI and energy values.

  6. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy

  7. Development of a Design Tool for Flow Rate Optimization in the Tata Swach Water Filter

    E-print Network

    Ricks, Sean T.

    When developing a first-generation product, an iterative approach often yields the shortest time-to-market. In order to optimize its performance, however, a fundamental understanding of the theory governing its operation ...

  8. Optimizing flow rate and bacterial removal performance of ceramic pot filters in Tamale, Ghana

    E-print Network

    Zhang, Yiyue, S.M. Massachusetts Institute of Technology

    2015-01-01

    Pure Home Water (PHW) is an organization that seeks to improve the drinking water quality for those who do not have access to clean water in Northern Ghana. This study focuses on the further optimization of ceramic pot ...

  9. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  10. A K edge filter technique for optimization of the coherent-to-Compton scatter ratio method.

    PubMed

    Harding, G; Armstrong, R; McDaid, S; Cooper, M J

    1995-12-01

    The ratio method involves forming the ratio of the elastic to inelastic x-ray scatter signals from a localized region of a scattering medium to determine its mean atomic number. An analysis is presented of two major error sources influencing the ratio method: firstly statistical (photon) noise and secondly multiple scattering and self-attenuation of the primary and scatter radiations in the medium. It is shown that a forward scattering geometry minimizes errors of both types for substances composed of elements with low and medium atomic number. However, owing to the small energy separation (approximately 100 eV) of coherent and Compton scatter for this geometry, they cannot be distinguished directly with semiconductor (e.g., Ge) detectors. A novel K edge filter technique is described which permits separation of the elastic and Compton signals in the forward-scatter geometry. The feasibility of this method is demonstrated by experimental results obtained with Ta fluorescence radiation provided by a fluorescent x-ray source filtered with an Er foil. The extension of this technique to the "in vivo" measurement of low momentum transfer inelastic scattering from biological tissues, possibly providing useful diagnostic information, is briefly discussed. PMID:8746705

  11. EOP prediction using least square fitting and autoregressive filter over optimized data intervals

    NASA Astrophysics Data System (ADS)

    Xu, XueQing; Zhou, YongHong

    2015-11-01

    This study first employs base sequences of different lengths in 1-90 day predictions of EOP (UT1-UTC and polar motion) by the combined method of least squares and autoregressive modeling, and finds the base sequence giving the best result for each prediction span, which we call "predictions over optimized data intervals". Compared to EOP predictions with fixed base data intervals, the "predictions over optimized data intervals" perform better for the prediction of UT1-UTC and show a significant improvement for the prediction of polar motion, particularly promoting our competitive level in the international Earth Orientation Parameters Combination of Prediction Pilot Project.
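
    A bare-bones version of the LS+AR idea is sketched below (assumed design terms, AR order and base interval, not the authors' configuration): fit a trend-plus-periodic model by least squares over the chosen base sequence, fit an AR model to the residuals via Yule-Walker, and extrapolate both.

      import numpy as np

      def ls_ar_predict(y, t, horizon, periods=(365.24, 182.62), ar_order=20):
          """Least-squares trend + seasonal fit plus AR(ar_order) forecast of the residuals.
          Assumes uniformly spaced samples (e.g. daily values)."""
          def design(tt):
              cols = [np.ones_like(tt), tt]
              for P in periods:
                  cols += [np.sin(2 * np.pi * tt / P), np.cos(2 * np.pi * tt / P)]
              return np.column_stack(cols)

          coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
          resid = y - design(t) @ coef

          # Yule-Walker estimate of the AR coefficients from the residual autocovariance.
          n = len(resid)
          r = np.array([np.dot(resid[:n - k], resid[k:]) / n for k in range(ar_order + 1)])
          R = np.array([[r[abs(i - j)] for j in range(ar_order)] for i in range(ar_order)])
          phi = np.linalg.solve(R, r[1:ar_order + 1])

          ext = list(resid)
          for _ in range(horizon):
              ext.append(np.dot(phi, ext[-1:-ar_order - 1:-1]))   # AR extrapolation of residuals
          t_f = t[-1] + np.arange(1, horizon + 1) * (t[1] - t[0])
          return design(t_f) @ coef + np.array(ext[n:])

      # Hypothetical daily UT1-UTC-like series: drift + annual term + slowly varying noise.
      t = np.arange(2000.0)
      rng = np.random.default_rng(2)
      y = 5e-4 * t + 0.02 * np.sin(2 * np.pi * t / 365.24) + np.cumsum(rng.normal(0.0, 1e-4, t.size))
      forecast = ls_ar_predict(y, t, horizon=90)                   # 1-90 day prediction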

  12. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Beal, R. C.; Tilley, D. G.

    1981-01-01

    The impulse response of the SAR system is not a delta function and the spectra represent the product of the underlying image spectrum with the transform of the impulse response which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction of the effects of speckle is effected by the subtraction of a bias from the spectrum.
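
    The trend-removal step described can be sketched as follows (hypothetical array names, plus the simplifying assumption of a roughly isotropic system response so that a 1-D fit can be applied radially; this is not the SEASAT-specific processing): fit a low-order polynomial to the smoothed spectrum sampled normal to the dominant wave direction, then divide the whole spectrum by that trend.

      import numpy as np

      def correct_spectrum(spectrum, profile_k, profile_val, order=4):
          """Fit the impulse-response trend along a cut normal to the wave direction and divide it out."""
          coeffs = np.polyfit(profile_k, profile_val, order)
          ny, nx = spectrum.shape
          ky = np.fft.fftshift(np.fft.fftfreq(ny))
          kx = np.fft.fftshift(np.fft.fftfreq(nx))
          k = np.hypot(*np.meshgrid(kx, ky))               # radial frequency of each spectral bin
          trend = np.polyval(coeffs, k)
          return spectrum / np.clip(trend, 1e-12, None)    # guard against non-positive trend values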

  13. Robustness issues in Kalman filtering

    E-print Network

    Ruckdeschel, Peter

    Robustness issues in Kalman filtering revisited (Peter Ruckdeschel, Fraunhofer ITWM). Classical method: the Kalman filter addresses the filtering problem E|x_t - f_t(y_{1:t})|^2 = min over f_t, with y_{1:t} = (y_1, ..., y_t); among linear filters, the Kalman filter is the optimal solution (Kalman[/Bucy], 1960).

  14. Shuttle filter study. Volume 1: Characterization and optimization of filtration devices

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plane for space shuttle fluid systems.

  15. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
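
    To illustrate the trade-off being optimized (entirely synthetic numbers and simplified cost assumptions, not the Harbin data): sweep candidate FPG cut-offs and report, for each, the sensitivity of the two-step strategy and the cost per case identified.

      import numpy as np

      def two_step_tradeoff(fpg, has_diabetes, cutoffs, cost_fpg=5.0, cost_ogtt=50.0):
          """Everyone gets an FPG test; only those at/above the cut-off get an OGTT (gold standard).
          Returns (sensitivity, cost per detected case) for each cut-off."""
          fpg = np.asarray(fpg, dtype=float)
          has_diabetes = np.asarray(has_diabetes, dtype=bool)
          results = []
          for c in cutoffs:
              referred = fpg >= c
              detected = np.sum(referred & has_diabetes)
              sensitivity = detected / max(has_diabetes.sum(), 1)
              total_cost = cost_fpg * fpg.size + cost_ogtt * referred.sum()
              results.append((sensitivity, total_cost / max(detected, 1)))
          return results

      # Synthetic population: people with diabetes tend to have higher FPG.
      rng = np.random.default_rng(3)
      diab = rng.random(10000) < 0.12
      fpg = np.where(diab, rng.normal(7.0, 1.2, 10000), rng.normal(5.1, 0.6, 10000))
      for c, (sens, cost) in zip([5.0, 5.6, 6.1], two_step_tradeoff(fpg, diab, [5.0, 5.6, 6.1])):
          print(f"cut-off {c} mmol/L: sensitivity {sens:.2f}, cost per case {cost:.0f}")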

  16. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  17. Optimal cut-off points for two-step strategy in screening of undiagnosed diabetes: a population-based study in China.

    PubMed

    Ye, Zhen; Cong, Liming; Ding, Gangqiang; Yu, Min; Zhang, Xinwei; Hu, Ruying; Wu, Jianjun; Fang, Le; Wang, Hao; Zhang, Jie; He, Qingfang; Su, Danting; Zhao, Ming; Wang, Lixin; Gong, Weiwei; Xiao, Yuanyuan; Liang, Mingbin; Pan, Jin

    2014-01-01

    To identify optimal cut-off points of fasting plasma glucose for two-step strategy in screening of undiagnosed diabetes in Chinese people, data were selected from two cross-sectional studies of Metabolic Syndrome in Zhejiang Province of China, Zhejiang Statistical Yearbook (2010), and published literatures. Two-step strategy was used among 17437 subjects sampled from population to screen undiagnosed diabetes. Effectiveness (proportion of cases identified), costs (including medical and non-medical costs), and efficiency (cost per case identified) of these different two-step screening strategies were evaluated. This study found the sensitivities of all the two-step screening strategies with further Oral Glucose Tolerance Test (OGTT) at different Fasting Plasma Glucose (FPG) cut-off points from 5.0 to 7.0 (mmol/L) ranged from 0.66 to 0.91. For the FPG point of 5.0 mmol/L, 91 percent of undiagnosed cases were identified. The total cost of detecting one undiagnosed diabetes case ranged from 547.1 to 1294.5 CNY/case, and the strategy with FPG at cut-off point of 6.1 (mmol/L) resulted in the least cost. Considering both sensitivity and cost of screening diabetes, FPG cut-off point at 5.4 mmol/L was optimized for the two-step strategy. In conclusion, different optimal cut-off points of FPG for two-step strategy in screening of undiagnosed diabetes should be used for different screening purposes. PMID:24609110

  18. Variational Particle Filter for Imperfect Models

    NASA Astrophysics Data System (ADS)

    Baehr, C.

    2012-12-01

    Whereas classical data processing techniques work with perfect models, the geophysical sciences have to deal with imperfect models with spatially structured errors. For the perfect-model case, in terms of mean-field Markovian processes, the optimal filter is known: the Kalman estimator is the answer to the linear-Gaussian problem, and in the general case particle approximations are the empirical solutions to the optimal estimator. We present another way to decompose the Bayes rule, using a one-step-ahead observation. This method is well adapted to strongly nonlinear or chaotic systems. Then, in order to deal with imperfect models, we suggest learning the (large-scale) model errors using a variational correction before the resampling step of the nonlinear filtering. This procedure replaces the a priori Markovian transition by a kernel conditioned on the observations. This supplementary step may be read as a variational particle approximation. For the numerical applications, we show the impact of our method, first on a simple marked Poisson process with Gaussian observation noise (the time-exponential jumps are considered as model errors) and then on a 2D shallow-water experiment in a closed basin, with falling droplets as model errors. [Figure captions: Marked Poisson process with Gaussian observation noise filtered by four methods (classical Kalman filter, genetic particle filter, trajectorial particle filter, and Kalman-particle filter), all using only 10 particles; 2D shallow-water simulation with droplet errors, results of a classical 3DVAR and of the VarPF (10 particles).]
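    To make the general mechanism concrete, a minimal bootstrap particle filter with an optional correction applied to the particles before resampling is sketched below. This is a generic illustration only, not the VarPF of the abstract; the propagate, obs_lik, and correct callbacks are assumptions to be supplied by the user.

```python
# Minimal bootstrap particle filter with an optional correction before resampling.
# Illustrative sketch: the "correct" callback stands in for a correction applied
# to the particles before resampling; model, obs_lik and correct are assumptions.
import numpy as np

def particle_filter(y, x0, propagate, obs_lik, correct=None, rng=None):
    """y: (T, d_obs) observations; x0: (N, d_x) initial particles.
    propagate(x, rng) -> moved particles; obs_lik(y_t, x) -> likelihood weights;
    correct(y_t, x) -> particles nudged toward the observation (optional)."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    n = x.shape[0]
    means = []
    for y_t in y:
        x = propagate(x, rng)                  # a priori Markovian transition
        if correct is not None:                # correction step before resampling
            x = correct(y_t, x)
        w = obs_lik(y_t, x)                    # weight by observation likelihood
        w = w / w.sum()
        means.append(w @ x)                    # weighted posterior mean estimate
        idx = rng.choice(n, size=n, p=w)       # multinomial resampling
        x = x[idx]
    return np.array(means)
```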

  19. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  20. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
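    As a small illustration of the sigma-point idea mentioned above, the standard unscented transform is sketched below; it is generic background rather than any specific attitude filter from the survey, and the scaling parameters alpha, beta, and kappa are conventional assumptions.

```python
# Illustrative sketch of the sigma-point idea (the unscented transform), not any
# specific attitude filter from the survey. alpha/beta/kappa and f are assumptions.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)             # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])                 # propagate each point
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d                     # propagated covariance
    return y_mean, y_cov
```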

  1. An optimal modeling of multidimensional wave digital filtering network for free vibration analysis of symmetrically laminated composite FSDT plates

    NASA Astrophysics Data System (ADS)

    Tseng, Chien-Hsun

    2015-02-01

    The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling-wave formulation of lumped electrical elements, is successfully applied to the study of dynamic responses of symmetrically laminated composite plates based on the first-order shear deformation theory. The approach, applied for the first time to this laminate mechanics problem, integrates principles of modeling and simulation, circuit theory, and MD digital signal processing, and offers a number of attractive features. In particular, the conservation of passivity gives rise to a nonlinear programming problem (NLP) governing the numerical stability of the MD discrete system. Adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly exploring the solution spaces of NLP models, numerical stability of the MDWDF network is ensured at all times by satisfying the Courant-Friedrichs-Lewy stability criterion with the least restriction. In particular, the optimum of the NLP leads to the optimality of the network in effectively and accurately predicting the desired fundamental frequency, and gives insight into the robustness of the network through the distribution of system energies. To further explore the application of the optimum network, additional numerical examples are presented to achieve a qualitative understanding of the behavior of the laminate system. These investigate various effects of stacking sequence, stiffness and span-to-thickness ratios, mode shapes, and boundary conditions. Results are scrupulously validated by cross-referencing with earlier published works, which shows that the present method is in excellent agreement with other numerical and analytical methods.

  2. On the difficulty to optimally implement the Ensemble Kalman filter: An experiment based on many hydrological models and catchments

    NASA Astrophysics Data System (ADS)

    Thiboult, A.; Anctil, F.

    2015-10-01

    Forecast reliability and accuracy are prerequisites for successful hydrological applications. This aim may be attained by using data assimilation techniques such as the popular Ensemble Kalman filter (EnKF). Despite its recognized capacity to enhance forecasting by creating a new set of initial conditions, implementation tests have mostly been carried out with a single model and few catchments, leading to case-specific conclusions. This paper performs extensive testing to assess ensemble bias and reliability on 20 conceptual lumped models and 38 catchments in the Province of Québec with perfect meteorological forecast forcing. The study confirms that the EnKF is a powerful tool for short-range forecasting but also that it requires a more subtle setting than is frequently recommended. The success of the updating procedure depends to a great extent on the specification of the hyper-parameters. In the implementation of the EnKF, the identification of the hyper-parameters is very unintuitive if the model error is not explicitly accounted for, and best estimates of forcing and observation error lead to overconfident forecasts. It is shown that performance is also related to the choice of updated state variables, and that not all state variables should be systematically updated. Additionally, the improvement over the open-loop scheme depends on the watershed and the hydrological model structure, as some models exhibit poor compatibility with EnKF updating. Thus, it is not possible to identify a single ideal implementation in detail; conclusions drawn from a unique event, catchment, or model are likely to be misleading, since transferring hyper-parameters from one case to another may be hazardous. Finally, achieving reliability and low bias jointly is a daunting challenge, as the optimization of one score comes at the cost of the other.
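    For orientation, a minimal stochastic EnKF analysis step (perturbed observations) is sketched below. It is not the hydrological setup of the paper; the linear observation operator H and the observation-error covariance R are assumptions, and the hyper-parameters discussed above (inflation, localization, error specification) are deliberately omitted.

```python
# Minimal stochastic EnKF analysis step (perturbed observations), shown only to
# illustrate the update the study tunes; not the authors' code. H and R are assumptions.
import numpy as np

def enkf_update(X, y, H, R, rng=None):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector."""
    rng = rng or np.random.default_rng(0)
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                    # sample forecast covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, n_ens).T
    return X + K @ (Y - H @ X)                   # analysis (updated) ensemble
```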

  3. The Mauna Kea Observatories Near-Infrared Filter Set. I: Defining Optimal 1-5 μm Bandpasses

    E-print Network

    D. A. Simons; A. T. Tokunaga

    2001-11-07

    A new MKO-NIR infrared filter set is described, including the techniques and considerations involved in designing a new set of bandpasses that are useful at both mid- and high-altitude sites. These filters offer improved photometric linearity and in many cases reduced background, as well as preserving good throughput within the JHKLM atmospheric windows. MKO-NIR filters have already been deployed with a number of instruments around the world as part of a filter consortium purchase to reduce the unit cost of filters. Through this effort we hope to establish, for the first time, a single standard set of infrared filters at as many observatories as possible.

  4. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.

  5. Filter and method of fabricating

    DOEpatents

    Janney, Mark A.

    2006-02-14

    A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.

  6. Two-step biodiesel production from Calophyllum inophyllum oil: optimization of modified β-zeolite catalyzed pre-treatment.

    PubMed

    SathyaSelvabala, Vasanthakumar; Selvaraj, Dinesh Kirupha; Kalimuthu, Jalagandeeswaran; Periyaraman, Premkumar Manickam; Subramanian, Sivanesan

    2011-01-01

    In this study, a two-step process was developed to produce biodiesel from Calophyllum inophyllum oil. Pre-treatment, an acid-catalyzed esterification over phosphoric acid-modified β-zeolite, preceded transesterification, which was carried out using the conventional alkali catalyst potassium hydroxide (KOH). The objective of this study was to investigate the relationship between the reaction temperature, reaction time, and methanol-to-oil molar ratio in the pre-treatment step. Central Composite Design (CCD) and Response Surface Methodology (RSM) were utilized to determine the best operating conditions for the pre-treatment step. Biodiesel produced by this process was tested for its fuel properties. PMID:20833536

  7. Optimization of the Cathode Catalyst Layer Composition of a PEM Fuel Cell Using a Novel 2-Step Preparation Method

    E-print Network

    Friedmann, Roland

    2009-03-05

    mixture of Nafion® ionomer and catalyst particles was annealed to form ionomer coated catalyst particles. In the second step, these ionomer coated catalyst particles were mixed with nano-sized Teflon® particles and additional Nafion® ionomer, which...

  8. A Transmission-Filter Coronagraph: Design and Test

    E-print Network

    Ren, Deqing; Zhu, Yongtian

    2015-01-01

    We propose a transmission-filter coronagraph for direct imaging of Jupiter-like exoplanets with ground-based telescopes. The coronagraph is based on a transmission filter that consists of a finite number of transmission steps. A discrete optimization algorithm is proposed for the design of the transmission filter, which is optimized for ground-based telescopes with central obstructions and spider structures. We discuss the algorithm that is applied for our coronagraph design. To demonstrate the performance of the coronagraph, a filter was manufactured and laboratory tests were conducted. The test results show that the coronagraph can achieve a high contrast of 10^-6.5 at an inner working angle of 5λ/D, which indicates that our coronagraph can be immediately used for the direct imaging of Jupiter-like exoplanets with ground-based telescopes.

  9. [Reduction of livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA) in the exhaust air of two piggeries by a bio-trickling filter and a biological three-step air cleaning system].

    PubMed

    Clauss, Marcus; Schulz, Jochen; Stratmann-Selke, Janin; Decius, Maja; Hartung, Jörg

    2013-01-01

    "Livestock-associated" Methicillin-resistent Staphylococcus aureus (LA-MRSA) are frequently found in the air of piggeries, are emitted into the ambient air of the piggeries and may also drift into residential areas or surrounding animal husbandries.. In order to reduce emissions from animal houses such as odour, gases and dust different biological air cleaning systems are commercially available. In this study the retention efficiencies for the culturable LA-MRSA of a bio-trickling filter and a combined three step system, both installed at two different piggeries, were investigated. Raw gas concentrations for LA-MRSA of 2.1 x 10(2) cfu/m3 (biotrickling filter) and 3.9 x 10(2) cfu/m3 (three step system) were found. The clean gas concentrations were in each case approximately one power of ten lower. Both systems were able to reduce the number of investigated bacteria in the air of piggeries on average about 90%. The investigated systems can contribute to protect nearby residents. However, considerable fluctuations of the emissions can occur. PMID:23540196

  10. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
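    As a hedged illustration of filter design by linear programming, the sketch below solves a generic minimax frequency-domain approximation problem for a linear-phase FIR filter; the paper's formulation is in the time domain, so this is an analogous example rather than its method, and the desired response D, frequency grid, and use of scipy.optimize.linprog are assumptions.

```python
# Generic minimax FIR design by linear programming (not the paper's time-domain method).
import numpy as np
from scipy.optimize import linprog

def lp_fir_design(n_taps, omegas, D):
    """Symmetric FIR with odd n_taps: minimize max_i |A(w_i) - D_i| over the grid."""
    m = (n_taps - 1) // 2
    # Zero-phase amplitude A(w) = a0 + sum_{k=1..m} 2*a_k*cos(k*w); unknowns [a0..am, t].
    C = np.column_stack([np.ones_like(omegas)] +
                        [2.0 * np.cos(k * omegas) for k in range(1, m + 1)])
    c = np.zeros(m + 2)
    c[-1] = 1.0                                     # objective: minimize t
    ones = np.ones((len(omegas), 1))
    A_ub = np.vstack([np.hstack([C, -ones]),        #  C a - t <= D
                      np.hstack([-C, -ones])])      # -C a - t <= -D
    b_ub = np.concatenate([D, -D])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 2))
    a = res.x[:-1]
    return np.concatenate([a[:0:-1], a])            # full symmetric impulse response

# Example: lowpass design with the transition band excluded from the grid.
wp = np.linspace(0, 0.25 * np.pi, 40)               # passband frequencies
ws = np.linspace(0.40 * np.pi, np.pi, 60)           # stopband frequencies
h = lp_fir_design(21, np.concatenate([wp, ws]),
                  np.concatenate([np.ones_like(wp), np.zeros_like(ws)]))
```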

  11. SU-E-T-23: A Novel Two-Step Optimization Scheme for Tandem and Ovoid (T and O) HDR Brachytherapy Treatment for Locally Advanced Cervical Cancer

    SciTech Connect

    Sharma, M; Todor, D; Fields, E

    2014-06-01

    Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan to the existent physician delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.

  12. Multi-dimensional tensor-based adaptive filter (TBAF) for low dose x-ray CT

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Lebedev, Sergej; Sawall, Stefan; Kachelrieß, Marc

    2015-03-01

    Edge-preserving adaptive filtering within CT image reconstruction is a powerful method to reduce image noise and hence patient dose. However, highly sophisticated adaptive filters typically comprise many parameters which must be adjusted carefully in order to obtain optimal filter performance and to avoid artifacts caused by the filter. In this work we applied an anisotropic tensor-based adaptive image filter (TBAF) to CT image reconstruction, both as an image-based post-processing step and as a regularization step within an iterative reconstruction. The TBAF is a generalization of the filter of Ref. 1. Provided that the image noise (i.e., the variance) of the original image is known for each voxel, we adjust all filter parameters automatically. Hence, the TBAF can be applied to any individual CT dataset without user interaction. This is a crucial feature for a possible application in clinical routine. The TBAF is compared to a well-established adaptive bilateral filter using the same noise adjustment. Although the differences between the two filters are subtle, edges and local structures emerge more clearly in the TBAF-filtered images while anatomical details are less affected than by the bilateral filter.
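    For context, the classical bilateral filter that the TBAF is compared against can be written in its standard textbook form as below; this is general background, not a formula taken from the paper, and σ_s and σ_r denote the spatial and range bandwidths.

```latex
% Standard bilateral filter (textbook form), shown only for comparison with the
% tensor-based adaptive filter discussed above.
\[
\hat{I}(p) = \frac{1}{W_p}\sum_{q\in\Omega}
  \exp\!\Big(-\tfrac{\|p-q\|^{2}}{2\sigma_s^{2}}\Big)\,
  \exp\!\Big(-\tfrac{(I(p)-I(q))^{2}}{2\sigma_r^{2}}\Big)\, I(q),
\qquad
W_p=\sum_{q\in\Omega}
  \exp\!\Big(-\tfrac{\|p-q\|^{2}}{2\sigma_s^{2}}\Big)\,
  \exp\!\Big(-\tfrac{(I(p)-I(q))^{2}}{2\sigma_r^{2}}\Big).
\]
```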

  13. Unconditionally energy stable time stepping scheme for Cahn-Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    NASA Astrophysics Data System (ADS)

    Tavakoli, Rouhollah

    2016-01-01

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed based on a combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
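    For orientation, a semi-implicit (convex-splitting) step in the spirit of Eyre's scheme for a Cahn-Hilliard-type equation is written out below; the precise splitting used for the Cahn-Morral system in the paper may differ, so the form shown is an assumption for illustration only.

```latex
% Sketch of a convex-splitting time step in the spirit of Eyre's scheme for a
% Cahn-Hilliard-type equation; the exact splitting used in the paper may differ.
\[
\frac{c^{n+1}-c^{n}}{\Delta t}
  = \nabla\cdot\Big( M \,\nabla\big[\,
      \underbrace{f'_{c}(c^{n+1})}_{\text{contractive (implicit)}}
      +\underbrace{f'_{e}(c^{n})}_{\text{expansive (explicit)}}
      -\epsilon^{2}\Delta c^{n+1}\,\big]\Big),
\qquad f = f_{c}+f_{e}.
\]
```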

  14. Evaluation and optimization of a reusable hollow fiber ultrafilter as a first step in concentrating Cryptosporidium parvum oocysts from water.

    PubMed

    Kuhn, R C; Oshima, K H

    2001-08-01

    A small-scale hollow fiber ultrafiltration system (50,000 MWCO) was used to characterize the filtration process and identify conditions that optimize the recovery of Cryptosporidium parvum oocysts from 2 L samples of water. Seeded experiments were conducted using deionized water as well as four environmental water sources (tap, ground, Arkansas River, and Rio Grande River; 0-30.9 NTU). Optimal and consistent recovery of spiked oocysts was observed (68-81%) when the membrane was sanitized with a 10% sodium dodecyl sulfate (SDS) solution and then blocked with 5% fetal bovine serum (FBS). PMID:11456179

  15. Security: Step by Step

    ERIC Educational Resources Information Center

    Svetcov, Eric

    2005-01-01

    This article provides a list of the essential steps to keeping a school's or district's network safe and sound. It describes how to establish a security architecture and approach that will continually evolve as the threat environment changes over time. The article discusses the methodology for implementing this approach and then discusses the…

  16. The University of Arizona College of Medicine Optimal Aging Program: Stepping in the Shadows of Successful Aging

    ERIC Educational Resources Information Center

    Sikora, Stephanie

    2006-01-01

    The Optimal Aging Program (OAP) at the University of Arizona, College of Medicine is a longitudinal mentoring program that pairs students with older adults who are considered to be aging "successfully." This credit-bearing elective was initially established in 2001 through a grant from the John A. Hartford Foundation, and aims to expand the…

  17. Optimization, physicochemical characterization and in vivo assessment of spray dried emulsion: A step toward bioavailability augmentation and gastric toxicity minimization.

    PubMed

    Mehanna, Mohammed M; Alwattar, Jana K; Elmaradny, Hoda A

    2015-12-30

    The limited solubility of BCS class II drugs diminishes their dissolution and thus reduces their bioavailability. Our aim in this study was to develop and optimize a spray dried emulsion containing indomethacin as a model for class II drugs, a Labrasol®/Transuctol® mixture as the oily phase, and maltodextrin as a solid carrier. The optimization was carried out using a 2^3 full factorial design based on two independent variables, the percentage of carrier and the concentration of Poloxamer® 188. The effects of the studied parameters on the spray dried yield, loading efficiency, and in vitro release were thoroughly investigated. Furthermore, physicochemical characterization of the optimized formulation was performed. In vivo bioavailability, ulcerogenic capability, and histopathological features were assessed. The results obtained pointed out that the Poloxamer 188 concentration in the formulation was the predominant factor affecting the dissolution release, whereas the drug loading was driven by the carrier concentration added. Moreover, the yield decreased with increasing levels of both independent variables studied. The optimized formulation presented a complete release within two minutes, thus suggesting an immediate release pattern; the formulation was also revealed to consist of uniform spherical particles with an average size of 7.5 μm, entrapping the drug in its molecular state as demonstrated by the DSC and FTIR studies. The in vivo evaluation demonstrated a 10-fold enhancement in bioavailability of the optimized formulation, with absence of the ulcerogenic side effect compared to the marketed product. The results provided evidence for the significance of spray dried emulsions as a leading strategy for improving the solubility and enhancing the bioavailability of class II drugs. PMID:26561726

  18. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    SciTech Connect

    Kao, Jim . E-mail: kao@lanl.gov; Flicker, Dawn; Ide, Kayo; Ghil, Michael

    2006-05-20

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723.]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated accordingly, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters into an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
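    The parameter-augmentation idea described above can be sketched generically as follows; this is not the shock-wave code, and the model function f, measurement function h, their Jacobians, and the noise covariances Q and R are assumptions supplied by the user.

```python
# Generic EKF step on an augmented state z = [state; parameters]; the parameters
# are estimated by the same update as the state. Sketch only; f, h, Jacobians,
# Q and R are user-supplied assumptions.
import numpy as np

def ekf_step(z, P, y, f, F_jac, h, H_jac, Q, R):
    """z: augmented state vector, P: its covariance, y: a single measurement."""
    # Predict: the state block evolves through f; the parameter block follows a
    # random walk, which f should implement as an identity map on that block.
    z_pred = f(z)
    F = F_jac(z)
    P_pred = F @ P @ F.T + Q          # Q is the stochastic forcing on both blocks
    # Update with the measurement y.
    H = H_jac(z_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - h(z_pred))
    P_new = (np.eye(z.size) - K @ H) @ P_pred
    return z_new, P_new
```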

  19. Development of an optimal automatic control law and filter algorithm for steep glideslope capture and glideslope tracking

    NASA Technical Reports Server (NTRS)

    Halyo, N.

    1976-01-01

    A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant gain Kalman filter to process guidance information from the microwave landing system, and acceleration from body mounted accelerometer data. The filter outputs navigation data and wind velocity estimates which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.

  20. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method in which a restart strategy is also used. Moreover, the method we developed not only attains n-step quadratic convergence but also uses both function value and gradient information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
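    For context, a minimal classical PRP conjugate-gradient loop with an Armijo line search and a restart safeguard is sketched below; it illustrates the baseline method the paper extends and does not implement the article's new function-value-based formula.

```python
# Classical PRP+ conjugate gradient with Armijo backtracking; baseline sketch only.
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x = x0.copy()
    g = grad(x)
    d = -g                                         # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, c, rho = 1.0, 1e-4, 0.5                 # Armijo backtracking line search
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= rho
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)       # Polak-Ribiere-Polyak formula
        d = -g_new + max(beta, 0.0) * d            # PRP+ restart when beta < 0
        if g_new @ d >= 0:                         # safeguard: keep a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```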

  1. SVD-Based Optimal Filtering for Noise Reduction in Dual Microphone Hearing Aids: A Real Time Implementation and Perceptual Evaluation

    E-print Network

    Directional microphones in commercial hearing aids [6]-[8] or adaptive beamformers in the research area [9] [...] are being developed and implemented in hearing aids. Based on a single microphone signal, the separation [...]

  2. Synthesis and optimization of wide pore superficially porous particles by a one-step coating process for separation of proteins and monoclonal antibodies.

    PubMed

    Chen, Wu; Jiang, Kunqiang; Mack, Anne; Sachok, Bo; Zhu, Xin; Barber, William E; Wang, Xiaoli

    2015-10-01

    Superficially porous particles (SPPs) with pore sizes ranging from 90 Å to 120 Å have been a great success for the fast separation of small molecules compared to totally porous particles in recent years. However, for the separation of large biomolecules such as proteins, particles with large pore sizes (e.g., ≥300 Å) are needed to allow unrestricted diffusion inside the pores. One early example is the commercial wide-pore (300 Å) SPPs in 5 μm size introduced in 2001. More recently, wide-pore SPPs (200 Å and 400 Å) in smaller particle sizes (3.5-3.6 μm) have been developed to meet the increasing interest of biopharmaceutical companies in faster analysis of larger therapeutic molecules. The SPPs on the market are mostly synthesized by the laborious layer-by-layer (LBL) method. A one-step coating approach would be highly advantageous, offering potential benefits in process time, easier quality control, materials cost, and process simplicity for facile scale-up. A unique one-step coating process for the synthesis of SPPs, called the "coacervation method", was developed by Chen and Wei as an improved and optimized process, and has been successfully applied to the synthesis of a commercial product, Poroshell 120 particles, for small molecule separation. In this report, we describe the most recent development of the one-step coating coacervation method for the synthesis of a series of wide-pore SPPs of different particle size, pore size, and shell thickness. The one-step coating coacervation method was proven to be a universal method to synthesize SPPs of any particle size and pore size. The effects of pore size (300 Å vs. 450 Å), shell thickness (0.25 μm vs. 0.50 μm), and particle size (2.7 μm and 3.5 μm) on the separation of large proteins and intact and fragmented monoclonal antibodies (mAbs) were studied. Van Deemter studies using proteins were also conducted to compare the mass transfer properties of these particles. It was found that the larger pore size actually had more impact on the performance for mAbs than particle size and shell thickness. The SPPs with the larger 3.5 μm particle size and larger 450 Å pore size showed the best resolution of mAbs and the lowest back pressure. To the best of our knowledge, this is the largest pore size made on SPPs. These results led to an optimal particle design with a particle size of 3.5 μm, a thin shell of 0.25 μm, and a larger pore size of 450 Å. PMID:26342871

  3. Characterization and optimization of 2-step MOVPE growth for single-mode DFB or DBR laser diodes

    NASA Astrophysics Data System (ADS)

    Bugge, F.; Mogilatenko, A.; Zeimer, U.; Brox, O.; Neumann, W.; Erbert, G.; Weyers, M.

    2011-01-01

    We have studied the MOVPE regrowth of AlGaAs over a grating for GaAs-based laser diodes with an internal wavelength stabilisation. Growth temperature and aluminium concentration in the regrown layers considerably affect the oxygen incorporation. Structural characterisation by transmission electron microscopy of the grating after regrowth shows the formation of quaternary InGaAsP regions due to the diffusion of indium atoms from the top InGaP layer and As-P exchange processes during the heating-up procedure. Additionally, the growth over such gratings with different facets leads to self-organisation of the aluminium content in the regrown AlGaAs layer, resulting in an additional AlGaAs grating, which has to be taken into account for the estimation of the coupling coefficient. With optimized growth conditions complete distributed feedback laser structures have been grown for different emission wavelengths. At 1062 nm a very high single-frequency output power of nearly 400 mW with a slope efficiency of 0.95 W/A for a 4 μm ridge waveguide was obtained.

  4. Optimal State Estimation for Cavity Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Wieczorek, Witlef; Hofer, Sebastian G.; Hoelscher-Obermaier, Jason; Riedinger, Ralf; Hammerer, Klemens; Aspelmeyer, Markus

    2015-06-01

    We demonstrate optimal state estimation for a cavity optomechanical system through Kalman filtering. By taking into account nontrivial experimental noise sources, such as colored laser noise and spurious mechanical modes, we implement a realistic state-space model. This allows us to obtain the conditional system state, i.e., conditioned on previous measurements, with a minimal least-squares estimation error. We apply this method to estimate the mechanical state, as well as optomechanical correlations both in the weak and strong coupling regime. The application of the Kalman filter is an important next step for achieving real-time optimal (classical and quantum) control of cavity optomechanical systems.

  5. Optimal state estimation for cavity optomechanical systems

    E-print Network

    Witlef Wieczorek; Sebastian G. Hofer; Jason Hoelscher-Obermaier; Ralf Riedinger; Klemens Hammerer; Markus Aspelmeyer

    2015-06-10

    We demonstrate optimal state estimation for a cavity optomechanical system through Kalman filtering. By taking into account nontrivial experimental noise sources, such as colored laser noise and spurious mechanical modes, we implement a realistic state-space model. This allows us to obtain the conditional system state, i.e., conditioned on previous measurements, with minimal least-square estimation error. We apply this method for estimating the mechanical state, as well as optomechanical correlations both in the weak and strong coupling regime. The application of the Kalman filter is an important next step for achieving real-time optimal (classical and quantum) control of cavity optomechanical systems.

  6. Relevance of a full-length genomic RNA standard and a thermal-shock step for optimal hepatitis delta virus quantification.

    PubMed

    Homs, Maria; Giersch, Katja; Blasi, Maria; Lütgehetmann, Marc; Buti, Maria; Esteban, Rafael; Dandri, Maura; Rodriguez-Frias, Francisco

    2014-09-01

    Hepatitis D virus (HDV) is a defective RNA virus that requires the surface antigens of hepatitis B virus (HBV) (HBsAg) for viral assembly and replication. Several commercial and in-house techniques have been described for HDV RNA quantification, but the methodologies differ widely, making a comparison of the results between studies difficult. In this study, a full-length genomic RNA standard was developed and used for HDV quantification by two different real-time PCR approaches (fluorescence resonance energy transfer [FRET] and TaqMan probes). Three experiments were performed. First, the stability of the standard was determined by analyzing the effect of thawing and freezing. Second, because of the strong internal base pairing of the HDV genome, which leads to a rod-like structure, the effect of intense thermal shock (95°C for 10 min and immediate cooling to -80°C) was tested to confirm the importance of this treatment in the reverse transcription step. Lastly, to investigate the differences between the DNA and RNA standards, the two types were quantified in parallel with the following results: the full-length genomic RNA standard was stable and reliably mimicked the behavior of HDV-RNA-positive samples, thermal shock enhanced the sensitivity of HDV RNA quantification, and the DNA standard underquantified the HDV RNA standard. These findings indicate the importance of using complete full-length genomic RNA and a strong thermal-shock step for optimal HDV RNA quantification. PMID:24989607

  7. Next Step for STEP

    SciTech Connect

    Wood, Claire; Bremner, Brenda

    2013-08-09

    The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.

  8. High accuracy motor controller for positioning optical filters in the CLAES Spectrometer

    NASA Astrophysics Data System (ADS)

    Thatcher, John B.

    The Etalon Drive Motor (EDM), a precision etalon control system designed for accurate positioning of etalon filters in the IR spectrometer of the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment, is described. The EDM includes a brushless dc torque motor, which has infinite resolution for setting an etalon filter to any desired angle; a four-filter etalon wheel; and an electromechanical resolver for angle information. An 18-bit control loop provides high accuracy, resolution, and stability. Dynamic computer interaction allows the user to optimize the step response. A block diagram of the motor controller is presented along with a schematic of the digital/analog converter circuit.

  9. Development and optimization of an analytical method for the determination of UV filters in suntan lotions based on microemulsion electrokinetic chromatography.

    PubMed

    Klampfl, Christian W; Leitner, Tanja; Hilder, Emily F

    2002-08-01

    Microemulsion electrokinetic chromatography (MEEKC) has been applied to the separation of some UV filters (Eusolex 4360, Eusolex 6300, Eusolex OCR, Eusolex 2292, Eusolex 6007, Eusolex 9020, Eusolex HMS, Eusolex OS, and Eusolex 232) commonly found in suntan lotions. The composition of the microemulsion employed was optimized with respect to the best possible separation of the selected analytes using artificial neural networks (ANNs). Two parameters, namely the composition of the mixed surfactant system comprising the anionic sodium dodecyl sulfate (SDS) and neutral Brij 35, and the amount of organic modifier (2-propanol) present in the aqueous phase of the microemulsion, were modeled. Using an optimized MEEKC buffer consisting of 2.25 g SDS, 0.75 g Brij 35, 6.6 g 1-butanol, 0.8 g n-octane, 17.5 g 2-propanol, and 72.1 g of 10 mM borate buffer (pH 9.2), eight target analytes could be separated in under 25 min employing a diode-array detector to segregate the overlapping signals obtained for Eusolex 9020 and Eusolex HMS. Detection limits from 0.8 to 6.0 μg/mL were obtained and the calibration plots were linear over at least one order of magnitude. The optimized method could be applied to the determination of Eusolex 6300 and Eusolex 9020 in a commercial suntan lotion. PMID:12210198

  10. Computation of maximum gust loads in nonlinear aircraft using a new method based on the matched filter approach and numerical optimization

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Heeg, Jennifer; Perry, Boyd, III

    1990-01-01

    Time-correlated gust loads are time histories of two or more load quantities due to the same disturbance time history. Time correlation provides knowledge of the value (magnitude and sign) of one load when another is maximum. At least two analysis methods have been identified that are capable of computing maximized time-correlated gust loads for linear aircraft. Both methods solve for the unit-energy gust profile (gust velocity as a function of time) that produces the maximum load at a given location on a linear airplane. Time-correlated gust loads are obtained by re-applying this gust profile to the airplane and computing multiple simultaneous load responses. Such time histories are physically realizable and may be applied to aircraft structures. Within the past several years there has been much interest in obtaining a practical analysis method which is capable of solving the analogous problem for nonlinear aircraft. Such an analysis method has been the focus of an international committee of gust loads specialists formed by the U.S. Federal Aviation Administration and was the topic of a panel discussion at the Gust and Buffet Loads session at the 1989 SDM Conference in Mobile, Alabama. The kinds of nonlinearities common on modern transport aircraft are indicated. The Statistical Discrete Gust method is capable of being, but so far has not been, applied to nonlinear aircraft. To make the method practical for nonlinear applications, a search procedure is essential. Another method is based on Matched Filter Theory and, in its current form, is applicable to linear systems only. The purpose here is to present the status of an attempt to extend the matched filter approach to nonlinear systems. The extension uses Matched Filter Theory as a starting point and then employs a constrained optimization algorithm to attack the nonlinear problem.
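    The linear matched-filter idea that serves as the starting point can be illustrated as follows: for a linear system, the unit-energy gust profile maximizing a given load is the time-reversed, normalized impulse response from gust velocity to that load (a consequence of the Cauchy-Schwarz inequality). The sketch below shows only this linear step under assumed sampled data; the constrained-optimization extension to nonlinear aircraft discussed above is not shown.

```python
# Worst-case unit-energy gust for a LINEAR load response; illustration only.
# h is an assumed sampled impulse response of the load to a unit gust-velocity impulse.
import numpy as np

def worst_case_gust(h, dt):
    w = h[::-1].copy()                       # time-reversed impulse response
    w /= np.sqrt(np.sum(w**2) * dt)          # normalize to unit energy
    peak_load = np.sum(h * w[::-1]) * dt     # maximized load at the design time
    return w, peak_load
```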

  11. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  12. Optimized Waveguide E-Plane Metal Insert Filters

    E-print Network

    Bornemann, Jens

    [...] suitable for metal stamping and etching techniques are given for midband frequencies of about 15, 33, 63 [...]. In this paper, similar to the fin-line filter calculation in [7], the design of optimized metal [...] step in the optimization process is necessary, which reduces the involved computing time. Data for opti[...]

  13. The "Blob" Filter: Gaussian Mixture Nonlinear Filtering with Re-Sampling for Mixand Narrowing

    E-print Network

    Psiaki, Mark L.

    The "Blob" Filter: Gaussian Mixture Nonlinear Filtering with Re-Sampling for Mixand Narrowing Mark-7501 Abstract--A new Gaussian mixture filter has been developed, one that uses a re-sampling step in order to limit the covariances of its individual Gaussian components. The new filter has been designed to produce

  14. The RAW Filter: An Improvement to the Robert-Asselin Filter in Semi-Implicit Integrations

    E-print Network

    Williams, Paul

    [...] time-stepping errors in leapfrog integrations, the Robert-Asselin-Williams (RAW) filter was proposed by the author as a simple improvement to the widely used Robert-Asselin (RA) filter. The present paper examines [...]
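    For reference, the RA and RAW updates applied around each leapfrog step are commonly written as below; the coefficients ν (filter parameter) and α (RAW weight, with α = 1 recovering the RA filter) follow the usual convention, which is assumed here rather than quoted from the paper.

```latex
% Assumed standard form of the RA/RAW displacement applied after a leapfrog step;
% alpha = 1 recovers the Robert-Asselin filter.
\[
d^{\,n} = \tfrac{\nu}{2}\left(\bar{x}^{\,n-1} - 2x^{\,n} + x^{\,n+1}\right),
\qquad
\bar{x}^{\,n} = x^{\,n} + \alpha\, d^{\,n},
\qquad
x^{\,n+1} \leftarrow x^{\,n+1} + (\alpha-1)\, d^{\,n}.
\]
```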

  15. Labyrinth stepped seal geometric optimization 

    E-print Network

    Wernig, Marcus Daniel

    1995-01-01

    High-speed rotating machinery poses a challenging problem to designers and engineers. Interference between rotating and stationary elements can result in excessive wear, decreased machine performance, or machine failure. ...

  16. A step-by-step guide to systematically identify all relevant animal studies

    PubMed Central

    Leenaars, Marlies; Hooijmans, Carlijn R; van Veggel, Nieky; ter Riet, Gerben; Leeflang, Mariska; Hooft, Lotty; van der Wilt, Gert Jan; Tillema, Alice; Ritskes-Hoitinga, Merel

    2012-01-01

    Before starting a new animal experiment, thorough analysis of previously performed experiments is essential from a scientific as well as from an ethical point of view. The method that is most suitable to carry out such a thorough analysis of the literature is a systematic review (SR). An essential first step in an SR is to search and find all potentially relevant studies. It is important to include all available evidence in an SR to minimize bias and reduce hampered interpretation of experimental outcomes. Despite the recent development of search filters to find animal studies in PubMed and EMBASE, searching for all available animal studies remains a challenge. Available guidelines from the clinical field cannot be copied directly to the situation within animal research, and although there are plenty of books and courses on searching the literature, there is no compact guide available to search and find relevant animal studies. Therefore, in order to facilitate a structured, thorough and transparent search for animal studies (in both preclinical and fundamental science), an easy-to-use, step-by-step guide was prepared and optimized using feedback from scientists in the field of animal experimentation. The step-by-step guide will assist scientists in performing a comprehensive literature search and, consequently, improve the scientific quality of the resulting review and prevent unnecessary animal use in the future. PMID:22037056

  17. Analysis of plasticizers in poly(vinyl chloride) medical devices for infusion and artificial nutrition: comparison and optimization of the extraction procedures, a pre-migration test step.

    PubMed

    Bernard, Lise; Cueff, Régis; Bourdeaux, Daniel; Breysse, Colette; Sautou, Valérie

    2015-02-01

    Medical devices (MDs) for infusion and enteral and parenteral nutrition are essentially made of plasticized polyvinyl chloride (PVC). The first step in assessing patient exposure to these plasticizers, as well as ensuring that the MDs are free from di(2-ethylhexyl) phthalate (DEHP), consists of identifying and quantifying the plasticizers present and, consequently, determining which ones are likely to migrate into the patient's body. We compared three different extraction methods using 0.1 g of plasticized PVC: Soxhlet extraction in diethyl ether and ethyl acetate, polymer dissolution, and room temperature extraction in different solvents. It was found that simple room temperature chloroform extraction under optimized conditions (30 min, 50 mL) gave the best separation of plasticizers from the PVC matrix, with extraction yields ranging from 92 to 100% for all plasticizers. This result was confirmed by supplemented Fourier transform infrared spectroscopy-attenuated total reflection (FTIR-ATR) and gravimetric analyses. The technique was used on eight marketed medical devices and showed that they contained different amounts of plasticizers, ranging from 25 to 36% of the PVC weight. These yields, associated with the individual physicochemical properties of each plasticizer, highlight the need for further migration studies. PMID:25577357

  18. [Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods: selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter].

    PubMed

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-05-01

    In cerebral blood flow tests using N-isopropyl-p-[123I]iodoamphetamine (123I-IMP), quantitative results of greater accuracy than possible using the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the pre-processing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and xenon-enhanced computed tomography (XeCT)/cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a method of successive approximation which has recently come into wide use, and also three-dimensional (3D) OSEM, a method by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects of changing the cutoff frequency on the regional cerebral blood flow (rCBF) quantitative value, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and using a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method. PMID:23964534
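    As background for the cutoff-frequency tuning discussed above, the magnitude response of an order-n Butterworth filter with cutoff frequency f_c has the standard form below; this is textbook material, not a formula taken from the article.

```latex
% Standard Butterworth magnitude response; f_c is the cutoff frequency and n the
% filter order. Shown as background for the cutoff-frequency tuning above.
\[
|H(f)| = \frac{1}{\sqrt{1+\left(f/f_c\right)^{2n}}}.
\]
```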

  19. Optimization of the performance of a thermophilic biotrickling filter for alpha-pinene removal from polluted air.

    PubMed

    Montes, M; Veiga, M C; Kennes, C

    2014-01-01

    Biodegradation of alpha-pinene was investigated in a biological thermophilic trickling filter, using a lava rock and polymer beads mixture as packing material. Partition coefficient (PC) between alpha-pinene and the polymeric material (Hytrel G3548 L) was measured at 50 degrees C. PCs of 57 and 846 were obtained between the polymer and either the water or the gas phase, respectively. BTF experiments were conducted under continuous load feeding. The effect of yeast extract (YE) addition in the recirculating nutrient medium was evaluated. There was a positive relationship between alpha-pinene biodegradation, CO2 production and YE addition. A maximum elimination capacity (ECmax) of 98.9 g m(-3) h(-1) was obtained for an alpha-pinene loading rate of about 121 g m(-3) h(-1) in the presence of 1 g L(-1) YE. The ECmax was reduced by half in the absence of YE. It was also found that a decrease in the liquid flow rate enhances alpha-pinene biodegradation by increasing the ECmax up to 103 g m(-3) h(-1) with a removal efficiency close to 90%. The impact of short-term shock-loads (6 h) was tested under different process conditions. Increasing the pollutant load either 10- or 20-fold resulted in a sudden drop in the BTF's removal capacity, although this effect was attenuated in the presence of YE. PMID:25145201

  20. Disk filter

    DOEpatents

    Bergman, Werner (Pleasanton, CA)

    1986-01-01

    An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

  1. Disk filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

  2. Analysis and extensions of soft morphological filters

    NASA Astrophysics Data System (ADS)

    Kuosmanen, Pauli; Koskinen, Lasse; Astola, Jaakko T.

    1993-05-01

    In this paper, we analyze the deterministic and the statistical properties of soft morphological filters and their extensions. This analysis offers methods to design well performing soft morphological filters. We derive some detail preservation properties and study the noise attenuation properties of certain filters. Special attention is paid to the effect of varying parameters on the behavior of filters. Understanding these effects is essential in designing optimal filters.

  3. Water Filters

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Aquaspace H2OME Guardian Water Filter, available through Western Water International, Inc., reduces lead in water supplies. The filter is mounted on the faucet and the filter cartridge is placed in the "dead space" between sink and wall. This filter is one of several new filtration devices using the Aquaspace compound filter media, which combines company developed and NASA technology. Aquaspace filters are used in industrial, commercial, residential, and recreational environments as well as by developing nations where water is highly contaminated.

  4. Scale-up and optimization of an acoustic filter for 200 L/day perfusion of a CHO cell culture.

    PubMed

    Gorenflo, Volker M; Smith, Laura; Dedinsky, Bob; Persson, Bo; Piret, James M

    2002-11-20

    Acoustic cell retention devices have provided a practical alternative for up to 50 L/day perfusion cultures but further scale-up has been limited. A novel temperature-controlled and larger-scale acoustic separator was evaluated at up to 400 L/day for a 10(7) CHO cell/mL perfusion culture using a 100-L bioreactor that produced up to 34 g/day recombinant protein. The increased active volume of this scaled-up separator was divided into four parallel compartments for improved fluid dynamics. Operational settings of the acoustic separator were optimized and the limits of robust operations explored. The performance was not influenced over wide ranges of duty cycle stop and run times. The maximum performance of 96% separation efficiency at 200 L/day was obtained by setting the separator temperature to 35.1 degrees C, the recirculation rate to three times the harvest rate, and the power to 90 W. While there was no detectable effect on culture viability, viable cells were selectively retained, especially at 50 L/day, where there was a 5-fold higher nonviable washout efficiency. Overall, the new temperature-controlled and scaled-up separator design performed reliably in a way similar to smaller-scale acoustic separators. These results provide strong support for the feasibility of much greater scale-up of acoustic separations. PMID:12325152

  5. A Uniformly Convergent Adaptive Particle Filter Anastasia Papavasiliou

    E-print Network

    Del Moral , Pierre

is asymptotically consistent and, in addition, the optimal filter of the augmented system, i.e. the one where to compute the optimal filter. A common approach for dealing with unknown parameters in the system, see [14]. In this paper, we discuss the problem of computing the optimal filter for the augmented system.

  6. Biological Filters.

    ERIC Educational Resources Information Center

    Klemetson, S. L.

    1978-01-01

Presents the 1978 literature review of wastewater treatment. The review is concerned with biological filters, and it covers: (1) trickling filters; (2) rotating biological contactors; and (3) miscellaneous reactors. A list of 14 references is also presented. (HM)

  7. Metallic Filters

    NASA Technical Reports Server (NTRS)

    1985-01-01

Filtration technology originated in a mid-1960s NASA study. The results were distributed to the filter industry, and HR Textron responded, using the study as a departure point for the development of its 421 Filter Media. The HR system is composed of ultrafine steel fibers metallurgically bonded and compressed so that the pore structure is locked in place. The filters are used to filter polyesters and plastics, to filter hydrocarbon streams, etc. Several major companies use the product in chemical applications, pollution control, etc.

  8. High-resolution wave-theory-based ultrasound reflection imaging using the split-step fourier and globally optimized fourier finite-difference methods

    DOEpatents

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.

  9. Kalman and Extended Kalman Filters: Concept, Derivation and Properties

    E-print Network

    Ribeiro,Isabel

Table-of-contents excerpt from the lecture notes: Gaussian random vectors; the Kalman filter, including Kalman filter dynamics and one-step ahead prediction dynamics; further Kalman filter properties.
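
    As a quick reference for the predict/update recursion these notes derive, here is a minimal numpy sketch of one standard Kalman filter step; the state-space matrices F, H, Q, R and the function name are generic placeholders, not values taken from the notes.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (linear) Kalman filter."""
    # One-step-ahead prediction of the state and its covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Measurement update with observation z
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```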

  10. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
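
    Since stack filters generalize the weighted median in the way described above, a tiny sketch of the weighted median building block may help; the signal, weights, and window length below are made-up illustrative values.

```python
import numpy as np

def weighted_median(x, w):
    """Smallest sample value whose cumulative weight reaches half the total weight."""
    order = np.argsort(x)
    cum = np.cumsum(w[order])
    return x[order][np.searchsorted(cum, 0.5 * cum[-1])]

# Sliding 5-point weighted median smoother over a toy signal with an outlier
sig = np.array([1.0, 1.2, 9.0, 1.1, 0.9, 1.0, 1.3])
w = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
smoothed = [weighted_median(sig[i:i + 5], w) for i in range(len(sig) - 4)]
```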

  11. Filtering apparatus

    DOEpatents

    Haldipur, Gaurang B. (Monroeville, PA); Dilmore, William J. (Murrysville, PA)

    1992-01-01

    A vertical vessel having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas.

  12. Filtering apparatus

    DOEpatents

    Haldipur, G.B.; Dilmore, W.J.

    1992-09-01

    A vertical vessel is described having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas. 18 figs.

  13. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameters errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  14. Comparison of spatial domain optimal trade-off maximum average correlation height (OT-MACH) filter with scale invariant feature transform (SIFT) using images with poor contrast and large illumination gradient

    NASA Astrophysics Data System (ADS)

    Gardezi, A.; Qureshi, T.; Alkandri, A.; Young, R. C. D.; Birch, P. M.; Chatwin, C. R.

    2015-03-01

A spatial domain optimal trade-off Maximum Average Correlation Height (OT-MACH) filter has been previously developed and shown to have advantages over frequency domain implementations in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. In this paper we compare the performance of the spatial domain (SPOT-MACH) filter to the widely applied data driven technique known as the Scale Invariant Feature Transform (SIFT). The SPOT-MACH filter is shown to provide more robust recognition performance than the SIFT technique for demanding images such as scenes in which there are large illumination gradients. The SIFT method depends on reliable local edge-based feature detection over large regions of the image plane, which is compromised in some of the demanding images we examined for this work. The disadvantage of the SPOT-MACH filter is its numerically intensive nature, since it is template based and is implemented in the spatial domain.

  15. New filter efficiency test for future nuclear grade HEPA filters

    SciTech Connect

    Bergman, W.; Foiles, L.; Mariner, C.; Kincy, M.

    1988-08-17

We have developed a new test procedure for evaluating filter penetrations as low as 10⁻⁹ at 0.1-µm particle diameter. In comparison, the present US nuclear filter certification test has a lower penetration limit of 10⁻⁵. Our new test procedure is unique not only in its much higher sensitivity, but also in avoiding the undesirable effect of clogging the filter. Our new test procedure consists of a two-step process: (1) We challenge the test filter with a very high concentration of heterodisperse aerosol for a short time while passing all or a significant portion of the filtered exhaust into an inflatable bag; (2) We then measure the aerosol concentration in the bag using a new laser particle counter sensitive to 0.07-µm diameter. The ratio of particle concentration in the bag to the concentration challenging the filter gives the filter penetration as a function of particle diameter. The bag functions as a particle accumulator for subsequent analysis to minimize the filter exposure time. We have studied the particle losses in the bag over time and find that they are negligible when the measurements are taken within one hour. We also compared filter penetration measurements taken in the conventional direct-sampling method with the indirect bag-sampling method and found excellent agreement. 6 refs., 18 figs., 1 tab.
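
    The bag-sampling procedure reduces to a ratio of size-resolved concentrations; the sketch below uses entirely hypothetical challenge and bag concentrations simply to show how penetration versus particle diameter would be computed.

```python
import numpy as np

# Hypothetical size-resolved number concentrations (particles per cm^3)
diameters_um = np.array([0.07, 0.1, 0.2, 0.3])     # particle diameter bins
challenge    = np.array([1e7, 8e6, 3e6, 1e6])      # concentration challenging the filter
bag          = np.array([2e-2, 1e-2, 4e-3, 8e-4])  # concentration accumulated in the bag

# Penetration is the downstream (bag) to upstream (challenge) concentration ratio
penetration = bag / challenge
for d, p in zip(diameters_um, penetration):
    print(f"{d:.2f} um: penetration = {p:.1e}")
```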

  16. Aquatic Plants Aid Sewage Filter

    NASA Technical Reports Server (NTRS)

    Wolverton, B. C.

    1985-01-01

    Method of wastewater treatment combines micro-organisms and aquatic plant roots in filter bed. Treatment occurs as liquid flows up through system. Micro-organisms, attached themselves to rocky base material of filter, act in several steps to decompose organic matter in wastewater. Vascular aquatic plants (typically, reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from water through finely divided roots.

  17. Microfabrication of three-dimensional filters for liposome extrusion

    NASA Astrophysics Data System (ADS)

    Baldacchini, Tommaso; Nuñez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben

    2015-03-01

    Liposomes play a relevant role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidics chip for liposomes extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete solution for lab-on-a-chip preparation of liposomes from vesicles self-assembly to optical characterization.

  18. Holographic Photopolymer Linear Variable Filter with Enhanced Blue Reflection

    PubMed Central

    2015-01-01

    A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443

  19. Holographic photopolymer linear variable filter with enhanced blue reflection.

    PubMed

    Moein, Tania; Ji, Dengxin; Zeng, Xie; Liu, Ke; Gan, Qiaoqiang; Cartwright, Alexander N

    2014-03-12

    A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443

  20. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.

    2011-10-01

Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
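
    A stripped-down illustration of the regularization idea (kernel jitter after resampling) for a one-dimensional state is sketched below; it omits the lagged filtering and the MCMC move step of the paper's LRPF, and the state-transition and observation functions, noise levels, and function name are user-supplied assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def regularized_sir_step(particles, y_obs, f, h, q_std, r_std, bw_scale=0.2):
    """One SIR particle-filter step with post-resampling kernel regularization."""
    # Propagate each particle through the (vectorized) state-transition function with process noise
    particles = f(particles) + rng.normal(0.0, q_std, size=particles.shape)
    # Weight particles by a Gaussian observation likelihood
    w = np.exp(-0.5 * ((y_obs - h(particles)) / r_std) ** 2)
    w /= w.sum()
    # Multinomial resampling
    particles = particles[rng.choice(particles.size, size=particles.size, p=w)]
    # Regularization: jitter the resampled particles with a Gaussian kernel whose
    # bandwidth scales with their empirical spread, preserving sample diversity
    bandwidth = bw_scale * (particles.std() + 1e-12)
    return particles + rng.normal(0.0, bandwidth, size=particles.shape)
```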

  1. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.

  2. Stepped nozzle

    DOEpatents

    Sutton, G.P.

    1998-07-14

    An insert is described which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment. 5 figs.

  3. Stepped nozzle

    DOEpatents

    Sutton, George P. (Danville, CA)

    1998-01-01

    An insert which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment.

  4. Influence of multi-step heat treatments in creep age forming of 7075 aluminum alloy: Optimization for springback, strength and exfoliation corrosion

    SciTech Connect

    Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.

    2012-11-15

A multi-step heat treatment, comprising high-temperature forming (150°C/24 h plus 190°C for several minutes) and subsequent low-temperature forming (120°C for 24 h), is developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain-boundary precipitates. Furthermore, the achievement of high strength is related to the uniform distribution of ultrafine η′ precipitates within grains. Highlights: creep age forming is developed for manufacturing of aircraft wing panels from aluminum alloy; a good combination of properties with minimal springback is required in this component; this requirement can be achieved through appropriate heat treatments; multi-step cycles are developed in creep age forming of AA7075 to improve springback and properties; results indicate simultaneous enhancement of properties and shape accuracy (lower springback).

  5. Deterministic Sampling-based Switching Kalman Filtering for Vehicle Tracking Harini Veeraraghavan Nikolaos Papanikolopoulos Paul Schrater

    E-print Network

    He, Sheng

The unscented transform is applied to the filtering step of the switching Kalman filter/smoother, yielding a deterministic sampling-based switching Kalman filter/smoother (DS-SKS or UKS) that is compared with the standard switching Kalman filter/smoother (SKS).

  6. Integrated electric alternators/active filters 

    E-print Network

    Abolhassani, Mehdi Towliat

    2004-09-30

Experimental results are presented to demonstrate the effectiveness of the proposed IDEA. In the next step, an integrated synchronous machine/active filter is discussed. The proposed technology is essentially a rotating synchronous machine with suitable...

  7. Tunable Imaging Filters in Astronomy

    E-print Network

    J. Bland-Hawthorn

    2000-06-05

    While tunable filters are a recent development in night time astronomy, they have long been used in other physical sciences, e.g. solar physics, remote sensing and underwater communications. With their ability to tune precisely to a given wavelength using a bandpass optimized for the experiment, tunable filters are already producing some of the deepest narrowband images to date of astrophysical sources. Furthermore, some classes of tunable filters can be used in fast telescope beams and therefore allow for narrowband imaging over angular fields of more than a degree over the sky.

  8. Deconvolution filtering: Temporal smoothing revisited

    PubMed Central

    Bush, Keith; Cisler, Josh

    2014-01-01

    Inferences made from analysis of BOLD data regarding neural processes are potentially confounded by multiple competing sources: cardiac and respiratory signals, thermal effects, scanner drift, and motion-induced signal intensity changes. To address this problem, we propose deconvolution filtering, a process of systematically deconvolving and reconvolving the BOLD signal via the hemodynamic response function such that the resultant signal is composed of maximally likely neural and neurovascular signals. To test the validity of this approach, we compared the accuracy of BOLD signal variants (i.e., unfiltered, deconvolution filtered, band-pass filtered, and optimized band-pass filtered BOLD signals) in identifying useful properties of highly confounded, simulated BOLD data: (1) reconstructing the true, unconfounded BOLD signal, (2) correlation with the true, unconfounded BOLD signal, and (3) reconstructing the true functional connectivity of a three-node neural system. We also tested this approach by detecting task activation in BOLD data recorded from healthy adolescent girls (control) during an emotion processing task. Results for the estimation of functional connectivity of simulated BOLD data demonstrated that analysis (via standard estimation methods) using deconvolution filtered BOLD data achieved superior performance to analysis performed using unfiltered BOLD data and was statistically similar to well-tuned band-pass filtered BOLD data. Contrary to band-pass filtering, however, deconvolution filtering is built upon physiological arguments and has the potential, at low TR, to match the performance of an optimal band-pass filter. The results from task estimation on real BOLD data suggest that deconvolution filtering provides superior or equivalent detection of task activations relative to comparable analyses on unfiltered signals and also provides decreased variance over the estimate. In turn, these results suggest that standard preprocessing of the BOLD signal ignores significant sources of noise that can be effectively removed without damaging the underlying signal. PMID:24768215
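
    The deconvolve-then-reconvolve idea can be sketched with a regularized (Wiener-style) inverse filter in the frequency domain; the gamma-shaped kernel below is only a stand-in for a real hemodynamic response function, and the regularization constant and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def deconv_reconv(bold, hrf, noise_reg=0.01):
    """Deconvolve a BOLD series through an assumed HRF, then reconvolve."""
    n = len(bold)
    H = np.fft.rfft(hrf, n)                     # HRF transfer function
    B = np.fft.rfft(bold)
    # Regularized inverse filter: estimate the underlying neural drive
    neural_hat = np.fft.irfft(B * np.conj(H) / (np.abs(H) ** 2 + noise_reg), n)
    # Reconvolve with the HRF to obtain the filtered BOLD signal
    return np.fft.irfft(np.fft.rfft(neural_hat) * H, n)

# Toy example with a gamma-shaped stand-in HRF and a synthetic noisy series
t = np.arange(0, 20, 1.0)
hrf = t ** 5 * np.exp(-t); hrf /= hrf.sum()
bold = np.random.default_rng(1).normal(size=200)
filtered = deconv_reconv(bold, hrf)
```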

  9. Filters for High Rate Pulse Processing

    E-print Network

    B. K. Alpert; R. D. Horansky; D. A. Bennett; W. B. Doriese; J. W. Fowler; A. S. Hoover; M. W. Rabin; J. N. Ullom

    2012-12-07

    We introduce a filter-construction method for pulse processing that differs in two respects from that in standard optimal filtering, in which the average pulse shape and noise-power spectral density are combined to create a convolution filter for estimating pulse heights. First, the proposed filters are computed in the time domain, to avoid periodicity artifacts of the discrete Fourier transform, and second, orthogonality constraints are imposed on the filters, to reduce the filtering procedure's sensitivity to unknown baseline height and pulse tails. We analyze the proposed filters, predicting energy resolution under several scenarios, and apply the filters to high-rate pulse data from gamma-rays measured by a transition-edge-sensor microcalorimeter.
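
    A generic illustration of a time-domain filter with an orthogonality constraint is sketched below: the weights give unit gain to the average pulse shape, zero gain to a constant baseline, and minimum noise variance given a noise covariance matrix. This is a textbook linearly constrained minimum-variance construction, not the published filters; the pulse template, covariance, and function name are assumed inputs.

```python
import numpy as np

def constrained_time_domain_filter(pulse, noise_cov):
    """Minimum-variance weights with unit pulse gain and zero baseline response."""
    n = len(pulse)
    R_inv = np.linalg.inv(noise_cov)
    C = np.column_stack([pulse, np.ones(n)])   # constraint directions: pulse shape, baseline
    g = np.array([1.0, 0.0])                   # unit gain on the pulse, zero gain on the baseline
    # Minimize f^T R f subject to C^T f = g (Lagrange-multiplier solution)
    return R_inv @ C @ np.linalg.solve(C.T @ R_inv @ C, g)
```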

  10. Testing Dual Rotary Filters - 12373

    SciTech Connect

    Herman, D.T.; Fowley, M.D.; Stefanko, D.B.; Shedd, D.A.; Houchens, C.L.

    2012-07-01

    The Savannah River National Laboratory (SRNL) installed and tested two hydraulically connected SpinTek{sup R} Rotary Micro-filter units to determine the behavior of a multiple filter system and develop a multi-filter automated control scheme. Developing and testing the control of multiple filters was the next step in the development of the rotary filter for deployment. The test stand was assembled using as much of the hardware planned for use in the field including instrumentation and valving. The control scheme developed will serve as the basis for the scheme used in deployment. The multi filter setup was controlled via an Emerson DeltaV control system running version 10.3 software. Emerson model MD controllers were installed to run the control algorithms developed during this test. Savannah River Remediation (SRR) Process Control Engineering personnel developed the software used to operate the process test model. While a variety of control schemes were tested, two primary algorithms provided extremely stable control as well as significant resistance to process upsets that could lead to equipment interlock conditions. The control system was tuned to provide satisfactory response to changing conditions during the operation of the multi-filter system. Stability was maintained through the startup and shutdown of one of the filter units while the second was still in operation. The equipment selected for deployment, including the concentrate discharge control valve, the pressure transmitters, and flow meters, performed well. Automation of the valve control integrated well with the control scheme and when used in concert with the other control variables, allowed automated control of the dual rotary filter system. Experience acquired on a multi-filter system behavior and with the system layout during this test helped to identify areas where the current deployment rotary filter installation design could be improved. Completion of this testing provides the necessary information on the control and system behavior that will be used in deployment on actual waste. (authors)

  11. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

  12. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.
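
    The flavor of a Gauss-Newton-style weight update for nonlinear least-squares filter training can be sketched as follows; the damping constant and the residual/Jacobian callables are assumptions, and the adaptive learning-rate logic of the patented method is not reproduced here.

```python
import numpy as np

def damped_gauss_newton_step(w, residual_fn, jacobian_fn, lam=1e-3):
    """One damped Gauss-Newton update of the filter weights w."""
    r = residual_fn(w)     # residuals: model output minus desired output
    J = jacobian_fn(w)     # Jacobian of the residuals w.r.t. the weights
    # Damped normal equations (Levenberg-Marquardt-style regularization)
    H = J.T @ J + lam * np.eye(len(w))
    return w - np.linalg.solve(H, J.T @ r)
```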

  13. A Simple Methodological Approach for Counting and Identifying Culturable Viruses Adsorbed to Cellulose Nitrate Membrane Filters

    PubMed Central

    Papageorgiou, Georgios T.; Mocé-Llivina, Laura; Christodoulou, Christina G.; Lucena, Francisco; Akkelidou, Dina; Ioannou, Eleni; Jofre, Juan

    2000-01-01

    We identified conditions under which Buffalo green monkey cells grew on the surfaces of cellulose nitrate membrane filters in such a way that they covered the entire surface of each filter and penetrated through the pores. When such conditions were used, poliovirus that had previously been adsorbed on the membranes infected the cells and replicated. A plaque assay method and a quantal method (most probable number of cytopathic units) were used to detect and count the viruses adsorbed on the membrane filters. Polioviruses in aqueous suspensions were then concentrated by adsorption to cellulose membrane filters and were subsequently counted without elution, a step which is necessary when the commonly used methods are employed. The pore size of the membrane filter, the sample contents, and the sample volume were optimized for tap water, seawater, and a 0.25 M glycine buffer solution. The numbers of viruses recovered under the optimized conditions were more than 50% greater than the numbers counted by the standard plaque assay. When ceftazidime was added to the assay medium in addition to the antibiotics which are typically used, the method could be used to study natural samples with low and intermediate levels of microbial pollution without decontamination of the samples. This methodological approach also allowed plaque hybridization either directly on cellulose nitrate membranes or on Hybond N+ membranes after the preparations were transferred. PMID:10618223

  14. Discrete-time filtering of linear continuous-time processes

    NASA Astrophysics Data System (ADS)

    Shats, Samuel

    1989-06-01

    Continuous-time measurements are prefiltered before sampling, to remove additive white noise. The discrete-time optimal filter comprises a digital algorithm which is applied to the prefiltered, sampled measurements; the algorithm is based on the discrete-time equivalent model of the overall system. For the case of an integrate-and-dump analog prefilter, a discrete-time equivalent model was developed and the corresponding optimal filter was found for the general case, where the continuous-time measurement and process noise signals are correlated. A commonly used approximate discrete-time model was analyzed by defining and evaluating the true-error-covariance matrix of the estimate, and comparing it with the supposed error covariance matrix. It was shown that there is a class of unstable processes for which the former error covariance matrix attains unbounded norm, in spite of the continuing bounded nature of the other error covariance matrix. The main part of the thesis concerns the problem of finding an optimal prefilter. The steps of obtaining the optimal prefilter comprise: deriving a discrete-time equivalent-model of the overall system; finding the equation which is satisfied by the error covariance matrix; deriving the expressions which are satisfied by the first coefficients of the Maclaurin expansions of the error covariance matrix in the small parameter T; and obtaining the optimal prefilter by matrix optimization. The results obtained indicate that the optimal prefilter may be implemented through systems of different orders; the minimum order required is discussed, which is of great practical importance as the simplest possible prefilter. In discussion of the problem of discrete-time quadratic regulation of linear continuous time processes, the case of practical interest, where a zero-order hold is part of the digital-to-analog converter, is considered. It is shown that the duality between the regulation and filtering problems is not conserved after discretization when an integrate-and-dump prefilter is used. Analysis of a specific model shows that the results obtained in the regulation problem are completely different from those obtained in the filtering problem.
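
    For context on discrete-time equivalent models, here is a standard zero-order-hold discretization of a linear continuous-time process with Van Loan's construction for the discrete process-noise covariance; it is a generic sketch under assumed inputs and does not include the integrate-and-dump prefilter model analyzed in the thesis.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, Qc, T):
    """Discretize dx/dt = A x + B u + w (w with spectral density Qc) at sample period T."""
    n, m = A.shape[0], B.shape[1]
    # State-transition and input matrices from the augmented matrix exponential
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * T)
    Ad, Bd = M[:n, :n], M[:n, n:]
    # Van Loan's method for the discrete process-noise covariance
    G = expm(np.block([[-A, Qc], [np.zeros((n, n)), A.T]]) * T)
    Qd = G[n:, n:].T @ G[:n, n:]
    return Ad, Bd, Qd
```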

  15. The optimization of essential oils supercritical CO2 extraction from Lavandula hybrida through static-dynamic steps procedure and semi-continuous technique using response surface method

    PubMed Central

    Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

    2015-01-01

Aim: The aim of this study was to examine and evaluate crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15 min), 24 min (dynamic: 8×3 min), in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636

  16. The Lockheed alternate partial polarizer universal filter

    NASA Technical Reports Server (NTRS)

    Title, A. M.

    1976-01-01

A tunable birefringent filter using an alternate partial polarizer design has been built. The filter has a transmission of 38% in polarized light. Its full width at half maximum is 0.09 Å at 5500 Å. It is tunable from 4500 to 8500 Å by means of stepping-motor-actuated rotating half-wave plates and polarizers. Wavelength commands and thermal compensation commands are generated by a PDP-11/10 minicomputer. The alternate partial polarizer universal filter is compared with the universal birefringent filter, and the design techniques, construction methods, and filter performance are discussed in some detail. Based on the experience with this filter, some conclusions regarding the future of birefringent filters are elaborated.

  17. Filter apparatus

    DOEpatents

    Kuban, Daniel P. (Oak Ridge, TN); Singletary, B. Huston (Oak Ridge, TN); Evans, John H. (Rockwood, TN)

    1984-01-01

    A plurality of holding tubes are respectively mounted in apertures in a partition plate fixed in a housing receiving gas contaminated with particulate material. A filter cartridge is removably held in each holding tube, and the cartridges and holding tubes are arranged so that gas passes through apertures therein and across the partition plate while particulate material is collected in the cartridges. Replacement filter cartridges are respectively held in holding canisters mounted on a support plate which can be secured to the aforesaid housing, and screws mounted on said canisters are arranged to push replacement cartridges into the cartridge holding tubes and thereby eject used cartridges therefrom.

  18. Water Filters

    NASA Technical Reports Server (NTRS)

    1988-01-01

Seeking to find a more effective method of filtering potable water that was highly contaminated, Mike Pedersen, founder of Western Water International, learned that NASA had conducted extensive research in methods of purifying water on board manned spacecraft. The key is Aquaspace Compound, a proprietary WWI formula that scientifically blends various types of granular activated charcoal with other active and inert ingredients. Aquaspace systems remove some substances, such as chlorine, by atomic adsorption; other types of organic chemicals by mechanical filtration; and still others by catalytic reaction. Aquaspace filters are finding wide acceptance in industrial, commercial, residential and recreational applications in the U.S. and abroad.

  19. Step Detection in Single-Molecule Real Time Trajectories Embedded in Correlated Noise

    PubMed Central

    Arunajadai, Srikesh G.; Cheng, Wei

    2013-01-01

Single-molecule real time trajectories are embedded in high noise. To extract kinetic or dynamic information of the molecules from these trajectories often requires idealization of the data in steps and dwells. One major premise behind the existing single-molecule data analysis algorithms is the Gaussian 'white' noise, which displays no correlation in time and whose amplitude is independent of the data sampling frequency. This so-called 'white' noise is widely assumed but its validity has not been critically evaluated. We show that correlated noise exists in single-molecule real time trajectories collected from optical tweezers. The assumption of white noise during analysis of these data can lead to serious over- or underestimation of the number of steps depending on the algorithms employed. We present a statistical method that quantitatively evaluates the structure of the underlying noise, takes the noise structure into account, and identifies steps and dwells in a single-molecule trajectory. Unlike existing data analysis algorithms, this method uses Generalized Least Squares (GLS) to detect steps and dwells. Under the GLS framework, the optimal number of steps is chosen using model selection criteria such as Bayesian Information Criterion (BIC). Comparison with existing step detection algorithms showed that this GLS method can detect step locations with highest accuracy in the presence of correlated noise. Because this method is automated, and directly works with high bandwidth data without pre-filtering or assumption of Gaussian noise, it may be broadly useful for analysis of single-molecule real time trajectories. PMID:23533612
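
    To make the BIC-based model selection concrete, the sketch below tests a single candidate change point under an ordinary least-squares, white-noise assumption; the paper's method instead uses generalized least squares to account for the correlated noise, so this is only a simplified illustration with a hypothetical function name.

```python
import numpy as np

def detect_single_step_bic(y):
    """Return (step_index, BIC) if a one-step model beats a constant model, else (None, BIC)."""
    n = len(y)
    # Model 0: constant level (1 parameter)
    rss0 = np.sum((y - y.mean()) ** 2)
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    # Model 1: one step at the best change point (two levels + location = 3 parameters)
    best_rss, best_k = np.inf, None
    for k in range(2, n - 2):
        rss = np.sum((y[:k] - y[:k].mean()) ** 2) + np.sum((y[k:] - y[k:].mean()) ** 2)
        if rss < best_rss:
            best_rss, best_k = rss, k
    bic1 = n * np.log(best_rss / n) + 3 * np.log(n)
    return (best_k, bic1) if bic1 < bic0 else (None, bic0)
```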

  20. Volterra filters for quantum estimation and detection

    E-print Network

    Mankei Tsang

    2015-12-14

    The implementation of optimal statistical inference protocols for high-dimensional quantum systems is often computationally expensive. To avoid the difficulties associated with optimal techniques, here I propose an alternative approach to quantum estimation and detection based on Volterra filters. Volterra filters have a clear hierarchy of computational complexities and performances, depend only on finite-order correlation functions, and are applicable to systems with no simple Markovian model. These features make Volterra filters appealing alternatives to optimal nonlinear protocols for the inference and control of complex quantum systems. Applications of the first-order Volterra filter to continuous-time quantum filtering, the derivation of a Heisenberg-picture uncertainty relation, quantum state tomography, and qubit readout are discussed.

  1. Spatial filter decomposition for interference mitigation

    NASA Astrophysics Data System (ADS)

    Maoudj, Rabah; Terre, Michel; Fety, Luc; Alexandre, Christophe; Mege, Philippe

    2014-12-01

This paper presents a two-part decomposition of a spatial filter designed to optimize the reception of a useful signal in the presence of a high co-channel interference level. The decomposition highlights the roles of the two parts of the filter, one devoted to the maximization of the signal-to-noise ratio and the other devoted to interference cancellation. The two-part decomposition is used in the estimation process of the optimal reception filter. We then propose an estimation algorithm that follows this decomposition, and the global spatial filter is finally obtained through an optimally weighted combination of two filters. It is shown that this two-component-based decomposition algorithm outperforms other previously published solutions involving eigenvalue decompositions.
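
    A generic zero-forcing/matched-filter sketch of the two-part idea follows: one part cancels the interference subspace, the other maximizes SNR on the useful channel. The 4-antenna channel and single interferer below are random placeholders, and this is not the optimally weighted combination estimated in the paper.

```python
import numpy as np

def two_part_spatial_filter(h, J):
    """Project out the interference subspace (columns of J), then match to channel h."""
    n = len(h)
    # Part 1: interference cancellation via projection onto the orthogonal
    # complement of the interference steering vectors
    P_null = np.eye(n) - J @ np.linalg.pinv(J)
    # Part 2: SNR maximization by matching to the useful channel after projection
    w = P_null @ h
    return w / (np.linalg.norm(w) + 1e-12)

rng = np.random.default_rng(2)
h = rng.normal(size=4) + 1j * rng.normal(size=4)            # useful-signal channel
J = rng.normal(size=(4, 1)) + 1j * rng.normal(size=(4, 1))  # one interferer steering vector
w = two_part_spatial_filter(h, J)
print(abs(np.conj(w) @ J[:, 0]))   # ~0: the interferer is rejected
```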

  2. InBox: Filtering Employee Create Filter

    E-print Network

    Fernandez, Eduardo

An Inbox filter enables you to limit the action items you see in your Inbox. You can create a personal Inbox filter that is available only to you. You can define an Inbox filter for specific or all business processes, and then define conditions.

  3. The intractable cigarette ‘filter problem’

    PubMed Central

    2011-01-01

    Background When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the ‘filter problem’. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. Objective This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Methods Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. Results 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the ‘filter problem’. These reveal a period of intense focus on the ‘filter problem’ that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette production. The synthetic plastic cellulose acetate became the fundamental cigarette filter material. By the mid-1960s, the meaning of the phrase ‘filter problem’ changed, such that the effort to develop effective filters became a campaign to market cigarette designs that would sustain the myth of cigarette filter efficacy. Conclusions This study indicates that cigarette designers at Philip Morris, British-American Tobacco, Lorillard and other companies believed for a time that they might be able to reduce some of the most dangerous substances in mainstream smoke through advanced engineering of filter tips. In their attempts to accomplish this, they developed the now ubiquitous cellulose acetate cigarette filter. By the mid-1960s cigarette designers realised that the intractability of the ‘filter problem’ derived from a simple fact: that which is harmful in mainstream smoke and that which provides the smoker with ‘satisfaction’ are essentially one and the same. Only in the wake of this realisation did the agenda of cigarette designers appear to transition away from mitigating the health hazards of smoking and towards the perpetuation of the notion that cigarette filters are effective in reducing these hazards. Filters became a marketing tool, designed to keep and recruit smokers as consumers of these hazardous products. PMID:21504917

  4. Road tracing by profile matching and Kalman filtering

    E-print Network

    Vosselman, George

Road parameters are estimated by the recursive Kalman filter, utilising the prediction step of the Kalman filter together with profile matching by least squares. Section 2 outlines the motivation for combining least squares matching and the Kalman filter.

  5. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    SciTech Connect

Monsefi, Farid; Carlsson, Linus; Silvestrov, Sergei; Rančić, Milica; Otterskog, Magnus

    2014-12-10

To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. Confined within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen, and the outside of the boundary is in the form of a Perfect Electric Conducting (PEC) surface. Inserted in the computer implementation, a semi-norm has been applied to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be the free-space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. The numerical instability of the algorithms can be rather easily avoided with respect to the Courant stability condition, which is frequently used in applying the general FDTD algorithm.

  6. Phosphorus Filter

    USGS Multimedia Gallery

    Tom Kehler, fishery biologist at the U.S. Fish and Wildlife Service's Northeast Fishery Center in Lamar, Pennsylvania, checks the flow rate of water leaving a phosphorus filter column. The USGS has pioneered a new use for acid mine drainage residuals that are currently a disposal challenge, usi...

  7. Optimal cytoreductive surgery for underlying ovarian cancer associated with deep venous thrombosis without placement of inferior vena cava filter: A case report and literature review

    PubMed Central

    SHEN, HONGWEI; SHANG, JIANHONG; NIU, GANG; LIU, JUN; YOU, ZESHAN; HE, SHANYANG

    2015-01-01

    Ovarian cancer associated with deep venous thrombosis (DVT) is an uncommon, potentially life-threatening condition. The primary therapeutic strategy for the treatment of this condition is up-front primary cytoreductive surgery, with placement of an inferior vena cava (IVC) filter prior to surgery to prevent fatal pulmonary embolism. The present study describes the case of a 49-year-old female, who presented with DVT unresponsive to anticoagulant therapy in the lower extremities prior to the diagnosis of ovarian cancer. During the search for the underlying malignancy, transvaginal sonography (TVS) revealed a cystic solid mass in the pelvic cavity. Subsequently, the patient underwent up-front primary cytoreductive surgery without placement of a preoperative IVC filter, followed by six cycles of chemotherapy. The patient was diagnosed with ovarian clear cell adenocarcinoma stage IIIC, complicated by DVT, and had survived >3 years without relapse at the time of completion of the present study. The successful outcome of the present case demonstrated that occult primary cancer should be suspected in patients with DVT unresponsive to anticoagulant therapy. The present study also indicated that up-front primary cytoreductive surgery without placement of an IVC filter represents an effective potential strategy for the treatment of advanced ovarian cancer associated with DVT, as the thrombus strongly adheres to the vessel wall following organization. PMID:26622893

  8. Sub-wavelength efficient polarization filter (SWEP filter)

    DOEpatents

    Simpson, Marcus L.; Simpson, John T.

    2003-12-09

    A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light, includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS are spaced apart a distance being at least one half an optical wavelength.

  9. Nonlinear Filtering with Fractional Brownian Motion

    SciTech Connect

    Amirdjanova, A.

    2002-12-19

Our objective is to study a nonlinear filtering problem for the observation process perturbed by a Fractional Brownian Motion (FBM) with Hurst index 1/2 < H < 1. The optimal filter is derived.

  10. Organic solvent-free air-assisted liquid-liquid microextraction for optimized extraction of illegal azo-based dyes and their main metabolite from spices, cosmetics and human bio-fluid samples in one step.

    PubMed

    Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Sabzalian, Sedigheh

    2015-08-15

Air-assisted liquid-liquid microextraction (AALLME) has unique capabilities to develop as an organic solvent-free and one-step microextraction method, applying ionic liquids as the extraction solvent and avoiding the centrifugation step. Herein, a novel and simple eco-friendly method, termed one-step air-assisted liquid-liquid microextraction (OS-AALLME), was developed to extract some illegal azo-based dyes (including Sudan I to IV, and Orange G) from food and cosmetic products. A series of experiments were investigated to achieve the most favorable conditions (including extraction solvent: 77 µL of 1-Hexyl-3-methylimidazolium hexafluorophosphate; sample pH 6.3, without salt addition; and extraction cycles: 25 during 100 s of sonication) using a central composite design strategy. Under these conditions, limits of detection, linear dynamic ranges, enrichment factors and consumptive indices were in the range of 3.9-84.8 ng mL(-1), 0.013-3.1 µg mL(-1), 33-39, and 0.13-0.15, respectively. The results showed that -as well as its simplicity, fastness, and use of no hazardous disperser and extraction solvents- OS-AALLME is a sufficiently sensitive and efficient method for the extraction of these dyes from complex matrices. After optimization and validation, OS-AALLME was applied to estimate the concentration of 1-amino-2-naphthol in human bio-fluids as a main reductive metabolite of the selected dyes. Levels of 1-amino-2-naphthol in plasma and urinary excretion suggested that this compound may be used as a new potential biomarker of these dyes in the human body. PMID:26149246

  11. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    SciTech Connect

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I; Rozendaal, R; Spreeuw, H; Herk, M van

    2014-06-15

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to raise automatically alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.

  12. Water Filter

    NASA Technical Reports Server (NTRS)

    1982-01-01

A compact, lightweight electrolytic water sterilizer, available through Ambassador Marketing, generates silver ions in concentrations of 50 to 100 parts per billion in a water flow system. The silver ions serve as an effective bactericide/deodorizer. Tap water passes through a filtering element of silver that has been chemically plated onto activated carbon. The silver inhibits bacterial growth, and the activated carbon removes objectionable tastes and odors caused by the addition of chlorine and other chemicals in the municipal water supply. The three models available are a kitchen unit, a "Tourister" unit for portable use while traveling, and a refrigerator unit that attaches to the ice cube water line. A filter will treat 5,000 to 10,000 gallons of water.

  13. Smoothing filter for digital to analog conversion

    NASA Technical Reports Server (NTRS)

    Wagner, C. A. (inventor)

    1981-01-01

    An electronic filter comprised of three active filter sections to smooth the stepped signal from a digital to analog converter is described. The first section has a noninverting low pass filter transfer function, and the second has an inverting transfer function designed to pass a narrow frequency band centered at the step frequency of the stepped output signal with sharp cutoff on either side of that narrow band. The third section adds the noninverted output of the first section to the inverted output of the second section. This third section has a lead-lag transfer function designed to reduce the phase angle between the signal at its output terminal and the stepped signal at the input of the first section.

  14. Eyeglass Filters

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Biomedical Optical Company of America's suntiger lenses eliminate more than 99% of harmful light wavelengths. NASA derived lenses make scenes more vivid in color and also increase the wearer's visual acuity. Distant objects, even on hazy days, appear crisp and clear; mountains seem closer, glare is greatly reduced, clouds stand out. Daytime use protects the retina from bleaching in bright light, thus improving night vision. Filtering helps prevent a variety of eye disorders, in particular cataracts and age related macular degeneration.

  15. Modified Kalman Filter Based Method for Training State-Recurrent Multilayer Perceptrons

    E-print Network

    Slatton, Clint

    Slide excerpt: a modified Kalman filter based method for training state-recurrent multilayer perceptrons. Backpropagation through time (BPTT) has demanding computation and storage requirements and its gradients decay exponentially; a Kalman filter can instead be used to estimate the optimal weights of an RNN, offering faster convergence than BPTT at the cost of increased computational complexity.

  16. Birefringent filter design by use of a modified genetic algorithm

    E-print Network

    Yao, Jianping

    A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation of the filters ... Being different from the normal genetic algorithm, the proposed algorithm reduces the problem ...

  17. Series expansions of Brownian motion and the unscented particle filter

    E-print Network

    Edinburgh, University of

    The discrete-time filtering problem for nonlinear diffusion processes is computationally intractable in general. For this reason, methods such as the bootstrap filter are particularly effective at approximating the optimal ...

  18. Approximate distributed Kalman filtering for cooperative multi-agent localization

    E-print Network

    Hespanha, João Pedro

    We present an algorithm that computes an approximation of the centralized optimal (Kalman filter) estimates ... with nearby agents. The problem of distributed Kalman filtering for this application is reformulated ...

  19. Rocket noise filtering system using digital filters

    NASA Technical Reports Server (NTRS)

    Mauritzen, David

    1990-01-01

    A set of digital filters is designed to filter rocket noise to various bandwidths. The filters are designed to have constant group delay and are implemented in software on a general purpose computer. The Parks-McClellan algorithm is used. Preliminary tests are performed to verify the design and implementation. An analog filter which was previously employed is also simulated.
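
    For illustration, a linear-phase (constant group delay) FIR low-pass design with the Parks-McClellan algorithm via SciPy's remez function; the sampling rate, band edges, and filter length below are arbitrary placeholders rather than the values used in the thesis work:

        import numpy as np
        from scipy.signal import remez, freqz

        fs = 10000.0                   # sampling rate in Hz (placeholder)
        numtaps = 101                  # odd length gives a type-I, exactly linear-phase FIR
        # Pass band up to 1 kHz, stop band from 1.5 kHz to the Nyquist frequency.
        taps = remez(numtaps, [0, 1000, 1500, fs / 2], [1, 0], fs=fs)

        w, h = freqz(taps, worN=2048, fs=fs)
        print("Stop-band gain at 2 kHz: %.1f dB" % (20 * np.log10(abs(h[np.searchsorted(w, 2000)]))))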

  20. Non-linear filtering Example: Median filter

    E-print Network

    Oliensis, John

    Slide excerpt on non-linear filtering, example: the median filter. The median filter replaces each pixel value by the median value over a neighborhood and generates no new gray levels. A noted property is the "odd-man-out" effect: an isolated outlier, e.g. the 7 in 1,1,1,7,1,1,1,1, is removed entirely. An example with filter width 5 is also shown.
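
    A small concrete example of the behaviour described above, using SciPy's n-dimensional median filter on the 1-D outlier sequence (a generic illustration, not taken from the slides):

        import numpy as np
        from scipy.ndimage import median_filter

        x = np.array([1, 1, 1, 7, 1, 1, 1, 1], dtype=float)
        # Width-3 median: each sample is replaced by the median of itself and its neighbours.
        y = median_filter(x, size=3, mode='nearest')
        print(y)   # the isolated 7 is removed entirely and no new gray levels are created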

  1. Optical ranked-order filtering using threshold decomposition

    SciTech Connect

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1990-08-14

    This patent describes a hybrid optical/electronic system. It performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.

  2. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.

  3. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P. (West Lafayette, IN); Ochoa, Ellen (Pleasanton, CA); Sweeney, Donald W. (Alamo, CA)

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
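
    A purely software analogue (my own sketch, not the patented optical system) of ranked-order filtering by threshold decomposition for a short 1-D integer signal: each binary threshold slice is passed through the same linear, space-invariant (moving-average) filter, compared point by point against one half, and the slices are summed back; for positive integer data the result equals a direct running median:

        import numpy as np

        def median_by_threshold_decomposition(x, width=3):
            x = np.asarray(x, dtype=int)
            out = np.zeros_like(x)
            kernel = np.ones(width) / width          # linear, space-invariant filtering step
            for t in range(1, x.max() + 1):          # one binary slice per threshold level
                slice_t = (x >= t).astype(float)
                filtered = np.convolve(slice_t, kernel, mode='same')
                out += (filtered > 0.5).astype(int)  # point-to-point threshold comparison step
            return out

        print(median_by_threshold_decomposition([1, 2, 3, 9, 3, 2, 1], width=3))  # -> [1 2 3 3 3 2 1]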

  4. Quantum neural network-based EEG filtering for a brain-computer interface.

    PubMed

    Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin

    2014-02-01

    A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions. PMID:24807028

  5. ADVANCED HOT GAS FILTER DEVELOPMENT

    SciTech Connect

    E.S. Connolly; G.D. Forsythe

    2000-09-30

    DuPont Lanxide Composites, Inc. undertook a sixty-month program, under DOE Contract DEAC21-94MC31214, in order to develop hot gas candle filters from a patented material technology known as PRD-66. The goal of this program was to extend the development of this material as a filter element and fully assess the capability of this technology to meet the needs of Pressurized Fluidized Bed Combustion (PFBC) and Integrated Gasification Combined Cycle (IGCC) power generation systems at commercial scale. The principal objective of Task 3 was to build on the initial PRD-66 filter development, optimize its structure, and evaluate basic material properties relevant to the hot gas filter application. Initially, this consisted of an evaluation of an advanced filament-wound core structure that had been designed to produce an effective bulk filter underneath the barrier filter formed by the outer membrane. The basic material properties to be evaluated (as established by the DOE/METC materials working group) would include mechanical, thermal, and fracture toughness parameters for both new and used material, for the purpose of building a material database consistent with what is being done for the alternative candle filter systems. Task 3 was later expanded to include analysis of PRD-66 candle filters, which had been exposed to actual PFBC conditions, development of an improved membrane, and installation of equipment necessary for the processing of a modified composition. Task 4 would address essential technical issues involving the scale-up of PRD-66 candle filter manufacturing from prototype production to commercial scale manufacturing. The focus would be on capacity (as it affects the ability to deliver commercial order quantities), process specification (as it affects yields, quality, and costs), and manufacturing systems (e.g. QA/QC, materials handling, parts flow, and cost data acquisition). Any filters fabricated during this task would be used for product qualification tests being conducted by Westinghouse at Foster-Wheeler's Pressurized Circulating Fluidized Bed (PCFBC) test facility in Karhula, Finland. Task 5 was designed to demonstrate the improvements implemented in Task 4 by fabricating fifty 1.5-meter hot gas filters. These filters were to be made available for DOE-sponsored field trials at the Power Systems Development Facility (PSDF), operated by Southern Company Services in Wilsonville, Alabama.

  6. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    PubMed

    Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them, and their application for filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. The filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the Mean Hausdorff distance between a gold standard and different isosurfaces of the original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original data and after the filtering is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of the edge-preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons assess the ability to split very close objects which are artificially connected due to acquisition errors intrinsically linked to the physics of LSM. In all studied aspects it turned out that the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) has the best performance. PMID:20457535
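
    As a rough illustration of the regularized Perona-Malik idea discussed above (not the authors' finite-volume, semi-implicit scheme), a simple explicit 2-D diffusion step with an edge-stopping conductance; the conductance parameter and time step are placeholders, and the explicit scheme is only conditionally stable:

        import numpy as np

        def perona_malik_step(u, kappa=0.1, dt=0.2):
            # One explicit Perona-Malik diffusion step on a 2-D image (replicated boundaries).
            p = np.pad(u, 1, mode='edge')
            dn = p[:-2, 1:-1] - u                      # differences toward the four neighbours
            ds = p[2:, 1:-1] - u
            de = p[1:-1, 2:] - u
            dw = p[1:-1, :-2] - u
            g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping conductance
            return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

        img = np.random.rand(64, 64)
        for _ in range(20):                            # a few smoothing iterations
            img = perona_malik_step(img)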

  7. Recursive Implementations of the Schmidt-Kalman `Consider' Filter

    NASA Astrophysics Data System (ADS)

    Zanetti, Renato; D'Souza, Christopher

    2015-11-01

    One method to account for parameter errors in the Kalman filter is to `consider' their effect in the so-called Schmidt-Kalman filter. This paper addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU Schmidt-Kalman filter is proposed. The non-optimality of the recursive Schmidt-Kalman filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  8. Evidence-Based Used, Yet Still Controversial: The Arterial Filter

    PubMed Central

    Somer, Filip De

    2012-01-01

    Abstract: Arterial line filters are considered by many to be an essential safety measure inside a cardiopulmonary bypass circuit. There is no doubt that this was true during the bubble oxygenator era, but we can question whether the existing arterial line filter design and the positioning of the filter are still optimal given the tremendous progress in cardiopulmonary bypass circuit components. This article gives a critical overview of existing arterial line filter design. PMID:22730869

  9. An online novel adaptive filter for denoising time series measurements.

    PubMed

    Willis, Andrew J

    2006-04-01

    A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented, and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562

  10. Neural Filters for Jet Analysis

    E-print Network

    Dawei W Dong; Miklos Gyulassy

    1993-05-09

    We study the efficiency of a neural-net filter and deconvolution method for estimating jet energies and spectra in high-background reactions such as nuclear collisions at the Relativistic Heavy Ion Collider and the Large Hadron Collider. The optimal network is shown to be surprisingly close but not identical to a linear high-pass filter. A suitably constrained deconvolution method is shown to uncover accurately the underlying jet distribution in spite of the broad network response. Finally, we show that possible changes of the jet spectrum in nuclear collisions can be analyzed quantitatively, in terms of an effective energy loss, with the proposed method. Dong D W and Gyulassy M 1993, Neural filters for jet analysis, LBL-31560, Physical Review E 47(4), 2913-2922.

  11. SU-E-I-62: Assessing Radiation Dose Reduction and CT Image Optimization Through the Measurement and Analysis of the Detector Quantum Efficiency (DQE) of CT Images Using Different Beam Hardening Filters

    SciTech Connect

    Collier, J; Aldoohan, S; Gill, K

    2014-06-01

    Purpose: Reducing patient dose while maintaining (or even improving) image quality is one of the foremost goals in CT imaging. To this end, we consider the feasibility of optimizing CT scan protocols in conjunction with the application of different beam-hardening filtrations and assess this augmentation through noise-power spectrum (NPS) and detector quantum efficiency (DQE) analysis. Methods: American College of Radiology (ACR) and Catphan phantoms (The Phantom Laboratory) were scanned with a 64-slice CT scanner when additional filtration of varying thickness and composition (e.g., copper, nickel, tantalum, titanium, and tungsten) was applied. A MATLAB-based code was employed to calculate the image noise NPS. The Catphan Image Owl software suite was then used to compute the modulation transfer function (MTF) responses of the scanner. The DQE for each additional filter, including the inherent filtration, was then computed from these values. Finally, CT dose index (CTDIvol) values were obtained for each applied filtration through the use of a 100 mm pencil ionization chamber and a CT dose phantom. Results: NPS, MTF, and DQE values were computed for each applied filtration and compared to the reference case of inherent beam-hardening filtration only. Results showed that the NPS values were reduced by between 5 and 12% compared to the inherent filtration case. Additionally, CTDIvol values were reduced by between 15 and 27% depending on the composition of the filtration applied. However, no noticeable changes in image contrast-to-noise ratios were noted. Conclusion: The reduction in the quantum noise section of the NPS profile found in this phantom-based study is encouraging. The reduction in both noise and dose through the application of beam-hardening filters is reflected in our phantom image quality. However, further investigation is needed to ascertain the applicability of this approach to reducing patient dose while maintaining diagnostically acceptable image quality in a clinical setting.
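
    For orientation, the frequency-dependent DQE is commonly estimated from the quantities mentioned above by a standard textbook relation of the form below (not quoted from this abstract), where S is the mean large-area signal, q the incident photon fluence, MTF the modulation transfer function, and NPS the noise-power spectrum:

        \mathrm{DQE}(f) \approx \frac{\bar{S}^{2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NPS}(f)}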

  12. Metal-dielectric metameric filters for optically variable devices

    NASA Astrophysics Data System (ADS)

    Xiao, Lixiang; Chen, Nan; Deng, Zihao; Wang, Xiaozhong; Guo, Rong; Bu, Yikun

    2016-01-01

    A pair of metal-dielectric metameric filters that can create a hidden image is presented for the first time. The structure of the filters is simple: only six layers for filter A and five layers for filter B. The prototype filters were designed using the film color target optimization method, and the design results show that, at normal observation angle, the reflected colors of the pair of filters are both green and the color difference index between them is only 0.9017. At an observation angle of 60°, filter A is violet and filter B is blue. The filters were fabricated by a remote plasma sputtering process, and the experimental results were in accordance with the designs.

  13. Filtered multitensor tractography.

    PubMed

    Malcolm, James G; Shenton, Martha E; Rathi, Yogesh

    2010-09-01

    We describe a technique that uses tractography to drive the local fiber model estimation. Existing techniques use independent estimation at each voxel so there is no running knowledge of confidence in the estimated model fit. We formulate fiber tracking as recursive estimation: at each step of tracing the fiber, the current estimate is guided by those previous. To do this we perform tractography within a filter framework and use a discrete mixture of Gaussian tensors to model the signal. Starting from a seed point, each fiber is traced to its termination using an unscented Kalman filter to simultaneously fit the local model to the signal and propagate in the most consistent direction. Despite the presence of noise and uncertainty, this provides a causal estimate of the local structure at each point along the fiber. Using two- and three-fiber models we demonstrate in synthetic experiments that this approach significantly improves the angular resolution at crossings and branchings. In vivo experiments confirm the ability to trace through regions known to contain such crossing and branching while providing inherent path regularization. PMID:20805043

  14. The J-PAS filter system

    NASA Astrophysics Data System (ADS)

    Marin-Franch, Antonio; Taylor, Keith; Cenarro, Javier; Cristobal-Hornillos, David; Moles, Mariano

    2015-08-01

    J-PAS (Javalambre-PAU Astrophysical Survey) is a Spanish-Brazilian collaboration to conduct a narrow-band photometric survey of 8500 square degrees of northern sky using an innovative filter system of 59 filters: 56 relatively narrow-band (FWHM = 14.5 nm) filters continuously populating the spectrum between 350 and 1000 nm in 10 nm steps, plus 3 broad-band filters. This filter system will be able to produce photometric redshifts with a precision of 0.003(1 + z) for Luminous Red Galaxies, allowing J-PAS to measure the radial scale of the Baryonic Acoustic Oscillations. The J-PAS survey will be carried out using JPCam, a 14-CCD mosaic camera using the new e2v 9k-by-9k, 10 µm pixel CCDs mounted on the JST/T250, a dedicated 2.55 m wide-field telescope at the Observatorio Astrofísico de Javalambre (OAJ) near Teruel, Spain. The filters will operate in a fast (f/3.6) converging beam. The requirements for average transmissions greater than 85% in the passband, blocking below 10^-5 from 250 to 1050 nm, steep bandpass edges and high image quality impose significant challenges for the production of the J-PAS filters that have demanded the development of new design solutions. This talk presents the J-PAS filter system and describes the most challenging requirements and adopted design strategies. Measurements and tests of the first manufactured filters are also presented.

  15. Stepping motor controller

    DOEpatents

    Bourret, Steven C. (Los Alamos, NM); Swansen, James E. (Los Alamos, NM)

    1984-01-01

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  16. Stepping motor controller

    DOEpatents

    Bourret, S.C.; Swansen, J.E.

    1982-07-02

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  17. Step-Growth Polymerization.

    ERIC Educational Resources Information Center

    Stille, J. K.

    1981-01-01

    Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)

  18. Design of soft morphological filters by learning

    NASA Astrophysics Data System (ADS)

    Koivisto, Pertti T.; Kuosmanen, Pauli

    1994-05-01

    The choice and detailed design of the structuring elements play a pivotal role in soft morphological processing of images. This paper proposes a learning method for the optimization of the structuring elements of soft morphological filters under a given optimization criterion. The learning method is based on simulated annealing. Experimental results depicted herein illustrate that the proposed method can be applied to finding optimal structuring systems in practical situations.
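
    A generic simulated-annealing loop of the kind such a learning method could be built on (an illustrative sketch only; the cost function, perturbation, and cooling schedule below are hypothetical stand-ins for the structuring-element optimization described above):

        import math, random

        def simulated_annealing(initial, cost, perturb, t0=1.0, cooling=0.995, steps=5000):
            # Minimize cost() by random perturbations, accepting uphill moves with a
            # temperature-dependent probability that shrinks as the system cools.
            current, current_cost = initial, cost(initial)
            best, best_cost = current, current_cost
            t = t0
            for _ in range(steps):
                candidate = perturb(current)
                c = cost(candidate)
                if c < current_cost or random.random() < math.exp(-(c - current_cost) / t):
                    current, current_cost = candidate, c
                    if c < best_cost:
                        best, best_cost = candidate, c
                t *= cooling
            return best, best_cost

        # Toy stand-in problem: recover a target binary structuring element by single bit flips.
        target = [1, 0, 1, 1, 0]

        def flip_one_bit(s):
            s = list(s)
            s[random.randrange(len(s))] ^= 1
            return s

        print(simulated_annealing([0] * 5, lambda s: sum(a != b for a, b in zip(s, target)),
                                  flip_one_bit))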

  19. Quantum Tomographic Reconstruction with Error Bars: a Kalman Filter Approach

    E-print Network

    Koenraad M. R. Audenaert; S. Scheel

    2009-02-19

    We present a novel quantum tomographic reconstruction method based on Bayesian inference via the Kalman filter update equations. The method not only yields the maximum likelihood/optimal Bayesian reconstruction, but also a covariance matrix expressing the measurement uncertainties in a complete way. From this covariance matrix the error bars on any derived quantity can be easily calculated. This is a first step towards the broader goal of devising an omnibus reconstruction method that could be adapted to any tomographic setup with little effort and that treats measurement uncertainties in a statistically well-founded way. In this first part we restrict ourselves to the important subclass of tomography based on measurements with discrete outcomes (as opposed to continuous ones), and we also ignore any measurement imperfections (dark counts, less than unit detector efficiency, etc.), which will be treated in a follow-up paper. We illustrate our general theory on real tomography experiments of quantum optical information processing elements.
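
    For reference, the discrete Kalman filter measurement-update equations that such a Bayesian reconstruction builds on are, in conventional notation (x the state estimate, P its covariance, H the measurement map, R the measurement-noise covariance, y the observation; superscripts - and + denote prior and updated quantities):

        K_k = P_k^{-} H_k^{\top} \left( H_k P_k^{-} H_k^{\top} + R_k \right)^{-1}
        \hat{x}_k^{+} = \hat{x}_k^{-} + K_k \left( y_k - H_k \hat{x}_k^{-} \right)
        P_k^{+} = \left( I - K_k H_k \right) P_k^{-}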

  20. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is unfeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.

  1. A Filtering Method For Gravitationally Stratified Flows

    SciTech Connect

    Gatti-Bono, Caroline; Colella, Phillip

    2005-04-25

    Gravity waves arise in gravitationally stratified compressible flows at low Mach and Froude numbers. These waves can have a negligible influence on the overall dynamics of the fluid but, for numerical methods where the acoustic waves are treated implicitly, they impose a significant restriction on the time step. A way to alleviate this restriction is to filter out the modes corresponding to the fastest gravity waves so that a larger time step can be used. This paper presents a filtering strategy of the fully compressible equations based on normal mode analysis that is used throughout the simulation to compute the fast dynamics and that is able to damp only fast gravity modes.

  2. Efficient Fruit Defect Detection and Glare removal Algorithm by anisotropic diffusion and 2D Gabor filter

    E-print Network

    Katyal, Vini

    2012-01-01

    This paper focuses on fruit defect detection and glare removal using morphological operations. Glare removal is an important preprocessing step, since uneven lighting can introduce glare into images and hamper the results produced through segmentation by Gabor filters. The glare problem is sometimes very pronounced due to unusual reflectance from the camera sensor or stray light entering the scene; the proposed method counteracts this problem and makes defect detection much more pronounced. Anisotropic diffusion is used for further smoothing of the images and for removing high-energy regions, making the defects more retrievable. The algorithm is robust and scalable: the suitability of a particular mask for glare removal has been checked and proved useful, anisotropic diffusion further enhances the defects, and an optimal Gabor filter at various orientations is used for defect detection.

  3. 2-Step IMAT and 2-Step IMRT in three dimensions

    SciTech Connect

    Bratengeier, Klaus

    2005-12-15

    In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can adopt arbitrary diameters with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary; the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the root of the integral of the squared difference of the dose in the target volume and a reference dose, as written out below. For the problem, two local minima exist. Therefore, as a secondary criterion, the magnitudes of hot and cold spots were taken into account. As a result, the solution with a larger segment width was recommended. From plane to plane, for varying radii of the PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight for one segment is to be used for all planes (or leaf pairs, respectively), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of different radii of an OAR along the patient axis. The method is discussed for some classes of tumor/organ at risk combinations. Non-cylindrically symmetric cases were treated as examples. The product of width and weight of the additional segment, as well as the integral across the segment profile, was demonstrated to be an important value. This product was up to a factor of 3 larger than in the 2-D case. Even in three dimensions, the optimized 2-Step IMAT increased the homogeneity of the dose distribution in the PTV profoundly. Rules for adaptation to varying target-OAR combinations were deduced. It can be concluded that 2-Step IMAT and 2-Step IMRT are also applicable in three dimensions. In the majority of cases, weights between 0.5 and 2 will occur for the additional segment. The width-weight product of the second segment is always smaller than the normalized radius of the OAR. The width-weight product of the additional segment is strictly connected to the relevant diameter of the organ at risk and the target volume. The derived formulas can be helpful for adapting an IMRT plan to changing target shapes.
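
    In symbols, the objective function described verbally above can be written as follows (my reading of the definition, with D the delivered dose, D_ref the reference dose, and the integral taken over the target volume):

        f = \sqrt{ \int_{\mathrm{PTV}} \left( D(\mathbf{r}) - D_{\mathrm{ref}} \right)^{2} \, \mathrm{d}V }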

  4. Stepped frequency ground penetrating radar

    DOEpatents

    Vadnais, Kenneth G. (Ojai, CA); Bashforth, Michael B. (Buellton, CA); Lewallen, Tricia S. (Ventura, CA); Nammath, Sharyn R. (Santa Barbara, CA)

    1994-01-01

    A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.

  5. Crowdsourcing step-by-step information extraction to enhance existing how-to videos

    E-print Network

    Nguyen, Phu Tran

    Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step ...

  6. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  7. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  8. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.

  9. Projection filters for modal parameter estimate for flexible structures

    NASA Technical Reports Server (NTRS)

    Huang, Jen-Kuang; Chen, Chung-Wen

    1987-01-01

    Single-mode projection filters are developed for eigensystem parameter estimates from both analytical results and test data. Explicit formulations of these projection filters are derived using the pseudoinverse matrices of the controllability and observability matrices in general use. A global minimum optimization algorithm is developed to update the filter parameters by using an interval analysis method. Modal parameters can be extracted and updated in the global sense within a specific region by passing the experimental data through the projection filters. For illustration of this method, a numerical example is shown by using a one-dimensional global optimization algorithm to estimate modal frequencies and dampings.

  10. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg{sup 2} IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg{sup 2} on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg{sup 2} of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  11. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-optimized Co-adds over 300 deg2 in Five Filters

    NASA Astrophysics Data System (ADS)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  12. Direct construction of phase-only filters

    NASA Astrophysics Data System (ADS)

    Kallman, Robert R.

    1987-12-01

    A new direct construction of phase-only filters which have application for threshold optical correlation detectors is proposed. Simulations performed using 21 M48 model tank images and 21 M113 model armored personnel carrier images illustrate the power of the method. It is found that the resulting filters and their optimized binarizations can be designed to contain a great deal of information and to be stable under perturbations in the training set. The present filters have higher SNR for true targets and a better discrimination performance against false targets than previous techniques.
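
    A minimal numerical sketch of a classical phase-only filter and its use in a correlation detector (a generic NumPy illustration, not Kallman's construction; the training-set design and binarization discussed above are omitted):

        import numpy as np

        def phase_only_filter(reference):
            # Keep only the phase of the reference spectrum: conj(F) / |F|, unit magnitude everywhere.
            F = np.fft.fft2(reference)
            return np.exp(-1j * np.angle(F))

        def correlation_plane(scene, pof):
            # Correlation of the scene with the phase-only filter, evaluated via FFTs.
            return np.abs(np.fft.ifft2(np.fft.fft2(scene) * pof))

        ref = np.zeros((64, 64)); ref[20:40, 25:35] = 1.0       # toy target
        scene = np.roll(ref, (5, -7), axis=(0, 1))              # shifted copy of the target
        plane = correlation_plane(scene, phase_only_filter(ref))
        print(np.unravel_index(plane.argmax(), plane.shape))    # peak at the target's shift, here (5, 57)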

  13. Recirculating electric air filter

    NASA Astrophysics Data System (ADS)

    Bergman, W.

    1985-01-01

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  14. Recirculating electric air filter

    DOEpatents

    Bergman, Werner (Pleasanton, CA)

    1986-01-01

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  15. Hepa filter dissolution process

    DOEpatents

    Brewer, Ken N. (Arco, ID); Murphy, James A. (Idaho Falls, ID)

    1994-01-01

    A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

  16. HEPA filter dissolution process

    DOEpatents

    Brewer, K.N.; Murphy, J.A.

    1994-02-22

    A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.

  17. Recirculating electric air filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  18. B-spline design of digital FIR filter using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    In the forthcoming era, digital filters are becoming a true replacement for analog filter designs. In this paper we examine a design method for FIR filters using global search optimization techniques known as evolutionary computation, via the genetic algorithm and bacterial foraging, where the filter design is treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. This is an optimization problem. A direct approach has been deployed to design B-spline window based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized by using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.

  19. ON THE CONVERGENCE OF THE ENSEMBLE KALMAN FILTER

    PubMed Central

    Mandel, Jan; Cobb, Loren; Beezley, Jonathan D.

    2013-01-01

    Convergence of the ensemble Kalman filter in the limit for large ensembles to the Kalman filter is proved. In each step of the filter, convergence of the ensemble sample covariance follows from a weak law of large numbers for exchangeable random variables, the continuous mapping theorem gives convergence in probability of the ensemble members, and Lp bounds on the ensemble then give Lp convergence. PMID:24843228
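
    To make the analysis step concrete, a small perturbed-observation ensemble Kalman filter update (a generic textbook sketch in NumPy, not the paper's notation), where each column of X is one ensemble member:

        import numpy as np

        def enkf_analysis(X, y, H, R, rng=None):
            # Perturbed-observation EnKF analysis step.
            # X: (n, N) forecast ensemble, y: (m,) observation, H: (m, n), R: (m, m).
            rng = np.random.default_rng() if rng is None else rng
            N = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
            P = A @ A.T / (N - 1)                            # ensemble sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # gain built from the sample covariance
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
            return X + K @ (Y - H @ X)                       # each member pulled toward its perturbed observation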

  20. Global Optimization of Mesh Quality

    E-print Network

    Neumaier, Arnold

    Slide excerpt from "Global Optimization of Mesh Quality" (D. Eppstein, Meshing Roundtable 2001). Topics introduced: mesh quality issues and meshing steps; connectivity optimization via Delaunay triangulation and edge insertion; global point ...

  1. A mollified ensemble Kalman filter

    E-print Network

    Reich, Sebastian

    Abstract excerpt: It is well known that ... Kalman filters might lead to spurious high-frequency adjustment processes in the model dynamics. Various ... arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step ...

  2. An Entanglement Filter

    E-print Network

    Ryo Okamoto; Jeremy L. O'Brien; Holger F. Hofmann; Tomohisa Nagata; Keiji Sasaki; Shigeki Takeuchi

    2009-05-01

    The ability to filter quantum states is a key capability in quantum information science and technology, in which one-qubit filters, or polarizers, have found wide application. Filtering on the basis of entanglement requires extension to multi-qubit filters with qubit-qubit interactions. We demonstrated an optical entanglement filter that passes a pair of photons if they have the desired correlations of their polarization. Such devices have many important applications to quantum technologies.

  3. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  4. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
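
    A toy illustration of leapfrog time stepping with the RA/RAW filter for dx/dt = F(x) (my own sketch based on the published formulation; alpha = 1 recovers the standard Robert-Asselin filter, while a value near 0.53 is a typical RAW choice):

        import numpy as np

        def leapfrog_raw(F, x0, dt, nsteps, nu=0.2, alpha=0.53):
            # Leapfrog scheme with the Robert-Asselin-Williams (RAW) filter.
            x_prev = x0
            x_curr = x0 + dt * F(x0)                          # simple forward-Euler first step
            for _ in range(nsteps - 1):
                x_next = x_prev + 2.0 * dt * F(x_curr)        # leapfrog step
                d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)   # filter displacement
                x_filt = x_curr + alpha * d                   # RA-type damping of the computational mode
                x_next = x_next + (alpha - 1.0) * d           # RAW correction restoring amplitude accuracy
                x_prev, x_curr = x_filt, x_next
            return x_curr

        # Neutral oscillation dx/dt = i*x: the amplitude should stay close to 1 over one period.
        print(abs(leapfrog_raw(lambda x: 1j * x, 1.0 + 0j, 0.01, 628)))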

  5. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  6. Design-Filter Selection for H2 Control of Microgravity Isolation Systems: A Single-Degree-of-Freedom Case Study

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Whorton, Mark S.

    2000-01-01

    Many microgravity space-science experiments require active vibration isolation, to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choices of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity- and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations that were noted during the design process are presented, along with explanations based on the existent theory for such problems.

  7. Design of optical bandpass filters based on a two-material multilayer structure.

    PubMed

    Belyaev, B A; Tyurnev, V V; Shabanov, V F

    2014-06-15

    An easy method for designing filters with equalized passband ripples of a given magnitude is proposed. The filter, which is made of two dielectric materials, comprises coupled half-wavelength resonators and multilayer mirrors. The filter design begins with the synthesis of the multimaterial filter prototype whose mirrors consist of quarter-wavelength layers. Optimal refractive indices of the layers in the prototype are obtained by a special optimization based on universal rules. The thicknesses of the mirrors' layers in the final filter are computed using derived formulas. A design procedure example for silicon-air bandpass filters with a fractional bandwidth of 1% is described. PMID:24978524

  8. Towards robust particle filters for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Peter Jan

    2015-04-01

    In recent years particle filters have matured and several variants are now available that are not degenerate for high-dimensional systems. Often they are based on ad-hoc combinations with Ensemble Kalman Filters. Unfortunately it is unclear what approximations are made when these hybrids are used. The proper way to derive particle filters for high-dimensional systems is exploring the freedom in the proposal density. It is well known that using an Ensemble Kalman Filter as proposal density (the so-called Weighted Ensemble Kalman Filter) does not work for high-dimensional systems. However, much better results are obtained when weak-constraint 4Dvar is used as proposal, leading to the implicit particle filter. Still this filter is degenerate when the number of independent observations is large. The Equivalent-Weights Particle Filter is a filter that works well in systems of arbitrary dimensions, but it contains a few tuning parameters that have to be chosen well to avoid biases. In this paper we discuss ways to derive more robust particle filters for high-dimensional systems. Using ideas from large-deviation theory and optimal transportation particle filters will be generated that are robust and work well in these systems. It will be shown that all successful filters can be derived from one general framework. Also, the performance of the filters will be tested on simple but high-dimensional systems, and, if time permits, on a high-dimensional highly nonlinear barotropic vorticity equation model.
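
    For contrast with these more sophisticated proposals, a compact bootstrap particle filter for a scalar state-space model (a generic baseline sketch with the prior as proposal and multinomial resampling at every step; all model functions and noise levels are placeholders):

        import numpy as np

        def bootstrap_pf(y, f, h, q_std, r_std, n_particles=500, rng=None):
            # y: observations; state model x' = f(x) + q; observation model y = h(x) + r.
            rng = np.random.default_rng() if rng is None else rng
            x = rng.normal(0.0, 1.0, n_particles)                      # initial particle cloud
            means = []
            for yk in y:
                x = f(x) + rng.normal(0.0, q_std, n_particles)         # propagate with the prior as proposal
                w = np.exp(-0.5 * ((yk - h(x)) / r_std) ** 2)          # Gaussian likelihood weights
                w /= w.sum()
                means.append(np.sum(w * x))                            # weighted filtered mean
                x = x[rng.choice(n_particles, n_particles, p=w)]       # multinomial resampling
            return np.array(means)

        # Toy run on a linear-Gaussian random walk, where the exact answer is the Kalman filter.
        rng = np.random.default_rng(1)
        truth = np.cumsum(rng.normal(0, 0.1, 100))
        obs = truth + rng.normal(0, 0.5, 100)
        est = bootstrap_pf(obs, lambda x: x, lambda x: x, 0.1, 0.5, rng=rng)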

  9. A superior edge preserving filter with a systematic analysis

    NASA Technical Reports Server (NTRS)

    Holladay, Kenneth W.; Rickman, Doug

    1991-01-01

    A new, adaptive, edge preserving filter for use in image processing is presented. It shows superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels have been accumulated. Rather than simply comparing the visual results of processing with this operator to those of other filters, approaches were developed that allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.

  10. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology due to the uniqueness and invariance of fingerprints, making it one of the most convenient and reliable techniques for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and fingerprint preprocessing plays an important part in such systems. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification, image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The results show that the Gabor filter is effective for fingerprint image enhancement.
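
    As an illustrative sketch only (not the article's MATLAB demonstration), the Python fragment below builds a real-valued Gabor kernel and filters one fingerprint block with it; it assumes the local ridge orientation theta and ridge frequency freq have already been estimated, and the kernel size and sigma values are arbitrary choices.

      import numpy as np
      from scipy.ndimage import convolve

      def gabor_kernel(ksize, theta, freq, sigma):
          """Real-valued Gabor kernel: isotropic Gaussian envelope times a cosine along theta."""
          half = ksize // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the ridge normal
          envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
          return envelope * np.cos(2.0 * np.pi * freq * xr)

      def enhance_block(block, theta, freq, sigma=4.0):
          """Filter one fingerprint block with a Gabor kernel matched to its local ridge flow."""
          return convolve(block.astype(float), gabor_kernel(17, theta, freq, sigma), mode='nearest')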

  11. Rigid porous filter

    DOEpatents

    Chiang, Ta-Kuan (Morgantown, WV); Straub, Douglas L. (Morgantown, WV); Dennis, Richard A. (Morgantown, WV)

    2000-01-01

    The present invention involves a porous rigid filter including a plurality of concentric filtration elements having internal flow passages and forming external flow passages therebetween. The present invention also involves a pressure vessel containing the filter for the removal of particulates from high pressure particulate containing gases, and further involves a method for using the filter to remove such particulates. The present filter has the advantage of requiring fewer filter elements due to the high surface area-to-volume ratio provided by the filter, requires a reduced pressure vessel size, and exhibits enhanced mechanical design properties, improved cleaning properties, configuration options, modularity and ease of fabrication.

  12. Cordierite silicon nitride filters

    SciTech Connect

    Sawyer, J.; Buchan, B. ); Duiven, R.; Berger, M. ); Cleveland, J.; Ferri, J. )

    1992-02-01

    The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased, with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions and reporting the results. In Phase I, Cordierite Silicon Nitride (CSN) was developed and tested for permeability and strength. Target values for each of these parameters were established early in the program. The values were met by the material development effort in Phase I. The crossflow filter design effort proceeded by developing a macroscopic design based on required surface area and estimated stresses. Then the thermal and pressure stresses were estimated using finite element analysis. In Phase II of this program, the filter manufacturing technique was developed, and the manufactured filters were tested. The technique developed involved press-bonding extruded tiles to form a filter, producing a monolithic filter after sintering. Filters manufactured using this technique were tested at Acurex and at the Westinghouse Science and Technology Center. The filters did not delaminate during testing and operated with high collection efficiency and good cleanability. Further development in the areas of sintering and filter design is recommended.

  13. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, Harry S. (219 Rockwood Dr., Richland, WA 99352); Thompson, Robert C. (5313 Phoebe La., West Richland, WA 99352); Hubbard, Charles W. (1900 Stevens, Apt. 526, Richland, WA 99352); Perkins, Richard W. (1413 Sunset, Richland, WA 99352)

    1997-01-01

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant.

  14. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, H.S.; Thompson, R.C.; Hubbard, C.W.; Perkins, R.W.

    1997-03-25

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant. 5 figs.

  15. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 2012-07-01 false PM sampling media (e.g., filters) preconditioning and...Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and...the following steps to prepare PM sampling media (e.g., filters) and equipment...

  16. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 2011-07-01 false PM sampling media (e.g., filters) preconditioning and...Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and...the following steps to prepare PM sampling media (e.g., filters) and equipment...

  17. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 2013-07-01 false PM sampling media (e.g., filters) preconditioning and...Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and...the following steps to prepare PM sampling media (e.g., filters) and equipment...

  18. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 2014-07-01 false PM sampling media (e.g., filters) preconditioning and...Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and...the following steps to prepare PM sampling media (e.g., filters) and equipment...

  19. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  20. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  1. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  2. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  3. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  4. POLYNOMIAL-BASED DIGITAL FILTERS AS PROTOTYPE FILTERS IN DFT MODULATED FILTER BANKS

    E-print Network

    Göckler, Heinz G.

    The authors investigate the possibility of using polynomial-based digital FIR filters as prototype filters in DFT and cosine modulated filter banks. In order to apply the FIR filter with piecewise polynomial response ...

  5. Thermal control design of the Lightning Mapper Sensor narrow-band spectral filter

    NASA Technical Reports Server (NTRS)

    Flannery, Martin R.; Potter, John; Raab, Jeff R.; Manlief, Scott K.

    1992-01-01

    The performance of the Lightning Mapper Sensor is dependent on the temperature shifts of its narrowband spectral filter. To perform over a 10 degree FOV with a 0.8 nm bandwidth, the filter must be 15 cm in diameter and mounted externally to the telescope optics. The filter thermal control required a filter design optimized for minimum bandpass shift with temperature, a thermal analysis of substrate materials for maximum temperature uniformity, and a thermal radiation analysis to determine the parameter sensitivity of the radiation shield for the filter, the filter thermal recovery time after occultation, and heater power to maintain filter performance in the earth-staring geosynchronous environment.

  6. Reading out population codes with a matched filter 

    E-print Network

    van Rossum, Mark; Renart, Alfonso; Nelson, Sacha; Wang, X.-J.; Turrigiano, Gina G.

    2001-01-01

    We study the optimal way to decode information present in a population code. Using a matched filter, the performance in Gaussian additive noise is as good as the theoretical maximum. The scheme can be applied when ...

  7. Bag filters for TPP

    SciTech Connect

    L.V. Chekalov; Yu.I. Gromov; V.V. Chekalov

    2007-05-15

    Cleaning of TPP flue gases with bag filters capable of pulsed regeneration is examined. A new filtering element with a three-dimensional filtering material formed from a needle-broached cloth, in which the filtration area, as compared with a conventional smooth bag, is increased by more than two times, is proposed. The design of a new FRMI type of modular filter is also proposed. A standard series of FRMI filters with a filtration area ranging from 800 to 16,000 m{sup 2} is designed for an output of more than 1 million m{sup 3}/h of cleaned gas. The new bag filter permits dry collection of sulfur oxides from waste gases at TPP operating on high-sulfur coals. The design of the filter makes it possible to replace filter elements without taking the entire unit out of service.

  8. SOLUTION OF A GROUNDWATER CONTROL PROBLEM WITH IMPLICIT FILTERING \\Lambda

    E-print Network

    A. Battermann, J. M. ... apply implicit filtering to a groundwater temperature control problem at an industrial site; the problem has some of the important difficulties ... Key words: implicit filtering, groundwater flow and transport, optimal control, parallel ...

  9. MST Filterability Tests

    SciTech Connect

    Poirier, M. R.; Burket, P. R.; Duignan, M. R.

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2 – 4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  10. Fouling of ceramic filters and thin-film composite reverse osmosis membranes by inorganic and bacteriological constituents

    SciTech Connect

    Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.

    1991-12-31

    Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters {minus}0.2{mu} nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al{sup 3+}) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.

  11. Fouling of ceramic filters and thin-film composite reverse osmosis membranes by inorganic and bacteriological constituents

    SciTech Connect

    Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.

    1991-01-01

    Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters {minus}0.2{mu} nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al{sup 3+}) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.

  12. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.

  13. Analysis of characteristic of microwave regeneration for diesel particulate filter

    SciTech Connect

    Ning Zhi; Zhang Guanglong; Lu Yong; Liu Junmin; Gao Xiyan; Liang Iunhui; Chen Jiahua

    1995-12-31

    A mathematical model for the microwave regeneration of a diesel particulate filter is proposed according to the characteristics of the microwave regeneration process. The model is used to calculate the temperature field, the distribution of particulate, and the density field of oxygen in the filter during regeneration, using data for a typical ceramic foam particulate filter. The parametric study demonstrates how some of the main parameters, such as the microwave attenuation constant of the filter, the filter particulate loading, and the power and distribution of microwave energy, affect the efficiency of regeneration, the maximum filter temperature, and the regeneration duration. The results show that it is possible to regenerate diesel particulate filters under certain conditions by using microwave energy. This paper gives a comprehensive understanding of the main factors that affect the process of microwave regeneration and provides a theoretical basis for the optimal design of the microwave regeneration system.

  14. Highly tunable microwave and millimeter wave filtering using photonic technology

    NASA Astrophysics Data System (ADS)

    Seregelyi, Joe; Lu, Ping; Paquet, Stéphane; Celo, Dritan; Mihailov, Stephen J.

    2015-05-01

    The design for a photonic microwave filter tunable in both bandwidth and operating frequency is proposed and experimentally demonstrated. The circuit is based on a single sideband modulator used in conjunction with two or more transmission fiber Bragg gratings (FBGs) cascaded in series. It is demonstrated that the optical filtering characteristics of the FBGs are instrumental in defining the shape of the microwave filter, and the numerical modeling was used to optimize these characteristics. A multiphase-shift transmission FBG design is used to increase the dynamic range of the filter, control the filter ripple, and maximize the slope of the filter skirts. Initial measurements confirmed the design theory and demonstrated a working microwave filter with a bandwidth tunable from approximately 2 to 3.5 GHz and an 18 GHz operating frequency tuning range. Further work is required to refine the FBG manufacturing process and reduce the impact of fabrication errors.

  15. Robust ensemble filtering and its relation to covariance inflation in the ensemble Kalman filter

    E-print Network

    Xiaodong Luo; Ibrahim Hoteit

    2011-07-31

    We propose a robust ensemble filtering scheme based on the $H_\infty$ filtering theory. The optimal $H_\infty$ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the $H_\infty$ filter is more robust than the Kalman filter, in the sense that the estimation error in the $H_\infty$ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the $H_\infty$ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore we introduce a variant that solves some time-local constraints instead, and hence we call it the time-local $H_\infty$ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), we also propose the concept of ensemble time-local $H_\infty$ filter (EnTLHF). We outline the general form of the EnTLHF, and discuss some of its special cases. In particular, we show that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. We use some numerical examples to assess the relative robustness of the TLHF/EnTLHF in comparison with the corresponding KF/EnKF method.
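
    Since the record notes that an EnKF with certain covariance inflation is essentially an EnTLHF, the short Python sketch below shows only the standard multiplicative inflation step for reference; the inflation factor and the ensemble layout (members in rows) are assumptions of the example, not details taken from the paper.

      import numpy as np

      def inflate_ensemble(ensemble, factor):
          """Multiplicative covariance inflation: spread members about their mean by `factor`."""
          mean = ensemble.mean(axis=0)             # ensemble shaped (n_members, n_state)
          return mean + factor * (ensemble - mean)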

  16. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing that leads to a better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating the low pass filters on SPECT reconstruction algorithms. Criteria for evaluating the filters are estimating the SPECT reconstructed cardiac azimuth and elevation angle. Low pass filters studied are butterworth, gaussian, hamming, hanning and parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only butterworth used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that behavior of hamming, hanning and parzen filter (used with FBP) with varying cutoff is similar for all the datasets. Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms whereas with OSEM for a cutoff < 0.4, it fails to generate cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
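
    For reference, the frequency-domain Butterworth window commonly applied in SPECT reconstruction can be written as H(f) = 1 / (1 + (f / f_c)^(2n)); the Python sketch below is a generic illustration, and the cutoff and order values examined in the study are not reproduced here.

      import numpy as np

      def butterworth_window(freqs, cutoff, order):
          """Frequency-domain Butterworth low-pass window H(f) = 1 / (1 + (f/fc)^(2n))."""
          f = np.asarray(freqs, dtype=float)
          return 1.0 / (1.0 + (f / cutoff) ** (2 * order))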

  17. Algorithm Bibliography Baby Step, Giant Step

    E-print Network

    Babinkostova, Liljana

    Let P, Q ∈ G, where P is a point of order n. The problem is to find an integer k such that kP = Q. The Baby Step, Giant Step method: 1. Choose an integer m ≥ √n and compute mP. 2. Compute and store a list of iP for 1 ≤ i ≤ m − 1. 3. Calculate Q − jmP for j = 0, 1, ..., 7, or until we find a point equal to one of the stored iP.
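
    A minimal Python sketch of the same baby-step giant-step idea, written multiplicatively for the discrete logarithm modulo a prime rather than additively for points on a curve; the modulus, base, and worked values below are assumptions of the example, not taken from the record.

      from math import isqrt

      def bsgs(g, h, p):
          """Baby-step giant-step: return k with g**k % p == h, or None if no solution exists."""
          m = isqrt(p) + 1
          baby = {pow(g, i, p): i for i in range(m)}   # baby steps: g^i for 0 <= i < m
          factor = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
          gamma = h
          for j in range(m):                           # giant steps: h * g^(-j*m)
              if gamma in baby:
                  return j * m + baby[gamma]
              gamma = gamma * factor % p
          return None

      assert pow(5, bsgs(5, 29, 97), 97) == 29         # 5^13 = 29 (mod 97)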

  18. Novel Backup Filter Device for Candle Filters

    SciTech Connect

    Bishop, B.; Goldsmith, R.; Dunham, G.; Henderson, A.

    2002-09-18

    The currently preferred means of particulate removal from process or combustion gas generated by advanced coal-based power production processes is filtration with candle filters. However, candle filters have not shown the requisite reliability to be commercially viable for hot gas clean up for either integrated gasifier combined cycle (IGCC) or pressurized fluid bed combustion (PFBC) processes. Even a single candle failure can lead to unacceptable ash breakthrough, which can result in (a) damage to highly sensitive and expensive downstream equipment, (b) unacceptably low system on-stream factor, and (c) unplanned outages. The U.S. Department of Energy (DOE) has recognized the need to have fail-safe devices installed within or downstream from candle filters. In addition to CeraMem, DOE has contracted with Siemens-Westinghouse, the Energy & Environmental Research Center (EERC) at the University of North Dakota, and the Southern Research Institute (SRI) to develop novel fail-safe devices. Siemens-Westinghouse is evaluating honeycomb-based filter devices on the clean-side of the candle filter that can operate up to 870 C. The EERC is developing a highly porous ceramic disk with a sticky yet temperature-stable coating that will trap dust in the event of filter failure. SRI is developing the Full-Flow Mechanical Safeguard Device that provides a positive seal for the candle filter. Operation of the SRI device is triggered by the higher-than-normal gas flow from a broken candle. The CeraMem approach is similar to that of Siemens-Westinghouse and involves the development of honeycomb-based filters that operate on the clean-side of a candle filter. The overall objective of this project is to fabricate and test silicon carbide-based honeycomb failsafe filters for protection of downstream equipment in advanced coal conversion processes. The fail-safe filter, installed directly downstream of a candle filter, should have the capability for stopping essentially all particulate bypassing a broken or leaking candle while having a low enough pressure drop to allow the candle to be backpulse-regenerated. Forward-flow pressure drop should increase by no more than 20% because of incorporation of the fail-safe filter.

  19. HEPA filter encapsulation

    DOEpatents

    Gates-Anderson, Dianne D. (Union City, CA); Kidd, Scott D. (Brentwood, CA); Bowers, John S. (Manteca, CA); Attebery, Ronald W. (San Lorenzo, CA)

    2003-01-01

    A low viscosity resin is delivered into a spent HEPA filter or other waste. The resin is introduced into the filter or other waste using a vacuum to assist in the mass transfer of the resin through the filter media or other waste.

  20. Practical Active Capacitor Filter

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2005-01-01

    A method and apparatus is described that filters an electrical signal. The filtering uses a capacitor multiplier circuit where the capacitor multiplier circuit uses at least one amplifier circuit and at least one capacitor. A filtered electrical signal results from a direct connection from an output of the at least one amplifier circuit.

  1. Filter service system

    DOEpatents

    Sellers, Cheryl L. (Peoria, IL); Nordyke, Daniel S. (Arlington Heights, IL); Crandell, Richard A. (Morton, IL); Tomlins, Gregory (Peoria, IL); Fei, Dong (Peoria, IL); Panov, Alexander (Dunlap, IL); Lane, William H. (Chillicothe, IL); Habeger, Craig F. (Chillicothe, IL)

    2008-12-09

    According to an exemplary embodiment of the present disclosure, a system for removing matter from a filtering device includes a gas pressurization assembly. An element of the assembly is removably attachable to a first orifice of the filtering device. The system also includes a vacuum source fluidly connected to a second orifice of the filtering device.

  2. Complex Impedance Electronic Filters

    E-print Network

    Vickers, James

    Electronic filters are used widely; their behavior depends on the frequency of the input voltage. A filter must have at least one component whose impedance varies with frequency. The impedance is given by the time-dependent ratio of the voltage across the component to the current through it.

  3. Stratified Filtered Sampling in Stochastic Optimization

    E-print Network

    Mitchell, John E.

    We develop a methodology for evaluating a decision strategy ... Many significant problems dictate the development of strategies for handling sequential decision-making under uncertainty (Figure 1: depiction of sequential decision making under uncertainty).

  4. OPTIMAL FILTERING TECHNIQUES FOR ANALYTICAL STREAMFLOW FORECASTING

    E-print Network

    Simon, Dan

    Water flowing over rocks, hard soil, ponds, lakes, and streams produces direct runoff; some water ... Moisture state values are used to improve streamflow predictions. In general, hydrology is the study of the character of water in streams and lakes and of water on or below the land surface. The Sacramento model ...

  5. Performance analysis of α-β-γ tracking filters using position and velocity measurements

    NASA Astrophysics Data System (ADS)

    Saho, Kenshi; Masugi, Masao

    2015-12-01

    This paper examines the performance of two position-velocity-measured (PVM) α-β-γ tracking filters. The first estimates the target acceleration using the measured velocity, and the second, which is proposed for the first time in this paper, estimates acceleration using the measured position. To quantify the performance of these PVM α-β-γ filters, we analytically derive steady-state errors that assume that the target is moving with constant acceleration or jerk. With these performance indices, the optimal gains of the PVM α-β-γ filters are determined using a minimum-variance filter criterion. The performance of each filter under these optimal gains is then analyzed and compared. Numerical analyses clarify the performance of the PVM α-β-γ filters and verify that their accuracy is better than that of the general position-only-measured α-β-γ filter, even when the variance in velocity measurement noise is comparatively large. We identify the conditions under which the proposed PVM α-β-γ filter outperforms the general α-β-γ filter for different ratios of noise variance in the velocity and position measurements. Finally, numerical simulations verify the effectiveness of the PVM α-β-γ filters for a realistic maneuvering target.
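
    For orientation, the sketch below implements the general position-only-measured α-β-γ tracker that the record uses as its baseline; the constant-acceleration prediction model is standard, but the exact gain conventions (here β/Δt on the velocity correction and 2γ/Δt² on the acceleration correction) vary between references, and the PVM variants analyzed in the paper additionally use a velocity measurement.

      def abg_filter(positions, dt, alpha, beta, gamma, x0=0.0, v0=0.0, a0=0.0):
          """Position-only-measured alpha-beta-gamma tracker with a constant-acceleration model."""
          x, v, a, out = x0, v0, a0, []
          for z in positions:
              # predict position and velocity one step ahead
              xp = x + v * dt + 0.5 * a * dt ** 2
              vp = v + a * dt
              r = z - xp                          # innovation from the position measurement
              x = xp + alpha * r
              v = vp + (beta / dt) * r
              a = a + (2.0 * gamma / dt ** 2) * r
              out.append((x, v, a))
          return out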

  6. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy, as the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
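
    The fragment below illustrates only the generic idea of constraining a Kalman estimate, here by clipping the updated state onto box constraints; the paper's actual contribution, blending constrained and unconstrained estimates according to a residual-based confidence measure, is not reproduced, and all matrices and bounds are placeholders.

      import numpy as np

      def constrained_update(x, P, z, H, R, lo, hi):
          """Standard Kalman measurement update followed by a naive projection (clipping)
          of the updated state onto box constraints lo <= x <= hi."""
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return np.clip(x, lo, hi), P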

  7. STEP Experiment Requirements

    NASA Technical Reports Server (NTRS)

    Brumfield, M. L. (compiler)

    1984-01-01

    A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concept and requirements. A feasibility and preliminary definition study was conducted and the preliminary definition of STEP capabilities and experiment concepts and expected requirements for support services are presented. The preliminary definition of STEP capabilities based on detailed review of potential experiment requirements is investigated. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.

  8. Aircraft Recirculation Filter for Air-Quality and Incident Assessment

    PubMed Central

    Eckels, Steven J.; Jones, Byron; Mann, Garrett; Mohan, Krishnan R.; Weisel, Clifford P.

    2015-01-01

    The current research examines the possibility of using recirculation filters from aircraft to document the nature of air-quality incidents on aircraft. These filters are highly effective at collecting solid and liquid particulates. Identification of engine oil contaminants arriving through the bleed air system on the filter was chosen as the initial focus. A two-step study was undertaken. First, a compressor/bleed air simulator was developed to simulate an engine oil leak, and samples were analyzed with gas chromatograph-mass spectrometry. These samples provided a concrete link between tricresyl phosphates and a homologous series of synthetic pentaerythritol esters from oil and contaminants found on the sample paper. The second step was to test 184 used aircraft filters with the same gas chromatograph-mass spectrometry system; of that total, 107 were standard filters, and 77 were nonstandard. Four of the standard filters had both markers for oil, with the homologous series synthetic pentaerythritol esters being the less common marker. It was also found that 90% of the filters had some detectable level of tricresyl phosphates. Of the 77 nonstandard filters, 30 had both markers for oil, a significantly higher percent than the standard filters. PMID:25641977

  9. Compact planar microwave blocking filters

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop (Inventor); Wollack, Edward J. (Inventor)

    2012-01-01

    A compact planar microwave blocking filter includes a dielectric substrate and a plurality of filter unit elements disposed on the substrate. The filter unit elements are interconnected in a symmetrical series cascade with filter unit elements being organized in the series based on physical size. In the filter, a first filter unit element of the plurality of filter unit elements includes a low impedance open-ended line configured to reduce the shunt capacitance of the filter.

  10. Regenerative particulate filter development

    NASA Technical Reports Server (NTRS)

    Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

    1972-01-01

    Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

  11. Laser radar based relative navigation using improved adaptive Huber filter

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoliang; Gong, Deren; Xu, Liqiang; Shao, Xiaowei; Duan, Dengping

    2011-06-01

    An improved adaptive Huber filter algorithm is proposed in this work to handle model error and measurement noise uncertainty. The adaptive algorithm for model error is obtained by using an upper bound for the state prediction covariance matrix, augmented with a chi-square statistical hypothesis test to guard against filter degradation caused by erroneous residual information. The measurement noise is estimated at each filter step by minimizing a criterion function that originates from the Huber filter. A recursive algorithm is provided for solving the criterion function. The proposed adaptive filter algorithm was successfully implemented in a radar navigation system for spacecraft formation flying in high Earth orbits with real orbit perturbations and non-Gaussian random measurement error. Simulation results indicate that the proposed adaptive filter performs better in robustness and accuracy than previous adaptive algorithms.
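
    As a hedged illustration of the Huber idea only (not the authors' full adaptive algorithm with its chi-square test and covariance bounding), the scalar sketch below inflates the measurement variance by the inverse Huber weight so that large residuals are down-weighted; the threshold k = 1.345 is a conventional choice, not a value from the paper.

      import numpy as np

      def huber_weight(residual, sigma, k=1.345):
          """Huber weight: 1 inside the threshold, decaying like k/|r| outside it."""
          r = abs(residual) / sigma
          return 1.0 if r <= k else k / r

      def robust_scalar_update(x, P, z, H, R):
          """Scalar Kalman-style update with the measurement variance R inflated by the
          inverse Huber weight of the innovation (all quantities are scalars)."""
          resid = z - H * x
          w = huber_weight(resid, np.sqrt(H * P * H + R))
          R_eff = R / max(w, 1e-12)
          K = P * H / (H * P * H + R_eff)
          return x + K * resid, (1.0 - K * H) * P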

  12. Method of producing monolithic ceramic cross-flow filter

    DOEpatents

    Larsen, David A. (Clifton Park, NY); Bacchi, David P. (Schenectady, NY); Connors, Timothy F. (Watervliet, NY); Collins, III, Edwin L. (Albany, NY)

    1998-01-01

    Ceramic filters of various configurations have been used to filter particulates from hot gases exhausted from coal-fired systems. Prior ceramic cross-flow filters have been favored over other types, but those previously known have been assemblies of parts somehow fastened together and consequently subject often to distortion or delamination on exposure to hot gas in normal use. The present new monolithic, seamless, cross-flow ceramic filters, being of one-piece construction, are not prone to such failure. Further, these new products are made by a novel casting process which involves the key steps of demolding the ceramic filter green body so that none of the fragile inner walls of the filter is cracked or broken.

  13. Method of producing monolithic ceramic cross-flow filter

    DOEpatents

    Larsen, D.A.; Bacchi, D.P.; Connors, T.F.; Collins, E.L. III

    1998-02-10

    Ceramic filters of various configurations have been used to filter particulates from hot gases exhausted from coal-fired systems. Prior ceramic cross-flow filters have been favored over other types, but those previously known have been assemblies of parts somehow fastened together and consequently subject often to distortion or delamination on exposure to hot gas in normal use. The present new monolithic, seamless, cross-flow ceramic filters, being of one-piece construction, are not prone to such failure. Further, these new products are made by a novel casting process which involves the key steps of demolding the ceramic filter green body so that none of the fragile inner walls of the filter is cracked or broken. 2 figs.

  14. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m⁻² when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m⁻² with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins.

  15. Adaptive Linear Prediction of Radiation Belt Electrons Using the Kalman Filter

    E-print Network

    E. J. Rigler, D. ... to changes in solar wind bulk speed using linear prediction filters [Baker et al., 1990; Vassiliadis et al., ...] ... based on the Kalman Filter with process noise, to determine optimal time-dependent electron response functions ...

  16. Adaptive linear prediction of radiation belt electrons using the Kalman filter

    E-print Network

    E. J. Rigler, D. N. ... a system identification scheme, based on the Kalman filter with process noise, to determine optimal time-dependent ... INDEX TERMS: Energetic particles, trapped; 2722 Magnetospheric Physics: Forecasting; KEYWORDS: Kalman filter, electron ...

  17. Control and Intelligent Systems, Vol. 35, No. 2, 2007 REDUCED ORDER KALMAN FILTERING

    E-print Network

    Simon, Dan

    This paper presents an optimal discrete time reduced order Kalman filter ... of the estimation error covariance. Key words: Kalman filter, state estimation, order reduction.

  18. Forecasting change of the magnetic field using core surface flows and ensemble Kalman filtering

    E-print Network

    ... observatories. We therefore present a method using Ensemble Kalman Filtering (EnKF) to produce an optimal ...

  19. Behavioral/Cognitive Kalman Filtering Naturally Accounts for Visually Guided and

    E-print Network

    Blohm, Gunnar

    ... with predictions about future events. Here, we propose that Kalman filtering can account for the dynamics of both ..., one maintaining a dynamic internal memory of target motion. The outputs of both Kalman filters are then combined in a statistically optimal manner, i.e., weighted ...

  20. Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model

    E-print Network

    Maryland at College Park, University of

    A deterministic Ensemble Kalman Filter (EnKF) with a large enough ensemble is optimal for linear models, since ... and the mean within the Local Ensemble Transform Kalman Filter (LETKF) framework is applied to achieve ...

  1. Filter Bank Addresses and Frequencies Filter Freq (MHz) Addresess (ChA/B)

    E-print Network

    (Block-diagram residue: a filter bank listing filter frequencies in MHz and addresses (Ch A/B), an external 1320 MHz high-pass filter, a power divider feeding input units 1 to 8, filters #2 through #8, and injected noise used only for calibrations; only one channel is shown.)

  2. Stepping Motor Control System

    E-print Network

    Larson, Noble G.

    This paper describes a hardware system designed to facilitate position and velocity control of a group of eight stepping motors using a PDP-11. The system includes motor driver cards and other interface cards in addition ...

  3. Developing metal coated mesh filters for mid-infrared astronomy

    NASA Astrophysics Data System (ADS)

    Sako, Shigeyuki; Miyata, Takashi; Kamizuka, Takafumi; Nakamura, Tomohiko; Asano, Kentaro; Uchiyama, Mizuho; Onaka, Takashi; Sakon, Itsuki; Wada, Takehiko

    2012-09-01

    A metal mesh filter is appropriate as a band-pass filter for astronomy in the long mid-infrared between 25 and 40 µm, where most optical materials are opaque. The mesh filter does not require transparent dielectric materials, unlike interference filters, because the transmission characteristics are determined by surface plasmon-polariton (SPP) resonances excited on a metal surface with a periodic structure. In this study, we have developed mesh filters optimized for the atmospheric windows at 31.8 and 37.5 µm accessible from the Chajnantor site at 5,640 m altitude. First, mesh filters made of a gold film of 2 µm thickness were fabricated. Four identical film-type filters are stacked incoherently to suppress leakage at the stop-bands. The transmissions of the stacked filters have been measured to be 0.8 at the peaks and below 1 × 10⁻³ at the stop-bands at 4 K. The ground-based mid-infrared camera MAX38 has been equipped with the stacked filters and has successfully obtained diffraction-limited stellar images at the Chajnantor site. The film-type mesh filter does not have sufficient mechanical strength for a larger aperture or for use in space. We have developed mesh filters with higher strength by applying the membrane technology used for X-ray optics. The membrane-type mesh filter is made of SiC and coated with a thin gold layer. The optical performance of the mesh filter is in principle independent of internal materials because the SPP resonances are excited only on the metal surface. The fabricated membrane-type mesh filter has been confirmed to provide optical performance comparable to that of the film-type mesh filter.

  4. Adaptive weighted median filter utilizing impulsive noise detection

    NASA Astrophysics Data System (ADS)

    Ishihara, Jun; Meguro, Mitsuhiko; Hamada, Nozomu

    1999-10-01

    The removal of noise from images is an important ongoing issue. It is also useful as a preprocessing step for edge detection, motion estimation, and so on. In this paper, an adaptive weighted median filter utilizing impulsive noise detection is proposed for the removal of impulsive noise in digital images. The aim of the proposed method is to eliminate impulsive noise effectively while preserving the original fine detail in images, the same goal that other median-type nonlinear filters try to achieve. In our method, we use a weighted median filter whose weights are determined by balancing signal-preserving ability against noise-reduction performance. The trade-off between these two conflicting properties is realized using a noise detection mechanism and an optimized adaptation process. In previous work, a threshold value between the signal and the output of the median filter had to be decided for noise detection, and the adaptive algorithm for optimizing WM filters used a teacher image for the training process. In our method, the following two new approaches are introduced: (1) the noise detection process applies a discriminant method to the histogram distribution of the deviation from the median filter output; (2) filter weights learned from uncorrupted pixels and their neighborhoods, without the original image, are used in the restoration filtering of noise-corrupted pixels. The validity of the proposed method is shown through experimental results.
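
    To make the core operation concrete, the Python sketch below applies a 3x3 weighted median to an image; in the method described above the weights would come from the adaptation process, whereas here they are simply supplied by the caller, and the window size and border handling are assumptions of the example.

      import numpy as np

      def weighted_median(values, weights):
          """Weighted median: the value at which cumulative weight first reaches half the total."""
          order = np.argsort(values)
          cum = np.cumsum(weights[order])
          return values[order][np.searchsorted(cum, 0.5 * cum[-1])]

      def wm_filter(img, weights):
          """Apply a 3x3 weighted median filter to a 2-D image (borders left unchanged)."""
          src = np.asarray(img, dtype=float)
          out = src.copy()
          w = np.asarray(weights, dtype=float).ravel()
          for i in range(1, src.shape[0] - 1):
              for j in range(1, src.shape[1] - 1):
                  out[i, j] = weighted_median(src[i - 1:i + 2, j - 1:j + 2].ravel(), w)
          return out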

  5. Mapping spatio-temporal filtering algorithms used in fluoroscopy to single core and multicore DSP architectures

    NASA Astrophysics Data System (ADS)

    Dasgupta, Udayan; Ali, Murtaza

    2011-03-01

    Low dose X-ray image sequences, as obtained in fluoroscopy, exhibit high levels of noise that must be suppressed in real-time, while preserving diagnostic structures. Multi-step adaptive filtering approaches, often involving spatio-temporal filters, are typically used to achieve this goal. In this work typical fluoroscopic image sequences, corrupted with Poisson noise, were processed using various filtering schemes. The noise suppression of the schemes was evaluated using objective image quality measures. Two adaptive spatio-temporal schemes, the first one using object detection and the second one using unsharp masking, were chosen as representative approaches for different fluoroscopy procedures and mapped onto Texas Instruments' (TI) high performance digital signal processors (DSP). The paper explains the fixed point design of these algorithms and evaluates its impact on overall system performance. The fixed point versions of these algorithms are mapped onto the C64x+ core using instruction-level parallelism to effectively use its VLIW architecture. The overall data flow was carefully planned to reduce cache and data movement overhead, while working with large medical data sets. Apart from mapping these algorithms onto TI's single core DSP architecture, this work also distributes the operations to leverage multi-core DSP architectures. The data arrangement and flow were optimized to minimize inter-processor messaging and data movement overhead.

  6. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from months to weeks. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains a code for a generic Kalman filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data. The generic Kalman-filter function uses the aforementioned data structures and five implementation- specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
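
    The GKF library itself is written in ANSI C and its data structures are not reproduced here; purely as a reminder of the propagate-and-update cycle such a filter implements, the following minimal Python sketch shows one step of a linear Kalman filter with caller-supplied matrices (all names are placeholders).

      import numpy as np

      def kf_step(x, P, z, F, H, Q, R):
          """One propagate-and-update cycle of a linear Kalman filter."""
          # propagate the state and covariance through the linear dynamics F
          x = F @ x
          P = F @ P @ F.T + Q
          # measurement update with observation z, model H and noise covariance R
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P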

  7. Filtering separators having filter cleaning apparatus

    SciTech Connect

    Margraf, A.

    1984-08-28

    This invention relates to filtering separators of the kind having a housing which is subdivided by a partition, provided with parallel rows of holes or slots, into a dust-laden gas space for receiving filter elements positioned in parallel rows and being impinged upon by dust-laden gas from the outside towards the inside, and a clean gas space. In addition, the housing is provided with a chamber for cleansing the filter element surfaces of a row by counterflow action while covering at the same time the partition holes or slots leading to the adjacent rows of filter elements. The chamber is arranged for the supply of compressed air to at least one injector arranged to feed compressed air and secondary air to the row of filter elements to be cleansed. The chamber is also reciprocatingly displaceable along the partition in periodic and intermittent manner. According to the invention, a surface of the chamber facing towards the partition covers at least two of the rows of holes or slots of the partition, and the chamber is closed upon itself with respect to the clean gas space, and is connected to a compressed air reservoir via a distributor pipe and a control valve. At least one of the rows of holes or slots of the partition and the respective row of filter elements in flow communication therewith are in flow communication with the discharge side of at least one injector acted upon with compressed air. At least one other row of the rows of holes or slots of the partition and the respective row of filter elements is in flow communication with the suction side of the injector.

  8. Effects of filtering parameter value on simulation results

    NASA Astrophysics Data System (ADS)

    Liu, Weiyun; McDonough, J. M.

    2013-11-01

    Aliasing is a fundamental issue in discrete solutions of the Navier-Stokes equations. It arises from under-resolution of numerical approximations, as occurs in large-eddy simulation, and must be treated with a filter. Two approaches to filtering have been distinguished in the LES context: implicit and explicit. Implicit filtering is formally applied to governing equations without specification of a particular filter, and explicit filtering is performed on computed solutions via a prescribed filter, as in signal processing. With explicit filtering, since filtered velocities are used in subsequent time steps, the aliasing phenomenon can potentially be removed completely; we will focus on this form in the present work. Numerical filters, however, are constructed so as to allow control of the degree of aliasing via parameter values set by the user. We will demonstrate that poor choices of such parameters can result in completely non-physical, yet numerically stable, computed solutions for two widely used filters, Padé and Shuman, for a problem having abundant experimental data for comparisons.
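
    For reference, the classical Shuman three-point filter takes the form u_bar_i = u_i + (s/2)(u_{i+1} - 2 u_i + u_{i-1}), where the user-set parameter s controls how strongly the shortest resolvable wavelengths are damped; the exact parameterization used in the cited work may differ. A minimal NumPy sketch, leaving the endpoints unfiltered and using an illustrative parameter value:

```python
import numpy as np

def shuman_filter(u, s=0.5):
    """One pass of a three-point Shuman-type smoothing filter.

    s is the user-chosen parameter: s = 0 leaves the field unchanged and
    s = 0.5 removes the two-grid-interval (2*dx) wave completely; larger
    values over-damp it and begin to distort resolved scales. Endpoints
    are left unfiltered for simplicity.
    """
    u = np.asarray(u, dtype=float)
    uf = u.copy()
    uf[1:-1] = u[1:-1] + 0.5 * s * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return uf

# A resolved sine wave contaminated with a pure 2*dx oscillation: one pass with
# s = 0.5 removes the grid-scale noise while only slightly damping the sine.
x = np.linspace(0.0, 2.0 * np.pi, 65)
u = np.sin(x) + 0.3 * (-1.0) ** np.arange(x.size)
print(np.abs(shuman_filter(u)[1:-1] - np.sin(x)[1:-1]).max())
```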

  9. Optically tunable optical filter

    NASA Astrophysics Data System (ADS)

    James, Robert T. B.; Wah, Christopher; Iizuka, Keigo; Shimotahira, Hiroshi

    1995-12-01

    We experimentally demonstrate an optically tunable optical filter that uses photorefractive barium titanate. With our filter we implement a spectrum analyzer at 632.8 nm with a resolution of 1.2 nm. We simulate a wavelength-division multiplexing system by separating two semiconductor laser diodes, at 1560 nm and 1578 nm, with the same filter. The filter has a bandwidth of 6.9 nm. We also use the same filter to take 2.5-nm-wide slices out of a 20-nm-wide superluminescent diode centered at 840 nm. As a result, we experimentally demonstrate a phenomenal tuning range from 632.8 to 1578 nm with a single filtering device.

  10. Concentric Split Flow Filter

    NASA Technical Reports Server (NTRS)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to effectively fill the volume that was previously considered wasted surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through a second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 as compared to the conventional design.

  11. Contactor/filter improvements

    DOEpatents

    Stelman, D.

    1988-06-30

    A contactor/filter arrangement for removing particulate contaminants from a gaseous stream is described. The filter includes a housing having a substantially vertically oriented granular material retention member with upstream and downstream faces, a substantially vertically oriented microporous gas filter element, wherein the retention member and the filter element are spaced apart to provide a zone for the passage of granular material therethrough. A gaseous stream containing particulate contaminants passes through the gas inlet means as well as through the upstream face of the granular material retention member, passing through the retention member, the body of granular material, the microporous gas filter element, exiting out of the gas outlet means. A cover screen isolates the filter element from contact with the moving granular bed. In one embodiment, the granular material is comprised of porous alumina impregnated with CuO, with the cover screen cleaned by the action of the moving granular material as well as by backflow pressure pulses. 6 figs.

  12. Numerical discretization for nonlinear diffusion filter

    NASA Astrophysics Data System (ADS)

    Mustaffa, I.; Mizuar, I.; Aminuddin, M. M. M.; Dasril, Y.

    2015-05-01

    Nonlinear diffusion filters are widely used in machine vision for image denoising and restoration. This paper presents a study on the effects of different numerical discretizations of the nonlinear diffusion filter. Several numerical discretization schemes are presented, namely semi-implicit, AOS, and fully implicit schemes. The results of these schemes are compared by visual inspection and by objective measurements, e.g., PSNR and MSE. The results are also compared to a Daubechies wavelet denoising method. It is acknowledged that the two preceding schemes have already been discussed in the literature; however, a comparison to the latter scheme has not been made. The semi-implicit scheme uses an additive operator splitting (AOS) developed to overcome the shortcoming of the explicit scheme, i.e., stability only for very small time steps. Although AOS has proven to be efficient, the nonlinear diffusion filter results with the different discretization schemes show that implicit schemes are worth pursuing.
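
    For context, the explicit scheme that AOS was designed to improve upon advances a Perona-Malik-type diffusion equation with a time step small enough to remain stable. The sketch below is a generic NumPy illustration (grayscale image as a 2-D array, periodic boundaries via np.roll, illustrative parameter values), not the semi-implicit or fully implicit code studied in the paper:

```python
import numpy as np

def g(grad2, k=0.3):
    """Perona-Malik edge-stopping diffusivity g(|grad u|^2)."""
    return 1.0 / (1.0 + grad2 / k**2)

def explicit_diffusion_step(u, tau=0.2, k=0.3):
    """One explicit time step of nonlinear diffusion.

    The explicit scheme is only stable for small tau (tau <= 0.25 for this
    four-neighbour stencil), which is exactly the shortcoming AOS removes.
    """
    un = np.roll(u, -1, axis=0); us = np.roll(u, 1, axis=0)
    ue = np.roll(u, -1, axis=1); uw = np.roll(u, 1, axis=1)
    dn, ds, de, dw = un - u, us - u, ue - u, uw - u
    flux = (g(dn**2, k) * dn + g(ds**2, k) * ds +
            g(de**2, k) * de + g(dw**2, k) * dw)
    return u + tau * flux

# Denoise a noisy step edge: the flat regions are smoothed while the edge survives.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
u = clean + 0.1 * rng.standard_normal(clean.shape)
for _ in range(20):
    u = explicit_diffusion_step(u)
print(np.abs(u - clean).mean())
```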

  13. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Kelkar, S. S.; Lee, F. C.

    1983-01-01

    A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.

  14. Learning Nonlinear Spectral Filters for Color Image Reconstruction Michael Moeller1

    E-print Network

    Cremers, Daniel

    ... to represent the input data as the sum of image layers containing features at different scales. ... the idea of learning optimal filters for the task of image denoising, and propose the idea of mixing ... the optimal weights can significantly improve the results in comparison to the standard variational approach.

  15. Hybrid Filter Membrane

    NASA Technical Reports Server (NTRS)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with a discrete particle size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter uses a thin design intended to facilitate filter regeneration by localized air pulsing. The two main features of this invention are the micro-engineered straight-pore membrane and the nanofiber coating combined with it. The micro-engineered straight-pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles. Additionally, the thin nanofiber coating is designed to promote capture of dust particles on the filter surface and to facilitate dust removal with pulse or back airflow.

  16. Uneven-order decentered Shapiro filters for boundary filtering

    NASA Astrophysics Data System (ADS)

    Falissard, F.

    2015-07-01

    This paper addresses the use of Shapiro filters for boundary filtering. A new class of uneven-order decentered Shapiro filters is proposed and compared to classical Shapiro filters and even-order decentered Shapiro filters. The theoretical analysis shows that the proposed boundary filters are more accurate than the centered Shapiro filters and more robust than the even-order decentered boundary filters usable at the same distance to the boundary. The benefit of the new boundary filters is assessed for computations using the compressible Euler equations.

  17. Filter Media Recommendation Review

    SciTech Connect

    Thompson, Robert C.; Miley, Harry S.; Arthur, Richard J.

    2002-01-07

    The original filter recommended by PNNL for the RASA is somewhat difficult to dissolve and has been discontinued by the manufacturer (3M) because the manufacturing process (substrate blown microfiber, or SBMF) has been superseded by a simpler process (scrim-free blown microfiber, or BMF). Several new potential filters have been evaluated by PNNL and by an independent commercial lab. A superior product has been identified which provides higher trapping efficiency and higher air flow, is easier to dissolve, and is thinner, accommodating more filters per RASA roll. This filter is recommended for all ground-based sampling, and with additional mechanical support, it could be useful for airborne sampling as well.

  18. Visual Tracking & Particle Filters

    E-print Network

    LeGland, François

    ... -production (compositing, augmented reality, editing, re-purposing, stereo-3D authoring, motion capture for animation) ... General case: sequential Monte Carlo approximation (particle filter). Pros: transports the full distribution ...

  19. Nanofiber Filters Eliminate Contaminants

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With support from Phase I and II SBIR funding from Johnson Space Center, Argonide Corporation of Sanford, Florida tested and developed its proprietary nanofiber water filter media. Capable of removing more than 99.99 percent of dangerous particles like bacteria, viruses, and parasites, the media was incorporated into the company's commercial NanoCeram water filter, an inductee into the Space Foundation's Space Technology Hall of Fame. In addition to its drinking water filters, Argonide now produces large-scale nanofiber filters used as part of the reverse osmosis process for industrial water purification.

  20. Birefringent filter design

    NASA Technical Reports Server (NTRS)

    Bair, Clayton H. (inventor)

    1991-01-01

    A birefringent filter is provided for tuning the wavelength of a broad band emission laser. The filter comprises thin plates of a birefringent material having thicknesses which are non-unity, integral multiples of the difference between the thicknesses of the two thinnest plates. The resulting wavelength selectivity is substantially equivalent to the wavelength selectivity of a conventional filter which has a thinnest plate having a thickness equal to this thickness difference. The present invention obtains an acceptable tuning of the wavelength while avoiding a decrease in optical quality associated with conventional filters wherein the respective plate thicknesses are integral multiples of the thinnest plate.

  1. Tunable Microwave Filter Design Using Thin-Film Ferroelectric Varactors

    NASA Astrophysics Data System (ADS)

    Haridasan, Vrinda

    Military, space, and consumer-based communication markets alike are moving towards multi-functional, multi-mode, and portable transceiver units. Ferroelectric-based tunable filter designs in RF front-ends are a relatively new area of research that provides a potential solution to support wideband and compact transceiver units. This work presents design methodologies developed to optimize a tunable filter design for system-level integration, and to improve the performance of a ferroelectric-based tunable bandpass filter. An investigative approach to find the origins of high insertion loss exhibited by these filters is also undertaken. A system-aware design guideline and figure of merit for ferroelectric-based tunable bandpass filters are developed. The guideline does not constrain the filter bandwidth as long as it falls within the range of the analog bandwidth of a system's analog to digital converter. A figure of merit (FOM) that optimizes filter design for a specific application is presented. It considers the worst-case filter performance parameters and a tuning sensitivity term that captures the relation between frequency tunability and the underlying material tunability. A non-tunable parasitic fringe capacitance associated with ferroelectric-based planar capacitors is confirmed by simulated and measured results. The fringe capacitance is an appreciable proportion of the tunable capacitance at frequencies of X-band and higher. As ferroelectric-based tunable capacitors form tunable resonators in the filter design, a proportionally higher fringe capacitance reduces the capacitance tunability, which in turn reduces the frequency tunability of the filter. Methods to reduce the fringe capacitance can thus increase frequency tunability or indirectly reduce the filter insertion loss by trading the increased tunability for lower loss. A new two-pole tunable filter topology with high frequency tunability (> 30%), steep filter skirts, wide stopband rejection, and constant bandwidth is designed, simulated, fabricated and measured. The filters are fabricated using barium strontium titanate (BST) varactors. Electromagnetic simulations and measured results of the tunable two-pole ferroelectric filter are analyzed to explore the origins of high insertion loss in ferroelectric filters. The results indicate that the high permittivity of the BST (a ferroelectric) not only makes the filters tunable and compact, but also increases the conductive loss of the ferroelectric-based tunable resonators, which translates into high insertion loss in ferroelectric filters.

  2. Filter holder and gasket assembly for candle or tube filters

    DOEpatents

    Lippert, Thomas Edwin (Murrysville, PA); Alvin, Mary Anne (Pittsburgh, PA); Bruck, Gerald Joseph (Murrysville, PA); Smeltzer, Eugene E. (Export, PA)

    1999-03-02

    A filter holder and gasket assembly for holding a candle filter element within a hot gas cleanup system pressure vessel. The filter holder and gasket assembly includes a filter housing, an annular spacer ring securely attached within the filter housing, a gasket sock, a top gasket, a middle gasket and a cast nut.

  3. Filter holder and gasket assembly for candle or tube filters

    DOEpatents

    Lippert, T.E.; Alvin, M.A.; Bruck, G.J.; Smeltzer, E.E.

    1999-03-02

    A filter holder and gasket assembly are disclosed for holding a candle filter element within a hot gas cleanup system pressure vessel. The filter holder and gasket assembly includes a filter housing, an annular spacer ring securely attached within the filter housing, a gasket sock, a top gasket, a middle gasket and a cast nut. 9 figs.

  4. A Surrogate Management Framework Using Rigorous Trust-Region Steps

    E-print Network

    Vicente, Luís Nunes

    Surrogate models are frequently used in the optimization engineering community ... Keywords: surrogate modeling, trust-region methods, search step, global convergence.

  5. Baby Steps: How "Less is More" in Unsupervised Dependency Parsing

    E-print Network

    Jurafsky, Daniel

    ... grammar induction. Both are based on Klein and Manning's Dependency Model with Valence. The first, Baby Steps, ... to more intricate models and advanced algorithms. ... Global non-convex optimization is hard ...

  6. Stable Kalman filters for processing clock measurement data

    NASA Technical Reports Server (NTRS)

    Clements, P. A.; Gibbs, B. P.; Vandergraft, J. S.

    1989-01-01

    Kalman filters have been used for some time to process clock measurement data. Due to instabilities in the standard Kalman filter algorithms, the results have been unreliable and difficult to obtain. During the past several years, stable forms of the Kalman filter have been developed, implemented, and used in many diverse applications. These algorithms, while algebraically equivalent to the standard Kalman filter, exhibit excellent numerical properties. Two of these stable algorithms, the Upper triangular-Diagonal (UD) filter and the Square Root Information Filter (SRIF), have been implemented to replace the standard Kalman filter used to process data from the Deep Space Network (DSN) hydrogen maser clocks. The data are time offsets between the clocks in the DSN, the timescale at the National Institute of Standards and Technology (NIST), and two geographically intermediate clocks. The measurements are made by using the GPS navigation satellites in mutual view between clocks. The filter programs allow the user to easily modify the clock models, the GPS satellite dependent biases, and the random noise levels in order to compare different modeling assumptions. The results of this study show the usefulness of such software for processing clock data. The UD filter is indeed a stable, efficient, and flexible method for obtaining optimal estimates of clock offsets, offset rates, and drift rates. A brief overview of the UD filter is also given.

  7. Extended range harmonic filter

    NASA Technical Reports Server (NTRS)

    Jankowski, H.; Geia, A. J.; Allen, C. C.

    1973-01-01

    Two types of filters, leaky-wall and open-guide, are combined into single component. Combination gives 10 dB or greater additional attenuation to fourth and higher harmonics, at expense of increasing loss of fundamental frequency by perhaps 0.05 to 0.08 dB. Filter is applicable to all high power microwave transmitters, but is especially desirable for satellite transmitters.

  8. Superconducting microwave filter

    SciTech Connect

    Kommrusch, R.S.

    1991-02-26

    This patent describes an improvement in a microwave cavity filter comprised of an evacuated substantially cylindrical housing with an RF input terminal and an RF output terminal, the cavity filter preferentially coupling RF energy at at least one preferred frequency to the RF output terminal.

  9. Band-elimination filter

    NASA Technical Reports Server (NTRS)

    Shelton, G. B.

    1977-01-01

    Helical resonator is employed to produce stable, highly selective filter. Other features of filter include controlled bandwidth by cascading identical stages and stagger tuning, adjustable notch depth, good isolation between stages, gain set by proper choice of resistors, and elimination of spurious responses.

  10. Tunable acoustical optical filter

    NASA Technical Reports Server (NTRS)

    Lane, A. L.

    1977-01-01

    Solid state filter with active crystal element increases sensitivity and resolution of passive and active spectrometers. Filter is capable of ranging through infrared and visible spectra, can be built as portable device for field use, and is suitable for ecological surveying, for pollution detection, and for pollutant classification.

  11. Stepped inlet optical panel

    DOEpatents

    Veligdan, James T. (6 Stephanie La., Manorville, NY 11949)

    2001-01-01

    An optical panel includes stacked optical waveguides having stepped inlet facets collectively defining an inlet face for receiving image light, and having beveled outlet faces collectively defining a display screen for displaying the image light channeled through the waveguides by internal reflection.

  12. STEP electronic system design

    NASA Technical Reports Server (NTRS)

    Couch, R. H.; Johnson, J. W.

    1984-01-01

    The STEP electronic system design is discussed. The purpose of the design is outlined. The electronic system design is summarized and it is found that an effective conceptual system design is developed; the design represents a unique set of capabilities and makes efficient use of available orbiter resources; and the system capabilities exceed identified potential experiment needs.

  13. CONVERGING RCC STEPPED SPILLWAYS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To meet current dam safety requirements, roller compacted concrete (RCC) stepped spillways have become a popular choice in dam rehabilitation. In many cases, urbanization has changed the hazard classification of these aging watershed structures, and land rights are often not obtainable for widening ...

  14. Spectral diagonal ensemble Kalman filters

    NASA Astrophysics Data System (ADS)

    Kasanický, I.; Mandel, J.; Vejmelka, M.

    2015-08-01

    A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
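
    The central substitution -- keeping only the diagonal of the ensemble covariance in a spectral basis -- can be sketched for the simplest setting: a 1-D periodic state that is fully observed, with observation error assumed diagonal in the same Fourier basis. This is a simplification of the observation operators treated in the paper and is meant only to show where the diagonal approximation enters:

```python
import numpy as np

def spectral_diag_enkf_analysis(ensemble, obs, obs_var):
    """EnKF analysis with the sample covariance replaced by its diagonal
    in the Fourier basis (1-D periodic state, whole state observed)."""
    n_ens, n = ensemble.shape
    mean = ensemble.mean(axis=0)
    pert_hat = np.fft.fft(ensemble - mean, axis=1)          # spectral perturbations
    prior_var = (np.abs(pert_hat) ** 2).sum(axis=0) / (n_ens - 1)
    # White observation error with variance obs_var per grid point has
    # variance n * obs_var per mode in this (unnormalised) DFT convention.
    gain = prior_var / (prior_var + n * obs_var)            # per-mode scalar gain
    mean_hat = np.fft.fft(mean) + gain * np.fft.fft(obs - mean)
    pert_hat = pert_hat * np.sqrt(1.0 - gain)               # square-root-style update
    return np.real(np.fft.ifft(mean_hat + pert_hat, axis=1))

# Tiny demo: 10 members, 128 grid points, noisy observation of a smooth truth.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
truth = np.sin(x)
ens = truth + 0.5 * rng.standard_normal((10, 128))
obs = truth + 0.1 * rng.standard_normal(128)
analysis = spectral_diag_enkf_analysis(ens, obs, obs_var=0.01)
print(np.abs(analysis.mean(axis=0) - truth).mean())
```

    For a second-order stationary field the true covariance is itself diagonal in the Fourier basis, which is why this per-mode scalar update can work well even with very small ensembles.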

  15. Spectral diagonal ensemble Kalman filters

    E-print Network

    Kasanický, Ivan; Vejmelka, Martin

    2015-01-01

    A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields, which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.

  16. A TIME-VARYING KALMAN FILTER APPLIED TO MOVING TARGET TRACKING Nicolas Obolensky, Deniz Erdogmus, Jose C. Principe

    E-print Network

    Slatton, Clint

    In this paper, we describe a time-varying extension to the Kalman filter ... to be inaccurate. The proposed Kalman filter adapts its model (i.e., the state transition matrix) at each step ...

  17. Sub-micron filter

    SciTech Connect

    Tepper, Frederick; Kaledin, Leonid

    2009-10-13

    Aluminum hydroxide fibers approximately 2 nanometers in diameter and with surface areas ranging from 200 to 650 m.sup.2/g have been found to be highly electropositive. When dispersed in water they are able to attach to and retain electronegative particles. When combined into a composite filter with other fibers or particles they can filter bacteria and nano size particulates such as viruses and colloidal particles at high flux through the filter. Such filters can be used for purification and sterilization of water, biological, medical and pharmaceutical fluids, and as a collector/concentrator for detection and assay of microbes and viruses. The alumina fibers are also capable of filtering sub-micron inorganic and metallic particles to produce ultra pure water. The fibers are suitable as a substrate for growth of cells. Macromolecules such as proteins may be separated from each other based on their electronegative charges.

  18. Sintered composite filter

    DOEpatents

    Bergman, W.

    1986-05-02

    A particulate filter medium formed of a sintered composite of 0.5 micron diameter quartz fibers and 2 micron diameter stainless steel fibers is described. Preferred composition is about 40 vol.% quartz and about 60 vol.% stainless steel fibers. The media is sintered at about 1100 °C to bond the stainless steel fibers into a cage network which holds the quartz fibers. High filter efficiency and low flow resistance are provided by the smaller quartz fibers. High strength is provided by the stainless steel fibers. The resulting media has a high efficiency and low pressure drop similar to the standard HEPA media, with tensile strength at least four times greater, and a maximum operating temperature of about 550 °C. The invention also includes methods to form the composite media and a HEPA filter utilizing the composite media. The filter media can be used to filter particles in both liquids and gases.

  19. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones' matrix as a function of wavelength can be created. These output files can then be used as inputs for user written programs. For example, to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations for the Jones' matrix, write the appropriate data to a file. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.

  20. Stabilized BFGS approximate Kalman filter

    E-print Network

    Bibov, Alexander

    The Kalman filter (KF) and Extended Kalman filter (EKF) are well-known tools for assimilating data and model predictions. The filters require storage and multiplication of n × n and n × m matrices and inversion of m × m ...

  1. Kalman Filtering with Intermittent Observations

    E-print Network

    Jordan, Michael I.

    ... within sensor networks, we consider the problem of performing Kalman filtering with intermittent observations ... be neglected. We address this problem starting from the discrete Kalman filtering formulation, and modelling ...

  2. Overhead robot system for remote HEPA filter replacement

    SciTech Connect

    Wiesener, R.W.

    1988-01-01

    A high-efficiency particulate air (HEPA) filter system for facility exhaust air filtration of radioactive particles has been designed that utilizes a modified industrial gantry robot to remotely replace filter elements. The system filtration design capacity can be readily changed by increasing or decreasing the number of plenums, which only affects the cell length and robot bridge travel. The parallel flow plenum design incorporates remote HEPA filter housings, which are commercially available. Filter removal and replacement is accomplished with the robot under sequenced program control. A custom-designed robot control console, which interfaces with the standard gantry robot power center controller, minimizes operator training. Critical sequence steps are operator verified, using closed-circuit television (CCTV), before proceeding to the next programmed stop point. The robot can be operated in a teleoperator mode to perform unstructured maintenance tasks, such as replacing filter housing components and cell lights.

  3. Cermet Filters To Reduce Diesel Engine Emissions

    SciTech Connect

    Kong, Peter

    2001-08-05

    Pollution from diesel engines is a significant part of our nation's air-quality problem. Even under the more stringent standards for heavy-duty engines set to take effect in 2004, these engines will continue to emit large amounts of nitrogen oxides and particulate matter, both of which affect public health. To address this problem, the Idaho National Engineering and Environmental Laboratory (INEEL) invented a self-cleaning, high temperature, cermet filter that reduces heavy-duty diesel engine emissions. The main advantage of the INEEL cermet filter, compared to current technology, is its ability to destroy carbon particles and NOx in diesel engine exhaust. As a result, this technology is expected to improve our nation's environmental quality by meeting the need for heavy-duty diesel engine emissions control. This paper describes the cermet filter technology and the initial research and development effort. Diesel engines currently emit soot and NOx that pollute our air. It is expected that the U.S. Environmental Protection Agency (EPA) will begin tightening the regulatory requirements to control these emissions. The INEEL's self-cleaning, high temperature cermet filter provides a technology to clean heavy-duty diesel engine emissions. Under high engine exhaust temperatures, the cermet filter simultaneously removes carbon particles and NOx from the exhaust gas. The cermet filter is made from inexpensive starting materials, via net shape bulk forming and a single-step combustion synthesis process, and can be brazed to existing structures. It is self-cleaning, lightweight, mechanically strong, thermal shock resistant, and has a high melting temperature, high heat capacity, and controllable thermal expansion coefficient. The filter's porosity is controlled to provide high removal efficiency for carbon particulate. It can be made catalytic to oxidize CO, H2, and hydrocarbons, and reduce NOx. When activated by engine exhaust, the filter produces NH3 and light hydrocarbon gases that can effectively destroy the NOx in the exhaust. The following sections describe cermet filter technology and properties of the INEEL filter.

  4. Speckle filtering of medical ultrasonic images using wavelet and guided filter.

    PubMed

    Zhang, Ju; Lin, Guangkuo; Wu, Lili; Cheng, Yun

    2016-02-01

    Speckle noise is an inherent yet ineffectual residual artifact in medical ultrasound images, which significantly degrades quality and restricts accuracy in automatic diagnostic techniques. Speckle reduction is therefore an important step prior to the analysis and processing of the ultrasound images. A new de-noising method based on an improved wavelet filter and guided filter is proposed in this paper. According to the characteristics of medical ultrasound images in the wavelet domain, an improved threshold function based on the universal wavelet threshold function is developed. The wavelet coefficients of speckle noise and noise-free signal are modeled as Rayleigh distribution and generalized Gaussian distribution respectively. The Bayesian maximum a posteriori estimation is applied to obtain a new wavelet shrinkage algorithm. The coefficients of the low frequency sub-band in the wavelet domain are filtered by guided filter. The filtered image is then obtained by using the inverse wavelet transformation. Experiments with the comparison of the other seven de-speckling filters are conducted. The results show that the proposed method not only has a strong de-speckling ability, but also keeps the image details, such as the edge of a lesion. PMID:26489484
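
    The paper's shrinkage rule is a Bayesian MAP estimator built from Rayleigh and generalized Gaussian models, followed by guided filtering of the low-frequency sub-band. A much simpler stand-in that shows where such a rule plugs in is universal-threshold soft shrinkage of the detail sub-bands with PyWavelets; the wavelet, decomposition level, and threshold below are illustrative, and the guided-filter stage is omitted:

```python
import numpy as np
import pywt

def wavelet_despeckle(img, wavelet="db4", level=3):
    """Soft-threshold the detail sub-bands of a log-transformed image.

    The log transform turns multiplicative speckle into roughly additive
    noise; the paper's Bayesian MAP shrinkage and the guided filtering of
    the approximation band would replace the crude universal threshold here.
    """
    log_img = np.log1p(img.astype(float))
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    # robust noise estimate from the finest diagonal sub-band (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(log_img.size))
    new_coeffs = [coeffs[0]]                       # approximation band left untouched
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                for c in (cH, cV, cD)))
    den = pywt.waverec2(new_coeffs, wavelet)
    den = den[:log_img.shape[0], :log_img.shape[1]]   # crop possible padding
    return np.expm1(den)
```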

  5. Efficient Bayesian updating with PCE-based particle filters based on polynomial chaos expansion and CO2 storage

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Class, H.; Helmig, R.; Nowak, W.

    2011-12-01

    Underground flow systems, such as oil or gas reservoirs and CO2 storage sites, are an important and challenging class of complex dynamic systems. Lacking information about distributed system properties (such as porosity, permeability, ...) leads to model uncertainties up to a level where quantification of uncertainties may become the dominant question in application tasks. History matching to past production data becomes an extremely important issue in order to improve the confidence of prediction. The accuracy of history matching depends on the quality of the established physical model (including, e.g., seismic, geological and hydrodynamic characteristics, fluid properties, etc.). The history matching procedure itself is very time consuming from the computational point of view. Even one single forward deterministic simulation may require parallel high-performance computing. This fact makes a brute-force non-linear optimization approach not feasible, especially for large-scale simulations. We present a novel framework for history matching which takes into consideration the nonlinearity of the model and of inversion, and provides a cheap but highly accurate tool for reducing prediction uncertainty. We propose an advanced framework for history matching based on the polynomial chaos expansion (PCE). Our framework reduces complex reservoir models and consists of two main steps. In step one, the original model is projected onto a so-called integrative response surface via a very recent PCE technique. This projection is totally non-intrusive (following a probabilistic collocation method) and optimally constructed for available reservoir data at the prior stage of Bayesian updating. The integrative response surface keeps the nonlinearity of the initial model at high order and incorporates all suitable parameters, such as uncertain parameters (porosity, permeability, etc.) and design or control variables (injection rate, depth, etc.). Technically, the computational costs for constructing the response surface depend on the number of parameters and the expansion degree. Step two consists of Bayesian updating in order to match the reduced model to available measurements of state variables or other past or real-time observations of system behavior (e.g., past production data or pressure at monitoring wells during a certain time period). In step two, we apply particle filtering on the integrative response surface constructed in step one. Particle filtering is a strong technique for Bayesian updating which takes into consideration the nonlinearity of the inverse problem in history matching more accurately than the ensemble Kalman filter does. Thanks to the computational efficiency of PCE and the integrative response surface, Bayesian updating for history matching becomes an interactive task and can incorporate real-time measurements.
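
    The particle-filtering stage of step two amounts to weighting prior parameter samples by the likelihood of the observed production data, evaluated on the cheap response surface rather than the full reservoir simulator, and then resampling. A generic sketch with a toy one-parameter surrogate and Gaussian measurement error (all values illustrative) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate(theta, t):
    """Stand-in for the PCE response surface: predicted pressure vs. time."""
    return 1.0 + theta * np.sqrt(t)

# Prior parameter samples (particles) and synthetic production observations.
n_particles = 5000
theta = rng.normal(0.5, 0.3, n_particles)
t_obs = np.array([1.0, 2.0, 4.0, 8.0])
sigma = 0.05
y_obs = surrogate(0.8, t_obs) + rng.normal(0.0, sigma, t_obs.size)

# Importance weights from the Gaussian likelihood of the observations,
# evaluated on the cheap surrogate instead of the full simulator.
pred = surrogate(theta[:, None], t_obs[None, :])          # (n_particles, n_obs)
log_w = -0.5 * np.sum((pred - y_obs) ** 2, axis=1) / sigma**2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Systematic resampling yields an equally weighted posterior ensemble.
positions = (rng.random() + np.arange(n_particles)) / n_particles
idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_particles - 1)
posterior = theta[idx]
print("posterior mean, std:", posterior.mean(), posterior.std())
```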

  6. Ceramic fiber reinforced filter

    DOEpatents

    Stinton, David P. (Knoxville, TN); McLaughlin, Jerry C. (Oak Ridge, TN); Lowden, Richard A. (Powell, TN)

    1991-01-01

    A filter for removing particulate matter from high temperature flowing fluids, and in particular gases, that is reinforced with ceramic fibers. The filter has a ceramic base fiber material in the form of a fabric, felt, paper, or the like, with the refractory fibers thereof coated with a thin layer of a protective and bonding refractory applied by chemical vapor deposition techniques. This coating causes each fiber to be physically joined to adjoining fibers so as to prevent movement of the fibers during use and to increase the strength and toughness of the composite filter. Further, the coating can be selected to minimize any reactions between the constituents of the fluids and the fibers. A description is given of the formation of a composite filter using a felt preform of commercial silicon carbide fibers together with the coating of these fibers with pure silicon carbide. Filter efficiency approaching 100% has been demonstrated with these filters. The fiber base material is alternately made from aluminosilicate fibers, zirconia fibers and alumina fibers. Coating with Al2O3 is also described. Advanced configurations for the composite filter are suggested.

  7. Fourier plane filters

    NASA Technical Reports Server (NTRS)

    Oliver, D. S.; Aldrich, R. E.; Krol, F. T.

    1972-01-01

    An electrically addressed liquid crystal Fourier plane filter capable of real time optical image processing is described. The filter consists of two parts: a wedge filter having forty 9 deg segments and a ring filter having twenty concentric rings in a one inch diameter active area. Transmission of the filter in the off (transparent) state exceeds fifty percent. By using polarizing optics, contrast as high as 10,000:1 can be achieved at voltages compatible with FET switching technology. A phenomenological model for the dynamic scattering is presented for this special case. The filter is designed to be operated from a computer and is addressed by a seven bit binary word which includes an on or off command and selects any one of the twenty rings or twenty wedge pairs. The overall system uses addressable latches so that once an element is in a specified state, it will remain there until a change of state command is received. The drive for the liquid crystal filter is ? 30 V peak at 30 Hz to 70 Hz. These parameters give a rise time for the scattering of 20 msec and a decay time of 80 to 100 msec.

  8. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) The signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depends upon the reconstruction filter(s) used. The selection depends upon the application.

  9. Micromachine Wedge Stepping Motor

    SciTech Connect

    Allen, J.J.; Schriner, H.K.

    1998-11-04

    A wedge stepping motor, which will index a mechanism, has been designed and fabricated in the surface micromachine SUMMiT process. This device has demonstrated the ability to index one gear tooth at a time with speeds up to 205 teeth/sec. The wedge stepper motor has the following features, which will be useful in a number of applications: the ability to precisely position mechanical components; simple pulse signals can be used for operation; only 2 drive signals are required for operation; torque and precision capabilities increase with device size; and the device to be indexed is restrained at all times by the wedge-shaped tooth that is used for actuation. This paper will discuss the theory of operation and design of the wedge stepping motor. The fabrication and testing of the device will also be presented.

  10. Reduced Kalman Filters for Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    2011-01-01

    This paper summarizes the author's work on timescales based on Kalman filters that act upon the clock comparisons. The natural Kalman timescale algorithm tends to optimize long-term timescale stability at the expense of short-term stability. By subjecting each post-measurement error covariance matrix to a non-transparent reduction operation, one obtains corrected clocks with improved short-term stability and little sacrifice of long-term stability.

  11. Fuzzy rank LUM filters.

    PubMed

    Nie, Yao; Barner, Kenneth E

    2006-12-01

    The rank information of samples is widely utilized in nonlinear signal processing algorithms. Recently developed fuzzy transformation theory introduces the concept of fuzzy ranks, which incorporates sample spread (or sample diversity) information into the sample ranking framework. Thus, the fuzzy rank reflects a sample's rank, as well as its similarity to the other samples (namely, joint rank order and spread), and can be utilized to improve the performance of the conventional rank-order-based filters. In this paper, the well-known lower-upper-middle (LUM) filters are generalized utilizing the fuzzy ranks, yielding the class of fuzzy rank LUM (F-LUM) filters. Statistical and deterministic properties of the F-LUM filters are derived, showing that the F-LUM smoothers have similar impulsive noise removal capability to the LUM smoothers, while preserving the image details better. The F-LUM sharpeners are capable of enhancing strong edges while simultaneously preserving small variations. The performance of the F-LUM filters is evaluated for the problems of image impulsive noise removal, sharpening and edge-detection preprocessing. The experimental results show that the F-LUM smoothers can achieve a better tradeoff between noise removal and detail preservation than the LUM smoothers. The F-LUM sharpeners are capable of sharpening the image edges without amplifying the noise or distorting the fine details. The joint smoothing and sharpening operation of the general F-LUM filters also showed superiority in the edge-detection preprocessing application. In conclusion, the simplicity and versatility of the F-LUM filters and their advantages over the conventional LUM filters are desirable in many practical applications. This also shows that utilizing fuzzy ranks in filter generalization is a promising methodology. PMID:17153940
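
    For reference, the conventional (crisp-rank) LUM smoother that is being generalized clips the window's center sample between a pair of symmetric order statistics; a 1-D sketch with an illustrative window size and tuning parameter k follows. The fuzzy-rank version replaces the crisp order statistics with fuzzy ones and is not reproduced here.

```python
import numpy as np

def lum_smooth(x, window=5, k=2):
    """Conventional LUM smoother: clip the centre sample between the k-th
    smallest and k-th largest samples in the window.

    k = 1 is the identity; k = (window + 1) // 2 is the median filter.
    """
    x = np.asarray(x, dtype=float)
    half = window // 2
    y = x.copy()
    for i in range(half, len(x) - half):
        w = np.sort(x[i - half:i + half + 1])
        lower, upper = w[k - 1], w[window - k]
        y[i] = np.median([lower, x[i], upper])
    return y

# An impulse on a ramp is removed while the ramp itself is left intact.
sig = np.concatenate([np.zeros(10), np.linspace(0.0, 1.0, 10), np.ones(10)])
sig[7] = 5.0
print(sig[7], "->", lum_smooth(sig)[7])
```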

  12. Production of reference sources of radioactive aerosols in filters for proficiency testing.

    PubMed

    Monsanglant-Louvet, C; Osmond, M; Ferreux, L; Liatimi, N; Maulard, A; Picolo, J L; Marcillaud, B; Gensdarmes, F

    2014-10-01

    In the framework of the organization of proficiency testing, filters with deposits of (137)Cs and (90)Sr+(90)Y radioactive aerosols have been submitted to laboratories for radionuclide measurement. Procedures for the special preparation and characterization of filters have been developed. The different steps of filter preparation, determination of the deposited radionuclide activity and characterization of the homogeneity of these deposits are presented. This method of filter preparation can also be used in the production of secondary standards, whose properties are more adapted to the needs of laboratories measuring radioactivity in filters than are the solid sources that they typically use. PMID:25464171

  13. Texture classification of normal tissues in computed tomography using Gabor filters

    NASA Astrophysics Data System (ADS)

    Dettori, Lucia; Bashir, Alia; Hasemann, Julie

    2007-03-01

    The research presented in this article is aimed at developing an automated imaging system for classification of normal tissues in medical images obtained from Computed Tomography (CT) scans. Texture features based on a bank of Gabor filters are used to classify the following tissues of interest: liver, spleen, kidney, aorta, trabecular bone, lung, muscle, IP fat, and SQ fat. The approach consists of three steps: convolution of the regions of interest with a bank of 32 Gabor filters (4 frequencies and 8 orientations), extraction of two Gabor texture features per filter (mean and standard deviation), and creation of a Classification and Regression Tree-based classifier that automatically identifies the various tissues. The data set used consists of approximately 1000 DICOM images from normal chest and abdominal CT scans of five patients. The regions of interest were labeled by expert radiologists. Optimal trees were generated using two techniques: 10-fold cross-validation and splitting of the data set into a training and a testing set. In both cases, perfect classification rules were obtained provided enough images were available for training (~65%). All performance measures (sensitivity, specificity, precision, and accuracy) for all regions of interest were at 100%. This significantly improves previous results that used Wavelet, Ridgelet, and Curvelet texture features, yielding accuracy values in the 85%-98% range. The Gabor filters' ability to isolate features at different frequencies and orientations allows for a multi-resolution analysis of texture that is essential when dealing with, at times, very subtle differences in the texture of tissues in CT scans.
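
    The feature pipeline described above -- a Gabor filter bank, per-filter mean and standard deviation, and a CART-style tree -- can be sketched with scikit-image and scikit-learn. The four frequencies and eight orientations below are illustrative stand-ins for the values used in the study:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.tree import DecisionTreeClassifier

def gabor_features(roi, frequencies=(0.05, 0.1, 0.2, 0.4), n_orient=8):
    """Mean and standard deviation of the magnitude response of each filter
    in a 4-frequency x 8-orientation Gabor bank (64 features per ROI)."""
    feats = []
    for f in frequencies:
        for i in range(n_orient):
            real, imag = gabor(roi, frequency=f, theta=i * np.pi / n_orient)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)

def train_tissue_classifier(rois, labels):
    """rois: iterable of 2-D grayscale regions of interest; labels: tissue names."""
    X = np.vstack([gabor_features(r) for r in rois])
    return DecisionTreeClassifier(random_state=0).fit(X, labels)   # CART-style tree
```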

  14. Late Wash Filter Demonstration Unit program plan

    SciTech Connect

    Nash, C.A.; Budenstein, S.A.; Boersma, M.D.

    1992-11-10

    This report details a planned non-radioactive engineering demonstration of the DWPF Late Wash Facility (LWF) for washing salt precipitate feed, and of the In-Tank Precipitate (ITP) filters. The scale will be 0.05 to 0.1, with some larger components, prototypical instruments, and full-length filter elements. Precipitate slurry for late wash tests will be fully irradiated (3E08 rads). Program needs and objectives are to demonstrate LWF design, optimize LWF process operations including filter cleaning and benzene sparging, test actual instruments including benzene and nitrite monitors, and test advanced design concepts such as etched filters. In addition, the Late Wash Filter Demonstration Unit (LWFDU) will support the operation and long-term improvement of ITP filtration. The expected cost of the LWFDU is $1.8 million. Operating costs in FY 1993 are expected to be $1.0 million. Testing is expected to begin 3QFY93, with LWF design confirmation and LWF operations bases completed by the end of 1QFY94.

  15. Optimal conditions for hybridization with oligonucleotides: a study with myc-oncogene DNA probes

    SciTech Connect

    Albretsen, C.; Haukanes, B.I.; Aasland, R.; Kleppe, K.

    1988-04-01

    The authors present a study on the refinement of filter-hybridization conditions for a series of synthetic oligonucleotides in the range from 17 to 50 base residues in length. Experimental conditions for hybridization and the subsequent washing steps of the filter were optimized for different lengths of the synthetic oligonucleotides by varying the formamide concentration and washing conditions. Target DNA was immobilized to the nitrocellulose filter with the slot blot technique. The sequences of the synthetic oligonucleotides are derived from the third exon of the human oncogene c-myc and the corresponding viral gene v-myc, and the G+C content was between 43 and 47%. Optimal conditions for hybridization with an 82% homologous 30-mer and 100% homologous 17-, 20-, 25-, 30-, and 50-mers were found to be a formamide concentration of 15, 15, 30, 30, 40, and 50%, respectively. The melting temperature for these optimal hybridization and washing conditions was calculated to be up to 11 °C below the hybridization temperature actually used. This confirms that the duplexes are more stable than expected. The melting points for 17-, 20-, and 30-mers were measured in the presence of 5x SSC and found to be 43, 58, and 60 °C, respectively. Competition between double- and single-stranded DNA probes to the target DNA was investigated. The single-stranded DNA probes were about 30- to 40-fold more sensitive than the double-stranded DNA probes.

  16. Multilevel filtering elliptic preconditioners

    NASA Technical Reports Server (NTRS)

    Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles

    1989-01-01

    A class of preconditioners is presented for elliptic problems built on ideas borrowed from the digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method and a recent method proposed by Bramble-Pasciak-Xu.

  17. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.

  18. Random filtering structure-based compressive sensing radar

    NASA Astrophysics Data System (ADS)

    Zhang, Jindong; Ban, YangYang; Zhu, Daiyin; Zhang, Gong

    2014-12-01

    Recently, with the emerging theory of `compressive sensing' (CS), a radically new concept of compressive sensing radar (CSR) has been proposed in which the time-frequency plane is discretized into a grid. Random filtering is an interesting technique for efficiently acquiring signals in CS theory and can be seen as a linear time-invariant filter followed by decimation. In this paper, a random filtering structure-based CSR system is investigated. Note that the sparse representation and sensing matrices are required to be as incoherent as possible; methods for optimizing the transmit waveform and the FIR filter in the sensing matrix, separately and simultaneously, are presented to decrease the coherence between different target responses. Simulation results show that the optimized waveform and filter lead to smaller coherence, and that the CSR system achieves higher sparsity and better recovery accuracy than with the nonoptimized transmit waveform and sensing matrix.
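
    The acquisition model described above, a linear time-invariant FIR filter followed by decimation, and the mutual coherence being minimized can be sketched in a few lines; the filter length, decimation factor, and random taps below are illustrative, and no optimization of the taps is performed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, taps = 256, 64, 31                  # scene length, measurements, FIR length
step = n // m                             # decimation factor

h = rng.standard_normal(taps)             # random FIR filter (the taps to optimize)
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)   # sparse scene

# Random-filtering measurement: convolve with the FIR filter, then decimate.
y = np.convolve(x, h, mode="full")[::step][:m]

# Equivalent sensing matrix: decimated rows of the full convolution matrix.
Phi = np.array([np.convolve(np.eye(n)[i], h, mode="full") for i in range(n)]).T
A = Phi[::step][:m]                       # identity sparsifying basis assumed
A = A / np.linalg.norm(A, axis=0, keepdims=True)
coherence = np.max(np.abs(A.T @ A) - np.eye(n))
print("mutual coherence:", coherence)     # the quantity the optimization drives down
```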

  19. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.

  20. Wavelet filtering for data recovery

    NASA Astrophysics Data System (ADS)

    Schmidt, W.

    2013-09-01

    In the case of electrical wave measurements in space instruments, digital filtering and data compression on board can significantly enhance the signal and reduce the amount of data to be transferred to Earth. While the instrument's transfer function is often well known, making the application of an optimized wavelet algorithm feasible, the computational power requirements may be prohibitive, as complex floating-point operations are normally needed. This article presents a simplified possibility implemented in low-power 16-bit integer processors used for plasma wave measurements in the SPEDE instrument on SMART-1 and for the Permittivity Probe measurements of the SESAME/PP instrument in Rosetta's Philae Lander on its way to comet 67P/Churyumov-Gerasimenko.
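
    One way to make wavelet filtering affordable on a low-power 16-bit integer processor is an integer-to-integer lifting scheme, which requires only additions and shifts. The Haar (S-transform) sketch below is a generic illustration of that idea, not the SPEDE or SESAME/PP flight code:

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the integer Haar (S-) transform: additions and shifts only."""
    a, b = x[0::2].astype(np.int32), x[1::2].astype(np.int32)
    d = b - a                    # detail (difference) coefficients
    s = a + (d >> 1)             # approximation (rounded mean) coefficients
    return s, d

def haar_lifting_inverse(s, d):
    """Exact integer reconstruction from (s, d)."""
    a = s - (d >> 1)
    b = d + a
    x = np.empty(s.size + d.size, dtype=np.int32)
    x[0::2], x[1::2] = a, b
    return x

# On board, one would keep s and transmit only the few large detail coefficients.
x = (1000 * np.sin(np.linspace(0.0, 4.0 * np.pi, 64))).astype(np.int16)
s, d = haar_lifting_forward(x)
assert np.array_equal(haar_lifting_inverse(s, d), x.astype(np.int32))
```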

  1. Spatial filters for shape control

    NASA Technical Reports Server (NTRS)

    Lindner, Douglas K.; Reichard, Karl M.

    1992-01-01

    Recently there has emerged a new class of sensors, called spatial filters, for structures which respond over a significant gauge length. Examples include piezoelectric laminate PVDF film, modal domain optical fiber sensors, and holographic sensors. These sensors have a unique capability in that they can be fabricated to locally alter their sensitivity to the measurand. In this paper we discuss how these sensors can be used for the implementation of control algorithms for the suppression of acoustic radiation from flexible structures. Based on the relationship between the total power radiated to the far field and the modal velocities of the structure, we show how the sensor placement can be chosen to optimize the control algorithm that suppresses the radiated power.

  2. Minimum uncertainty filters for pulses

    SciTech Connect

    Trantham, E.C.

    1993-06-01

    The objective of this paper is to calculate filters with minimum uncertainty, the product of filter length and bandwidth. The method is applicable to producing minimum uncertainty filters with time or frequency domain constraints on the filter. The calculus of variations is used to derive the conditions that minimize a filter's uncertainty. The general solution is a linear combination of Hermite functions, where the Hermite functions are summed from low to high order until the filter's constraints are met. Filters constrained to have zero amplitude at zero hertz have an uncertainty at least three times greater than expected from the uncertainty principle, and the minimum uncertainty filter is a first derivative Gaussian. For the previous filter, the minimum uncertainty high-cut filter is a Gaussian function of frequency, while the minimum uncertainty low-cut filter is a linear function of frequency.
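
    To make the zero-at-DC result concrete, the sketch below constructs a first-derivative-of-Gaussian filter and numerically estimates its duration-bandwidth product from RMS widths; the sampling interval and Gaussian width are illustrative assumptions, and the Gabor limit quoted in the comment applies to RMS widths.

      # Minimal sketch (illustrative parameters): a first-derivative-of-Gaussian filter
      # (zero response at 0 Hz) and a numerical estimate of its duration-bandwidth product.
      import numpy as np

      dt, sigma = 1e-3, 0.02                                # sample interval and Gaussian width (assumed)
      t = np.arange(-0.2, 0.2, dt)
      h = -t / sigma**2 * np.exp(-t**2 / (2 * sigma**2))    # derivative of a Gaussian

      def rms_width(axis, amplitude):
          """RMS width of |amplitude|^2 about its centroid along `axis`."""
          p = np.abs(amplitude) ** 2
          p /= p.sum()
          mean = (axis * p).sum()
          return np.sqrt((((axis - mean) ** 2) * p).sum())

      H = np.fft.fft(h)
      f = np.fft.fftfreq(t.size, dt)

      dT, dF = rms_width(t, h), rms_width(f, H)
      # Comes out near 3/(4*pi), i.e. about three times the Gabor limit of 1/(4*pi),
      # consistent with the abstract's statement for filters forced to zero at 0 Hz.
      print("duration-bandwidth product:", round(dT * dF, 4))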

  3. Remotely serviced filter and housing

    DOEpatents

    Ross, Maurice J. (Pocatello, ID); Zaladonis, Larry A. (Idaho Falls, ID)

    1988-09-27

    A filter system for a hot cell comprises a housing adapted for input of air or other gas to be filtered, flow of the air through a filter element, and exit of filtered air. The housing is tapered at the top to make it easy to insert a filter cartridge using an overhead crane. The filter cartridge holds the filter element while the air or other gas is passed through the filter element. Captive bolts in trunnion nuts are readily operated by electromechanical manipulators operating power wrenches to secure and release the filter cartridge. The filter cartridge is adapted to make it easy to change a filter element by using a master-slave manipulator at a shielded window station.

  4. Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization

    PubMed Central

    Pei, Fujun; Wu, Mei; Zhang, Simin

    2014-01-01

    The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with the centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed that mixes the innovation method and the effective-particle-number method, combining the advantages of both. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improves the ability of the DPF-SLAM system to isolate faults and enables the system to have better tolerance and robustness. PMID:24883362
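
    The effective-particle-number criterion and resampling machinery referred to above are generic to particle filters. As a hedged illustration (not the authors' DPF-SLAM code), the sketch below computes the effective sample size and applies systematic resampling when it drops below a common threshold.

      # Minimal sketch (generic particle-filter housekeeping, not the DPF-SLAM code):
      # effective particle number and systematic resampling against impoverishment.
      import numpy as np

      rng = np.random.default_rng(2)

      def effective_sample_size(weights):
          """N_eff = 1 / sum(w_i^2) for normalized weights; small values signal degeneracy."""
          return 1.0 / np.sum(weights ** 2)

      def systematic_resample(particles, weights):
          """Draw N particles with probability proportional to their weights."""
          n = len(weights)
          positions = (rng.random() + np.arange(n)) / n
          cumulative = np.cumsum(weights)
          idx = np.searchsorted(cumulative, positions)
          return particles[idx], np.full(n, 1.0 / n)

      particles = rng.standard_normal((500, 3))                 # hypothetical [x, y, heading] states
      weights = rng.random(500)
      weights /= weights.sum()

      if effective_sample_size(weights) < 0.5 * len(weights):   # common resampling threshold
          particles, weights = systematic_resample(particles, weights)
      print("N_eff after resampling:", effective_sample_size(weights))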

  5. RSTFC: A Novel Algorithm for Spatio-Temporal Filtering and Classification of Single-Trial EEG.

    PubMed

    Qi, Feifei; Li, Yuanqing; Wu, Wei

    2015-12-01

    Learning optimal spatio-temporal filters is a key to feature extraction for single-trial electroencephalogram (EEG) classification. The challenges are controlling the complexity of the learning algorithm so as to alleviate the curse of dimensionality and attaining computational efficiency to facilitate online applications, e.g., brain-computer interfaces (BCIs). To tackle these barriers, this paper presents a novel algorithm, termed regularized spatio-temporal filtering and classification (RSTFC), for single-trial EEG classification. RSTFC consists of two modules. In the feature extraction module, an l2-regularized algorithm is developed for supervised spatio-temporal filtering of the EEG signals. Unlike the existing supervised spatio-temporal filter optimization algorithms, the developed algorithm can simultaneously optimize spatial and high-order temporal filters in an eigenvalue decomposition framework and can thus be implemented highly efficiently. In the classification module, a convex optimization algorithm for sparse Fisher linear discriminant analysis is proposed for simultaneous feature selection and classification of the typically high-dimensional spatio-temporally filtered signals. The effectiveness of RSTFC is demonstrated by comparing it with several state-of-the-art methods on three BCI competition data sets collected from 17 subjects. Results indicate that RSTFC yields significantly higher classification accuracies than the competing methods. This paper also discusses the advantage of optimizing channel-specific temporal filters over optimizing a temporal filter common to all channels. PMID:25730834
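
    Algorithms in this family typically reduce filter learning to a regularized (generalized) eigenvalue problem on class-conditional covariance matrices. The sketch below is a CSP-style spatial-filter example with an l2 (ridge) term, offered only as an assumption-laden analogue: it is not the RSTFC algorithm itself and it optimizes spatial filters only, on synthetic data.

      # Minimal sketch (assumption: a CSP-style l2-regularized spatial filter via a
      # generalized eigenvalue problem; this is not the RSTFC algorithm itself).
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(3)
      n_trials, n_channels, n_samples, lam = 40, 8, 200, 0.1        # lam = l2 regularization strength

      X1 = rng.standard_normal((n_trials, n_channels, n_samples))         # class-1 EEG trials (synthetic)
      X2 = 1.5 * rng.standard_normal((n_trials, n_channels, n_samples))   # class-2 EEG trials (synthetic)

      def mean_covariance(X):
          """Average normalized channel covariance over trials."""
          covs = [x @ x.T / np.trace(x @ x.T) for x in X]
          return np.mean(covs, axis=0)

      C1, C2 = mean_covariance(X1), mean_covariance(X2)
      I = np.eye(n_channels)

      # Regularized generalized eigenvalue problem: C1 w = mu (C1 + C2 + lam*I) w.
      mu, W = eigh(C1, C1 + C2 + lam * I)
      filters = W[:, [0, -1]].T                                     # most discriminative filters from both ends
      features = np.log([np.var(filters @ x, axis=1) for x in X1])  # log-variance features per trial
      print("feature matrix shape:", features.shape)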

  6. 11.10 Filter Banks What Are Filter Banks?

    E-print Network

    Fowler, Mark

    Lecture-slide excerpt (figures and slide layout omitted): filter banks are used to slice a wideband signal into subbands. The application example given is a cell-phone basestation FDMA converter, in which an antenna and ADC front end feed a filter bank whose subband outputs are separately demodulated for users 1 through M. (Figure credited to Porat's book.)

  7. SPAR-H Step-by-Step Guidance

    SciTech Connect

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, categorize the HFE as Diagnosis and/or Action; Step-2, rate the Performance Shaping Factors; Step-3, calculate the PSF-modified HEP; Step-4, account for dependence; and Step-5, apply the minimum value cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
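
    For Step-3, the quantification amounts to multiplying a nominal HEP by the product of PSF multipliers, with an adjustment when several PSFs are negative. The sketch below encodes that calculation as commonly described for SPAR-H; treat it as an assumption and defer to the NUREG/CR-6883 worksheets for the authoritative values and procedure.

      # Minimal sketch (assumption: the commonly cited SPAR-H quantification formula;
      # defer to the NUREG/CR-6883 worksheets for authoritative values and procedure).
      NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}   # nominal HEPs used by SPAR-H

      def spar_h_hep(task_type, psf_multipliers):
          """Step-3: PSF-modified HEP, with the adjustment used when 3+ PSFs are negative."""
          nhep = NOMINAL_HEP[task_type]
          composite = 1.0
          for m in psf_multipliers:
              composite *= m
          negative = sum(1 for m in psf_multipliers if m > 1)   # multipliers > 1 degrade performance
          if negative >= 3:
              return nhep * composite / (nhep * (composite - 1) + 1)   # keeps the HEP below 1.0
          return min(nhep * composite, 1.0)

      # Example with illustrative multipliers: an action task with two degrading PSFs.
      print("HEP:", spar_h_hep("action", [2, 10]))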

  8. Mercury speciation by high-performance liquid chromatography atomic fluorescence spectrometry using an integrated microwave/UV interface. Optimization of a single step procedure for the simultaneous photo-oxidation of mercury species and photo-generation of Hg0

    NASA Astrophysics Data System (ADS)

    de Quadros, Daiane P. C.; Campanella, Beatrice; Onor, Massimo; Bramanti, Emilia; Borges, Daniel L. G.; D'Ulivo, Alessandro

    2014-11-01

    We describe the hyphenation of photo-induced chemical vapor generation with high performance liquid chromatography-atomic fluorescence spectrometry (HPLC-AFS) for the quantification of inorganic mercury, methylmercury (MeHg) and ethylmercury (EtHg). In the developed procedure, formic acid in the mobile phase was used for the photodecomposition of organomercury compounds and the reduction of Hg2+ to mercury vapor under microwave/ultraviolet (MW/UV) irradiation. We optimized the proposed method by studying the influence of several operating parameters, including the type of organic acid and its concentration, the MW power, the composition of the HPLC mobile phase and the catalytic action of TiO2 nanoparticles. Under the optimized conditions, the limits of detection were 0.15, 0.15 and 0.35 µg L-1 for inorganic mercury, MeHg and EtHg, respectively. The developed method was validated by determination of the main analytical figures of merit and applied to the analysis of three certified reference materials. The online interfacing of liquid chromatography with photochemical vapor generation-atomic fluorescence for mercury determination is simple, environmentally friendly, and represents an attractive alternative to the conventional tetrahydroborate (THB) system.

  9. Cryogenic coaxial microwave filters

    E-print Network

    Tancredi, G; Meeson, P J

    2014-01-01

    At millikelvin temperatures the careful filtering of electromagnetic radiation, especially in the microwave regime, is critical for controlling the electromagnetic environment for experiments in fields such as solid-state quantum information processing and quantum metrology. We present a design for a filter consisting of small diameter dissipative coaxial cables that is straightforward to construct and provides a quantitatively predictable attenuation spectrum. We describe the fabrication process and demonstrate that the performance of the filters is in good agreement with theoretical modelling. We further perform an indicative test of the performance of the filters by making current-voltage measurements of small, underdamped Josephson junctions at 15 mK, and we present the results.
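
    The predictable attenuation of dissipative coax comes mainly from skin-effect loss, which grows roughly as the square root of frequency. The sketch below evaluates that simple scaling for a hypothetical cable; the length and the 1 GHz attenuation figure are illustrative assumptions, not values from the paper.

      # Minimal sketch (assumption: a simple skin-effect model, alpha(f) = alpha0*sqrt(f),
      # not the authors' full coaxial model): predicted attenuation of a lossy coax filter.
      import numpy as np

      length_m = 2.0                 # cable length (illustrative)
      alpha0_db_per_m = 3.0          # attenuation at 1 GHz in dB/m (illustrative for lossy microcoax)

      f_ghz = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0])
      atten_db = alpha0_db_per_m * np.sqrt(f_ghz) * length_m   # skin-effect scaling with frequency

      for f, a in zip(f_ghz, atten_db):
          print(f"{f:5.1f} GHz : {a:6.1f} dB")   # attenuation grows as sqrt(f), as for resistive coax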

  10. Westinghouse filter update

    SciTech Connect

    Lippert, T.E.; Bruck, G.J.; Smeltzer, E.E.; Newby, R.A.; Bachovchin, D.M.

    1993-09-01

    Hot gas filters have been implemented and operated in four different test facilities: a subpilot-scale entrained gasifier located at the Texaco Montebello Research facilities in California; the Foster Wheeler Advanced Pressurized Fluidized Bed Combustion pilot plant facilities located in Livingston, New Jersey; a slipstream of the American Electric Power (AEP) 70 MW (electric) Tidd-PFBC located in Brilliant, Ohio; and the Ahlstrom 10 MW (thermal) Circulating PFBC facility located in Karhula, Finland. Candle filter testing has occurred at all four facilities; cross flow filter testing has occurred at the Texaco and Foster Wheeler facilities. Table 1 identifies and summarizes the key operating characteristics of these facilities and the type and scale of filter unit tested. A brief description of each project is given.

  11. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
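
    The DFT-IDFT overlap-and-save step at the heart of these architectures can be sketched in a few lines. The example below is a generic software illustration (block size and filter chosen arbitrarily), not the parallel VLSI structure itself, and it verifies the result against direct convolution.

      # Minimal sketch (generic overlap-save FFT filtering, the building block described
      # above; block size and filter are illustrative, not the VLSI architecture itself).
      import numpy as np

      def overlap_save(x, h, n_fft=64):
          """Filter x with FIR h using DFT/IDFT blocks of length n_fft (overlap-save)."""
          m = len(h)
          hop = n_fft - (m - 1)                       # new samples consumed per block
          H = np.fft.fft(h, n_fft)
          x_pad = np.concatenate([np.zeros(m - 1), x])
          y = []
          for start in range(0, len(x), hop):
              block = x_pad[start:start + n_fft]
              if len(block) < n_fft:
                  block = np.pad(block, (0, n_fft - len(block)))
              yb = np.fft.ifft(np.fft.fft(block) * H).real
              y.append(yb[m - 1:])                    # discard the circularly wrapped samples
          return np.concatenate(y)[:len(x)]

      rng = np.random.default_rng(4)
      x = rng.standard_normal(1000)
      h = rng.standard_normal(17)
      assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])   # matches direct convolution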

  12. Ensemble Data Assimilation for Streamflow Forecasting: Experiments with Ensemble Kalman Filter and Particle Filter

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.

    2011-12-01

    We present results of the assimilation of ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
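
    For reference, the EnKF analysis step updates every ensemble member with a gain computed from the sample covariance of the forecast ensemble. The sketch below is a generic stochastic (perturbed-observation) EnKF update with made-up dimensions; it is not the SACSMA coupling used in the study.

      # Minimal sketch (generic stochastic EnKF analysis step with perturbed observations;
      # dimensions and operators are illustrative, not the SACSMA configuration).
      import numpy as np

      rng = np.random.default_rng(5)
      n_ens, n_state, n_obs = 50, 6, 2
      H = np.zeros((n_obs, n_state)); H[0, 0] = H[1, 3] = 1.0   # observe two state components
      R = 0.05 * np.eye(n_obs)

      X = rng.standard_normal((n_state, n_ens))        # forecast ensemble (columns = members)
      y = np.array([1.2, -0.4])                        # observations (synthetic)

      X_mean = X.mean(axis=1, keepdims=True)
      A = X - X_mean                                   # ensemble anomalies
      P_f = A @ A.T / (n_ens - 1)                      # sample forecast covariance

      K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)          # Kalman gain from the ensemble
      Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
      X_a = X + K @ (Y_pert - H @ X)                   # update every member with perturbed observations

      print("analysis ensemble spread:", X_a.std(axis=1).round(3))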

  13. Application of the flexible fiber filter module (3FM) filter to sea water filtration.

    PubMed

    Jeanmaire, J-P; Suty, H; Marteil, P; Breant, P; Pedenaud, P

    2007-01-01

    The new 3FM filter (Flexible Fiber Filter Module), implementing very fine nylon fibers as the filtration medium, was tested at pilot scale for the first time on sea water. The objective was to improve the quality of raw sea water to produce water for injection into offshore wells for extraction purposes on oil-bearing fields. Particles larger than 5 microm must be removed from the injection water to avoid clogging at the point of injection into the porous rock. The purpose of the tests, carried out over several months at Palavas Les Flots (France), was to specify the optimal operating conditions of the 3FM filter. Various coagulants and combinations of reagents were tested at velocities ranging between 50 and 200 m(3)/m(2)/h (ground filtration velocity). On raw sea water of about 1 NTU turbidity and at velocities of 100 m(3)/m(2)/h, the filtered water contained about 300 particles per mL larger than 1 microm, and fewer than 15 particles per mL larger than 5 microm. The filter runs range from one hour to a few hours, varying with the raw water turbidity, the reagent dosing rate and the filtration velocity. Backwashes, a succession of air scours at high air flow rates combined with water phases, the total duration of which did not exceed 1 minute, were shown to be efficient during the three-month testing period. The 3FM filter's performance is promising for many other possible applications. PMID:18048989

  14. [A modified least mean square (LMS) algorithm with variable step-size for an adaptive noise canceller].

    PubMed

    Gao, Hui; Niu, Cong-min; Wu, Wei

    2002-10-01

    Objective. To study a modified Least Mean Square (LMS) algorithm that can be applied in aerospace and aviation fields. Method. A modified LMS algorithm was proposed in which the step-size was calculated from the estimated signal-to-noise ratio (SNR) of the input signal. The filter parameters were adjusted automatically until an optimal estimate of the disturbed speech was obtained. Result. The output SNR of the estimated speech increased to 18.2 dB, 22.1 dB and 25.2 dB, respectively, when the input SNR of the disturbed speech was -6 dB, 0 dB and 6 dB. Conclusion. The performance of the modified algorithm has obvious advantages over that of the NLMS algorithm. The results may be advantageous to related applications of adaptive noise cancellers (ANC) in aerospace and aviation fields. PMID:12449145
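
    As a rough illustration of the idea (an adaptive noise canceller whose step size grows while the error power is high), the sketch below uses a generic power-based variable-step NLMS rule on synthetic data; the step-size formula is an assumption, not the paper's SNR-derived expression.

      # Minimal sketch (assumption: a generic variable-step-size NLMS noise canceller; the
      # power-based step rule is illustrative and not the paper's exact SNR-based formula).
      import numpy as np

      rng = np.random.default_rng(6)
      n, order = 5000, 16
      speech = np.sin(2 * np.pi * 0.01 * np.arange(n))               # stand-in for the clean speech
      noise = rng.standard_normal(n)
      noise_path = np.convolve(noise, [0.6, -0.3, 0.1])[:n]          # noise as picked up by the primary mic
      d = speech + noise_path                                        # primary input: speech + filtered noise
      x = noise                                                      # reference input: noise only

      w = np.zeros(order)
      e_out = np.zeros(n)
      mu_max, mu_min, alpha = 0.5, 0.01, 0.97
      p = 1.0                                                        # smoothed error power
      for k in range(order, n):
          u = x[k - order + 1:k + 1][::-1]                           # current and past reference samples
          e = d[k] - w @ u                                           # canceller output (speech estimate)
          p = alpha * p + (1 - alpha) * e * e
          mu = np.clip(mu_max * p / (p + 1.0), mu_min, mu_max)       # larger steps while error power is high
          w += mu * e * u / (u @ u + 1e-8)                           # normalized LMS update
          e_out[k] = e

      residual = e_out[order:] - speech[order:]
      print("residual noise power:", round(float(np.mean(residual ** 2)), 4))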

  15. A Modified Least Mean Square (LMS) Algorithm with Variable Step-size for an Adaptive Noise Canceller

    NASA Astrophysics Data System (ADS)

    Gao, Hui; Niu, Cong-min; Wu, Wei

    2002-10-01

    Objective. To study a modified Least Mean Square (LMS) algorithm that can be applied in aerospace and aviation fields. Method. A modified LMS algorithm was proposed in which the step-size was calculated from the estimated signal-to-noise ratio (SNR) of the input signal. The filter parameters were adjusted automatically until an optimal estimate of the disturbed speech was obtained. Result. The output SNR of the estimated speech increased to 18.2 dB, 22.1 dB and 25.2 dB, respectively, when the input SNR of the disturbed speech was -6 dB, 0 dB and 6 dB. Conclusion. The performance of the modified algorithm has obvious advantages over that of the NLMS algorithm. The results may be advantageous to related applications of adaptive noise cancellers (ANC) in aerospace and aviation fields.

  16. Filtering as a reasoning-control strategy: An experimental assessment

    NASA Technical Reports Server (NTRS)

    Pollack, Martha E.

    1994-01-01

    In dynamic environments, optimal deliberation about what actions to perform is impossible. Instead, it is sometimes necessary to trade potential decision quality for decision timeliness. One approach to achieving this trade-off is to endow intelligent agents with meta-level strategies that provide them guidance about when to reason (and what to reason about) and when to act. We describe our investigations of a particular meta-level reasoning strategy, filtering, in which an agent commits to the goals it has already adopted, and then filters from consideration new options that would conflict with the successful completion of existing goals. To investigate the utility of filtering, a series of experiments was conducted using the Tileworld testbed. Previous experiments conducted by Kinny and Georgeff used an earlier version of the Tileworld to demonstrate the feasibility of filtering. Results are presented that replicate and extend those of Kinny and Georgeff and demonstrate some significant environmental influences on the value of filtering.

  17. Design of Ternary Correlation Filters to Reduce Probability of Error

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1994-01-01

    The problem of designing ternary phase and amplitude filters (TPAF's) that reduce the probability of image misclassification for a two-class image set is studied. The Fisher ratio is used as a measure of the correct classification rate, and an attempt is made to maximize this quantity in the filter designs. Given the nonanalytical nature of the design problem, a simulated annealing optimization technique is employed. Computer simulation results are presented for several cases including single in-class and out-of-class image sets and multiple image sets corresponding to the design of synthetic discriminant function filters. Significant improvements are found in expected rates of correct classification in comparison to binary phase-only filters and other TPAF designs. Approaches to accelerate the filter design process are also discussed.
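
    Because the objective has no convenient analytical form, the design relies on simulated annealing over the ternary filter values. The sketch below demonstrates that search pattern on a toy Fisher-ratio objective with synthetic image vectors; it is a hedged illustration, not the paper's correlation-plane criterion.

      # Minimal sketch (toy setting, not the paper's correlation-filter objective): simulated
      # annealing over a ternary {-1, 0, +1} filter to maximize a Fisher-ratio-style score.
      import numpy as np

      rng = np.random.default_rng(7)
      n_pix = 64
      in_class = rng.standard_normal((20, n_pix)) + 1.0      # synthetic in-class image vectors
      out_class = rng.standard_normal((20, n_pix)) - 1.0     # synthetic out-of-class image vectors

      def fisher_ratio(h):
          """Separation of filter responses between the two classes."""
          r_in, r_out = in_class @ h, out_class @ h
          return (r_in.mean() - r_out.mean()) ** 2 / (r_in.var() + r_out.var() + 1e-9)

      h = rng.choice([-1, 0, 1], size=n_pix)                 # ternary phase/amplitude filter
      score, T = fisher_ratio(h), 1.0
      for step in range(5000):
          cand = h.copy()
          i = rng.integers(n_pix)
          cand[i] = rng.choice([v for v in (-1, 0, 1) if v != h[i]])   # perturb one ternary element
          s = fisher_ratio(cand)
          if s > score or rng.random() < np.exp((s - score) / T):      # Metropolis acceptance
              h, score = cand, s
          T *= 0.999                                                   # geometric cooling schedule
      print("final Fisher ratio:", round(float(score), 2))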

  18. Anti-Glare Filters

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Glare from CRT screens has been blamed for blurred vision, eyestrain, headaches, etc. Optical Coating Laboratory, Inc. (OCLI) manufactures a coating to reduce glare which was used to coat the windows on the Gemini and Apollo spacecraft. In addition, OCLI offers anti-glare filters (Glare Guard) utilizing the same thin film coating technology. The coating minimizes brightness, provides enhanced contrast and improves readability. The filters are OCLI's first consumer product.

  19. Contactor/filter improvements

    DOEpatents

    Stelman, David (West Hills, CA)

    1989-01-01

    A contactor/filter arrangement for removing particulate contaminants from a gaseous stream includes a housing having a substantially vertically oriented granular material retention member with upstream and downstream faces, a substantially vertically oriented microporous gas filter element, wherein the retention member and the filter element are spaced apart to provide a zone for the passage of granular material therethrough. The housing further includes a gas inlet means, a gas outlet means, and means for moving a body of granular material through the zone. A gaseous stream containing particulate contaminants passes through the gas inlet means as well as through the upstream face of the granular material retention member, passing through the retention member, the body of granular material, the microporous gas filter element, exiting out of the gas outlet means. Disposed on the upstream face of the filter element is a cover screen which isolates the filter element from contact with the moving granular bed and collects a portion of the particulates so as to form a dust cake having openings small enough to exclude the granular material, yet large enough to receive the dust particles. In one embodiment, the granular material is comprised of porous alumina impregnated with CuO, with the cover screen cleaned by the action of the moving granular material as well as by backflow pressure pulses.

  20. NICMOS Filter Wheel Test

    NASA Astrophysics Data System (ADS)

    Wheeler, Thomas

    2009-07-01

    This is an engineering test {described in SMOV4 Activity Description NICMOS-04} to verify the aliveness, functionality, operability, and electro-mechanical calibration of the NICMOS filter wheel motors and assembly after NCS restart in SMOV4. This test has been designed to obviate concerns over possible deformation or breakage of the filter wheel "soda-straw" shafts due to excess rotational drag torque and/or bending moments which may be imparted due to changes in the dewar metrology from warm-up/cool-down. This test should be executed after the NCS {and filter wheel housing} has reached and approximately equilibrated to its nominal operating temperature. Addition of visits G0 - G9 {9/9/09}: Ten visits copied from proposal 11868 {visits 20, 30, ..., 90, A0, B0}. Each visit moves two filter positions, takes lamp ON/OFF exposures and then moves back to the blank position. Visits G0, G1 and G2 will leave the filter wheels disabled. The remaining visits will leave the filter wheels enabled. There are sufficient in-between times to allow for data download and analysis. In case a problem is encountered, the filter wheels will be disabled through a real-time command. The in-between times are all set to 22-50 hours. It is preferable to have as short an in-between time as possible.