Science.gov

Sample records for optimized filtering step

  1. STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2013-03-01

    For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search algorithm. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
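The STEPS idea above can be sketched in a few lines. This is a minimal illustration, not the published tool: the PSM tuples, the score and mass-error cuts, and the decoy-based FDR estimate are all assumptions made for the example.

```python
import itertools

def grid_search_filter(psms, score_cuts, ppm_cuts, max_fdr=0.01):
    """Exhaustively test filter-parameter combinations (in the spirit of
    STEPS) and return the cut pair that maximizes passing target PSMs
    while keeping the decoy-estimated FDR at or below max_fdr.
    Each PSM is a (score, ppm_error, is_decoy) tuple."""
    best, best_n = None, -1
    for s_cut, p_cut in itertools.product(score_cuts, ppm_cuts):
        passing = [p for p in psms if p[0] >= s_cut and abs(p[1]) <= p_cut]
        decoys = sum(1 for p in passing if p[2])
        targets = len(passing) - decoys
        fdr = decoys / targets if targets else 1.0
        if fdr <= max_fdr and targets > best_n:
            best, best_n = (s_cut, p_cut), targets
    return best, best_n
```

Real parameter sets would span more dimensions (per-charge-state scores, tryptic state, etc.), but the exhaustive-combination structure is the same.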

  2. Optimal filtering and filter stability of linear stochastic delay systems

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.-S.; Willsky, A. S.

    1977-01-01

    Optimal filtering equations are obtained for very general linear stochastic delay systems. Stability of the optimal filter is studied in the case where there are no delays in the observations. Using the duality between linear filtering and control, asymptotic stability of the optimal filter is proved. Finally, the cascade of the optimal filter and the deterministic optimal quadratic control system is shown to be asymptotically stable as well.
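As a concrete instance of optimal linear filtering in the delay-free case, a scalar Kalman predict/update cycle can be sketched; the model constants below are illustrative, not taken from the paper.

```python
def kalman_step(x, P, z, a=1.0, c=1.0, q=0.01, r=0.1):
    """One predict/update cycle of the scalar Kalman filter, the optimal
    filter for the linear-Gaussian model x' = a*x + w, z = c*x + v,
    with process noise variance q and measurement noise variance r."""
    # Predict
    x_pred = a * x
    P_pred = a * P * a + q
    # Update
    K = P_pred * c / (c * P_pred * c + r)   # Kalman gain
    x_new = x_pred + K * (z - c * x_pred)
    P_new = (1 - K * c) * P_pred
    return x_new, P_new
```

Filter stability, as studied in the paper, shows up here as the error variance P converging to a steady-state value independent of its initialization.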

  3. Optimization of integrated polarization filters.

    PubMed

    Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J.

    2014-10-01

    This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980
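The authors' parallel tabu search is not reproduced here; the following is a generic, serial tabu-search sketch for a combinatorial design problem, with a toy objective standing in for the lattice-configuration score.

```python
def tabu_search(score, neighbors, start, iters=50, tabu_len=5):
    """Minimal tabu search: always move to the best non-tabu neighbor,
    even if it is worse than the current point, and keep a short memory
    of recently visited configurations to escape local optima."""
    current = start
    best, best_v = start, score(start)
    tabu = [start]
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = max(cands, key=score)
        tabu.append(current)
        tabu = tabu[-tabu_len:]          # bounded tabu memory
        v = score(current)
        if v > best_v:
            best, best_v = current, v
    return best, best_v
```

In the photonic-lattice setting, a configuration would encode which sites hold rods or holes, and the score would come from an electromagnetic solver.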

  4. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

    Reliable, maintainable and cost-effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has brought these issues into focus and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. 
A plan for the Option I, bench-scale test program has also been developed based on the issues identified. The two advanced barrier filter systems have been found to have the potential to be significantly more reliable and less expensive to operate than standard ceramic candle filter system designs. Their key development requirements are the assessment of the design and manufacturing feasibility of the ceramic filter elements, and the small-scale demonstration of their conceptual reliability and availability merits.

  5. Optimal PHD filter for single-target detection and tracking

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald

    2007-09-01

    The PHD filter has attracted much international interest since its introduction in 2000. It is based on two approximations. First, it is a first-order approximation of the multitarget Bayes filter. Second, to achieve closed-form formulas for the Bayes data-update step, the predicted multitarget probability distribution must be assumed Poisson. In this paper we show how to derive an optimal PHD (OPHD) filter, given that target number does not exceed one. (That is, we restrict ourselves to the single-target detection and tracking problem.) We further show that, assuming no more than a single target, the following are identical: (1) the multitarget Bayes filter; (2) the OPHD filter; (3) the CPHD filter; and (4) the multi-hypothesis correlation (MHC) filter. We also note that all of these are generalizations of the integrated probabilistic data association (IPDA) filter of Musicki, Evans, and Stankovic.
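The exact single-target Bayes filter that the OPHD filter reduces to can be illustrated on a discrete state grid; the transition kernel and measurement likelihood below are placeholders, not a tracking model from the paper.

```python
def bayes_filter_step(prior, transition, likelihood):
    """One recursion of the exact Bayes filter on a discrete state grid:
    predict with the transition kernel, update by multiplying with the
    measurement likelihood, then renormalize to a probability vector."""
    n = len(prior)
    predicted = [sum(transition[j][i] * prior[j] for j in range(n))
                 for i in range(n)]
    posterior = [likelihood[i] * predicted[i] for i in range(n)]
    s = sum(posterior)
    return [p / s for p in posterior]
```

The PHD approximation replaces this full posterior with its first moment; in the single-target case the two coincide, which is the point of the abstract.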

  6. Fully optimal filter for ALLEGRO

    NASA Astrophysics Data System (ADS)

    Santostasi, Giovanni

    2004-03-01

    The FAST and SLOW filters are compared when applied to data from one-mode and two-mode resonant gravitational wave detectors. There is no substantial difference between the performance of the two filters in the case of the one-mode detector. A notable reduction of the noise temperature is achieved for a two-mode detector when filtering the data with the FAST filter. We explain the principal reason for the better performance of the FAST filter with respect to the SLOW filter. We also observed that the performance of the FAST filter depends on the ratio Γ between the thermal narrow band noise and the SQUID amplifier white noise.

  7. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 °C (1000 °F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.

  8. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

    This work extends the idea of spectral optimization for the design of weighted median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows' theory. Sample selection probabilities have been used as a basis for designing stack smoothers as they give a measure of the filter's detail-preserving ability and give non-negative filter weights. We will extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
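A weighted median with positive weights can be computed by sorting the window and accumulating weights; this sketch shows the basic smoother the abstract builds on (the negative-weight extension described by the authors is not shown).

```python
def weighted_median(samples, weights):
    """Weighted median: sort the samples and return the value at which
    the cumulative weight first reaches half the total weight."""
    pairs = sorted(zip(samples, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for value, w in pairs:
        acc += w
        if acc >= half:
            return value

def wm_filter(signal, weights):
    """Slide a weighted-median window (length = len(weights)) over the
    signal, replicating the edge samples as padding."""
    k = len(weights) // 2
    padded = signal[:1] * k + signal + signal[-1:] * k
    return [weighted_median(padded[i:i + len(weights)], weights)
            for i in range(len(signal))]
```

The impulse-noise robustness claimed in the abstract is visible even in this toy form: an isolated spike is removed entirely, where a linear FIR filter would smear it.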

  9. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

    This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Network (DSN) among different users within NASA. Access to the DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies: missions spend much time and resources lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user; this is really only known to the users, who understand their own needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of the users' preferences, when the preference values are private information of the users. We bypass this problem by assuming that one can assign money to customers. This assumption is reasonable: a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a given time period. Instead, we can assume that the committee assigns a budget to each mission.

  10. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average false positives per image.
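The adaptive step gradient descent over (alpha, beta, gamma) can be sketched generically. The objective below is a stand-in for the composite of correlation peak height and peak-to-sidelobe ratio, which is not reproduced here; the step-adaptation constants are also assumptions.

```python
def adaptive_gradient_ascent(f, params, step=0.1, shrink=0.5, grow=1.2,
                             iters=100, eps=1e-6):
    """Maximize a filter-quality metric f over a small parameter vector
    (e.g. OT-MACH alpha, beta, gamma) using forward-difference gradients
    and a step size that grows on success and shrinks on failure."""
    params = list(params)
    best = f(params)
    for _ in range(iters):
        # Estimate the gradient numerically, one coordinate at a time.
        grad = []
        for i in range(len(params)):
            trial = params[:]
            trial[i] += eps
            grad.append((f(trial) - best) / eps)
        candidate = [p + step * g for p, g in zip(params, grad)]
        value = f(candidate)
        if value > best:
            params, best, step = candidate, value, step * grow
        else:
            step *= shrink
            if step < 1e-9:
                break
    return params, best
```

In the actual generator each evaluation of f would require building the OT-MACH filter and running a correlation, so keeping the number of evaluations low is the point of the adaptive step.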

  11. Multi-step vortex filtering for phase extraction.

    PubMed

    Aguilar, Alberto; Dávila, Abundio; García-Márquez, Jorge

    2014-04-01

    A quantized version of a continuous spiral phase filter with unitary topological charge, here denominated the multi-step spiral phase filter (MSSPF), is proposed to extract phase from rotated spiral interferograms. Spiral interferograms are usually obtained from phase objects by registering the interference of their vortex-filtered complex amplitude with a reference complex amplitude. The structure found in this kind of interferogram depends on the number of steps used in the MSSPF; the continuous spiral phase filter corresponds to the limit of an infinite number of steps. Reducing the number of steps of the MSSPF affects the residual phase error obtained after the phase extraction method. This error is therefore analysed here using a numerical simulation of a Mach-Zehnder interferometer with an MSSPF and a reduced number of steps. It is shown that, for our proposed method of rotated spiral interferograms, the residual error persists as the number of steps is increased, approaching the residual error reported for the phase extraction method of single-shot spiral interferograms. Furthermore, it is shown that this novel technique can be applied without further modification to phase contrast measurement. Experimental results show similar performance of this phase extraction technique when compared to the results obtained with a commercial interferometer and with the numerical simulations. PMID:24718222

  12. Dual adaptive filtering by optimal projection applied to filter muscle artifacts on EEG and comparative study.

    PubMed

    Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
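The second DAFOP stage is a regression-based reconstruction; in its simplest single-reference form it reduces to projecting an estimated artifact component out of the signal. This is a toy sketch of that idea, not the published multi-window CSP method.

```python
def regress_out(signal, reference):
    """Least-squares regression step: estimate the gain of a reference
    (artifact) component present in the signal and subtract it, leaving
    the residual that is uncorrelated with the reference."""
    num = sum(s * r for s, r in zip(signal, reference))
    den = sum(r * r for r in reference)
    gain = num / den if den else 0.0
    return [s - gain * r for s, r in zip(signal, reference)]
```

DAFOP applies this independently within several frequency windows, with CSP supplying the cerebral-versus-muscular source separation; the sketch above shows only the core projection.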

  13. Dual Adaptive Filtering by Optimal Projection Applied to Filter Muscle Artifacts on EEG and Comparative Study

    PubMed Central

    Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

  14. Optimal filtering of constant velocity torque data.

    PubMed

    Murray, D A

    1986-12-01

    The purpose of this investigation was to implement an optimal filtering strategy for processing in vivo dynamometric data. The validity of employing commonly accepted analog smoothing methods was also appraised. An inert gravitational model was used to assess the filtering requirements of two Cybex II constant velocity dynamometers at 10 pre-set speeds with three selected loads. Speed settings were recorded as percentages of the servomechanism's maximum tachometer feedback voltage (10 to 100% Vfb max). Spectral analyses of unsmoothed torque and associated angular displacement curves, followed by optimized low-pass digital filtering, revealed the presence of two superimposed contaminating influences: a damped oscillation, representing successive sudden braking and releasing of the servomechanism control system; a relatively stationary oscillatory series, which was attributed to the Cybex motor. The optimal cutoff frequency for any data set was principally a positive function of % Vfb max. This association was represented for each machine by a different, but reliable, third order least-squares polynomial, which could be used to accurately predict the correct smoothing required for any speed setting. Unacceptable errors may be induced, especially when measuring peak torques, if data are inappropriately filtered. Over-smoothing disguises inertial artefacts. The use of Cybex recorder damping settings should be discouraged. Optimal filtering is a minimal requirement of valid data processing. PMID:3784873
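The paper's key practical output is a third-order polynomial that predicts the optimal cutoff frequency from the speed setting. The sketch below uses placeholder coefficients (not the Cybex calibration) and a first-order low-pass as a simple stand-in for the optimized digital filter.

```python
import math

def predicted_cutoff(vfb_percent, coeffs):
    """Third-order least-squares polynomial predicting the optimal
    low-pass cutoff (Hz) from the speed setting (% Vfb max).
    The coefficients here are illustrative placeholders."""
    a0, a1, a2, a3 = coeffs
    x = vfb_percent
    return a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

def lowpass(signal, cutoff_hz, fs_hz):
    """First-order low-pass stand-in for the optimized digital filter:
    y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = dt / (rc + dt)
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out
```

The study's warning translates directly: choosing the cutoff too low (over-smoothing) hides the inertial artifacts the analysis is meant to expose, while too high a cutoff leaves the oscillatory contamination in peak-torque measurements.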

  15. Optimization Integrator for Large Time Steps.

    PubMed

    Gast, Theodore F; Schroeder, Craig; Stomakhin, Alexey; Jiang, Chenfanfu; Teran, Joseph M

    2015-10-01

    Practical time steps in today's state-of-the-art simulators typically rely on Newton's method to solve large systems of nonlinear equations. In practice, this works well for small time steps but is unreliable at large time steps at or near the frame rate, particularly for difficult or stiff simulations. We show that recasting backward Euler as a minimization problem allows Newton's method to be stabilized by standard optimization techniques with some novel improvements of our own. The resulting solver is capable of solving even the toughest simulations at the [Formula: see text] frame rate and beyond. We show how simple collisions can be incorporated directly into the solver through constrained minimization without sacrificing efficiency. We also present novel penalty collision formulations for self collisions and collisions against scripted bodies designed for the unique demands of this solver. Finally, we show that these techniques improve the behavior of Material Point Method (MPM) simulations by recasting it as an optimization problem. PMID:26357249
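Recasting backward Euler as minimization is easiest to see for a scalar gradient flow x' = -E'(x): the step minimizes F(x) = E(x) + (x - x_n)^2 / (2 dt), i.e. solves F'(x) = 0. A bisection sketch (the bracket width is an assumption for the example; the paper's solver is a stabilized Newton method):

```python
def backward_euler_step(dE, x_n, dt, iters=60):
    """Backward Euler for the gradient flow x' = -E'(x), obtained by
    minimizing F(x) = E(x) + (x - x_n)^2 / (2 dt); at the minimum
    F'(x) = dE(x) + (x - x_n) / dt = 0, solved here by bisection."""
    Fp = lambda x: dE(x) + (x - x_n) / dt
    lo, hi = x_n - 10.0, x_n + 10.0   # assumed bracket for the sketch
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if Fp(lo) * Fp(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The advantage the abstract exploits is that a minimization view gives globalization tools (line search, trust regions) that plain root-finding Newton lacks at large dt.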

  16. Optimal design of active EMC filters

    NASA Astrophysics Data System (ADS)

    Chand, B.; Kut, T.; Dickmann, S.

    2013-07-01

    A recent trend in the automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

  17. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Computing," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a 0.25-inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.
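MEDOF's search over the complex constant k can be sketched as a coarse magnitude/phase grid; the metric callback below is a placeholder for SNR or PCE, which would require the full filter computation.

```python
import cmath

def search_k(metric, mags, phases):
    """Coarse search over magnitude and phase of the complex constant k
    (as MEDOF does when mapping the electric field to filter
    transmittance), keeping the k whose metric value is largest."""
    best_k, best_v = None, float("-inf")
    for m in mags:
        for p in phases:
            k = cmath.rect(m, p)   # k = m * exp(i p)
            v = metric(k)
            if v > best_v:
                best_k, best_v = k, v
    return best_k, best_v
```

A finer search around the returned k, as the description mentions, would simply call this again with narrower magnitude and phase ranges.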

  18. Optimal time step for incompressible SPH

    NASA Astrophysics Data System (ADS)

    Violeau, Damien; Leroy, Agnès

    2015-05-01

    A classical incompressible algorithm for Smoothed Particle Hydrodynamics (ISPH) is analyzed in terms of critical time step for numerical stability. For this purpose, a theoretical linear stability analysis is conducted for unbounded homogeneous flows, leading to an analytical formula for the maximum CFL (Courant-Friedrichs-Lewy) number as a function of the Fourier number. This gives the maximum time step as a function of the fluid viscosity, the flow velocity scale and the SPH discretization size (kernel standard deviation). Importantly, the maximum CFL number at large Reynolds number is half that of the traditional Weakly Compressible (WCSPH) approach. As a consequence, the optimal time step for ISPH is only five times larger than with WCSPH. The theory agrees very well with numerical data for two usual kernels in a 2-D periodic flow. On the other hand, numerical experiments in a plane Poiseuille flow show that the theory overestimates the maximum allowed time step for small Reynolds numbers.
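The structure of the result, an advective (CFL) limit alongside a viscous (Fourier) limit, can be expressed as a time-step bound. The limit constants below are illustrative placeholders, not the values derived in the paper.

```python
def isph_max_dt(h, u_max, nu, cfl_max=0.25, fourier_max=0.125):
    """Maximum stable time step as the smaller of an advective (CFL)
    limit, dt <= cfl_max * h / u_max, and a viscous (Fourier) limit,
    dt <= fourier_max * h**2 / nu, for discretization size h,
    velocity scale u_max and kinematic viscosity nu."""
    dt_cfl = cfl_max * h / u_max if u_max > 0 else float("inf")
    dt_visc = fourier_max * h * h / nu if nu > 0 else float("inf")
    return min(dt_cfl, dt_visc)
```

At large Reynolds number the advective limit dominates, which is where the paper finds the factor-of-two gap between ISPH and WCSPH CFL numbers.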

  19. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
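The separable recursive implementation can be sketched with a first-order causal plus anti-causal pass applied per axis; this is a generic IIR smoother illustrating the constant-cost-per-pixel property, not the optimized edge-detection kernel itself.

```python
def recursive_smooth_1d(row, a):
    """Causal + anti-causal first-order recursive (IIR) pass, the
    building block of separable recursive filtering: cost per sample is
    constant regardless of the effective filter width."""
    fwd, y = [], row[0]
    for x in row:
        y = a * y + (1 - a) * x
        fwd.append(y)
    out, y = [], fwd[-1]
    for x in reversed(fwd):
        y = a * y + (1 - a) * x
        out.append(y)
    return out[::-1]

def recursive_smooth_2d(img, a=0.5):
    """Separable implementation: filter every row, then every column,
    exactly as the recursive edge-detection filter factors into two
    1-D passes along orthogonal directions."""
    rows = [recursive_smooth_1d(r, a) for r in img]
    cols = [recursive_smooth_1d(c, a) for c in map(list, zip(*rows))]
    return [list(r) for r in zip(*cols)]
```

The parameter a plays the role of the operator size: larger a widens the smoothing without changing the execution time, which is the property the abstract highlights for real-time hardware.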

  20. Optimal digital filtering for tremor suppression.

    PubMed

    Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R

    2000-05-01

    Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor on a computer screen). When human movements are distorted, for instance by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com. PMID:10851810

  1. GNSS data filtering optimization for ionospheric observation

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.

    2015-12-01

    In recent years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS receivers, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are usually filtered out, to remove possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. To tackle the non-ionospheric effects induced by multipath at ground level, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), that aims to exclude multipath sources of non-ionospheric origin and thereby improve the quality of the information obtained from the GNSS signal at a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia in Rome. The results of the exercise show that, in the considered case of a noisy site under quiet ionospheric conditions, the SOLIDIFY optimization maximizes the quality, rather than the quantity, of the data.
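
    The conventional elevation-angle cut that SOLIDIFY aims to improve on can be sketched in a few lines. The record fields (`prn`, `elev`) are illustrative, not from the paper.

```python
# Minimal sketch of the conventional elevation-angle mask: observations
# below a fixed elevation threshold are dropped to limit multipath from
# the local environment. Field names are illustrative placeholders.

def elevation_mask(observations, threshold_deg=20.0):
    """Keep only observations at or above the elevation threshold (degrees)."""
    return [obs for obs in observations if obs["elev"] >= threshold_deg]

obs = [{"prn": 1, "elev": 12.0},
       {"prn": 2, "elev": 35.0},
       {"prn": 3, "elev": 21.5}]
kept = elevation_mask(obs)   # drops the PRN 1 observation at 12 degrees
```

The abstract's point is that this single fixed threshold is a blunt instrument: SOLIDIFY instead uses signal quality statistics to decide, per observation, whether multipath of non-ionospheric origin is present.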

  2. Program Computes SLM Inputs To Implement Optimal Filters

    NASA Technical Reports Server (NTRS)

    Barton, R. Shane; Juday, Richard D.; Alvarez, Jennifer L.

    1995-01-01

    Minimum Euclidean Distance Optimal Filter (MEDOF) program generates filters for use in optical correlators. Analytically optimizes filters on arbitrary spatial light modulators (SLMs) of such types as coupled, binary, fully complex, and fractional-2pi-phase. Written in C language.

  3. Optimal edge filters explain human blur detection.

    PubMed

    McIlhagga, William H; May, Keith A

    2012-01-01

    Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222

  4. Stepped Impedance Resonators in Triple Band Bandpass Filter Design for Wireless Communication Systems

    SciTech Connect

    Eroglu, Abdullah

    2010-01-01

    A triple-band microstrip tri-section bandpass filter using stepped impedance resonators (SIRs) is designed, simulated, built, and measured using a hairpin structure. The complete design procedure is given from the analytical stage to the implementation stage. The coupling between SIRs is investigated in detail for the first time by studying its effect on the filter characteristics, including bandwidth and attenuation, to optimize the filter performance. The simulation of the filter is performed using a method-of-moments-based 2.5D planar electromagnetic simulator. The filter is then implemented on RO4003 material and measured. The simulated and measured results are compared and found to be very close. The effect of coupling on the filter performance is then investigated using the electromagnetic simulator. It is shown that the coupling effect between SIRs can be used as a design knob to obtain a bandpass filter with better performance for the desired frequency band using the proposed filter topology. The results of this work can be used in wireless communication systems where multiple frequency bands are needed.

  5. Metal finishing wastewater pressure filter optimization

    SciTech Connect

    Norford, S.W.; Diener, G.A.; Martin, H.L.

    1992-01-01

    The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated from the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal. Of the stored waste, 85% is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.

  7. Variable-step-size LMS adaptive filter for digital chromatic dispersion compensation in PDM-QPSK coherent transmission system

    NASA Astrophysics Data System (ADS)

    Xu, Tianhua; Jacobsen, Gunnar; Popov, Sergei; Li, Jie; Wang, Ke; Friberg, Ari T.

    2009-11-01

    High-bit-rate optical communication systems pose the challenge of tolerance to linear and nonlinear fiber impairments. Digital filters in coherent optical receivers can be used to mitigate the chromatic dispersion in the optical transmission system entirely. In this paper, a least mean square adaptive filter has been developed for chromatic dispersion equalization in a 112-Gbit/s polarization division multiplexed quadrature phase shift keying coherent optical transmission system established on the VPIphotonics simulation platform. It is found that chromatic dispersion equalization performs better when a smaller step size is used. However, a smaller step size in the least mean square filter leads to a slower iterative operation to achieve guaranteed convergence. To resolve this trade-off, an adaptive filter employing a variable-step-size least mean square algorithm is proposed to compensate the chromatic dispersion in the 112-Gbit/s coherent communication system. The variable-step-size least mean square filter strikes a compromise between the chromatic dispersion equalization performance and the algorithm convergence speed. Meanwhile, the required tap number and the converged tap-weight distribution of the variable-step-size least mean square filter for a given fiber chromatic dispersion are analyzed and discussed.

  8. Multispectral image denoising with optimized vector bilateral filter.

    PubMed

    Peng, Honghong; Rao, Raghuveer; Dianat, Sohail A

    2014-01-01

    Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios (SNRs). Typical vector bilateral filtering described in the literature does not use parameters satisfying optimality criteria. We introduce an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing Stein's unbiased risk estimate of this nonlinear estimator. Along the way, we provide a plausibility argument through an analytical example as to why vector bilateral filtering outperforms bandwise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared with several other approaches. PMID:24184727

  9. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric phase filtering in the frequency domain. Its main parameter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
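
    The baseline filter being tuned can be sketched on a single interferogram patch: the patch spectrum is weighted by its own (normalized) magnitude raised to the power alpha. This is only the basic Goldstein operation; the paper's contributions (bias-corrected indicators, the nonlinear alpha model, homogeneous pixel selection, iteration) are not reproduced here.

```python
import numpy as np

# Hedged sketch of the basic Goldstein filter on one complex interferogram
# patch. alpha = 0 leaves the patch unchanged; larger alpha emphasizes the
# dominant spectral components and suppresses phase noise.

def goldstein_patch(ifg_patch, alpha=0.5):
    """Filter a complex interferogram patch in the frequency domain."""
    spec = np.fft.fft2(ifg_patch)
    weight = np.abs(spec)
    weight = weight / weight.max()       # normalize so alpha = 0 is identity
    return np.fft.ifft2(spec * weight ** alpha)
```

In practice the filter is applied on overlapping patches with tapered windows; the adaptive variants discussed in the abstract choose alpha per patch.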

  10. Optimal design of AC filter circuits in HVDC converter stations

    SciTech Connect

    Saied, M.M.; Khader, S.A.

    1995-12-31

    This paper investigates the reactive power as well as the harmonic conditions on both the valve and the AC-network sides of a HVDC converter station. The effect of the AC filter circuits is accurately modeled. The program is then augmented by adding an optimization routine. It can identify the optimal filter configuration, yielding the minimum current distortion factor at the AC network terminals for a prespecified fundamental reactive power to be provided by the filter. Several parameter studies were also conducted to illustrate the effect of accidental or intentional deletion of one of the filter branches.

  11. Optimal Filter Systems for Photometric Redshift Estimation

    NASA Astrophysics Data System (ADS)

    Benítez, N.; Moles, M.; Aguerri, J. A. L.; Alfaro, E.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; Fernández-Soto, A.; González Delgado, R. M.; Infante, L.; Márquez, I.; Martínez, V. J.; Masegosa, J.; Del Olmo, A.; Perea, J.; Prada, F.; Quintana, J. M.; Sánchez, S. F.

    2009-02-01

    In the coming years, several cosmological surveys will rely on imaging data to estimate the redshift of galaxies, using traditional filter systems with 4-5 optical broad bands; narrower filters improve the spectral resolution, but strongly reduce the total system throughput. We explore how photometric redshift performance depends on the number of filters nf , characterizing the survey depth by the fraction of galaxies with unambiguous redshift estimates. For a combination of total exposure time and telescope imaging area of 270 hr m², 4-5 filter systems perform significantly worse, both in completeness depth and precision, than systems with nf ≳ 8 filters. Our results suggest that for low nf the color-redshift degeneracies overwhelm the improvements in photometric depth, and that even at higher nf the effective photometric redshift depth decreases much more slowly with filter width than naively expected from the reduction in the signal-to-noise ratio. Adding near-IR observations improves the performance of low-nf systems, but still the system which maximizes the photometric redshift completeness is formed by nine filters with logarithmically increasing bandwidth (constant resolution) and half-band overlap, reaching ~0.7 mag deeper, with 10% better redshift precision, than 4-5 filter systems. A system with 20 constant-width, nonoverlapping filters reaches only ~0.1 mag shallower than 4-5 filter systems, but has a precision almost three times better, δz = 0.014(1 + z) versus δz = 0.042(1 + z). We briefly discuss a practical implementation of such a photometric system: the ALHAMBRA Survey.

  12. Optimal filter bandwidth for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
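
    The experiment's core finding can be illustrated numerically: a low-pass corner just above the heart-rate fundamental removes the harmonics while keeping the component the R-ratio computation needs. The frequencies, filter order, and signal model below are illustrative, not the paper's hardware values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hedged sketch: low-pass filter a synthetic pulse waveform (fundamental
# plus one harmonic) with a corner frequency just above the fundamental,
# then measure how much of each component survives.

fs = 100.0                               # sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
f0 = 1.2                                 # pulse fundamental (72 bpm)
pulse = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 3 * f0 * t)

b, a = butter(4, 1.5, btype="low", fs=fs)    # corner just above f0
filtered = filtfilt(b, a, pulse)             # zero-phase low-pass

def tone_amplitude(sig, f):
    """Amplitude of the component at frequency f (Hz) via projection."""
    return 2.0 * abs(np.dot(sig, np.exp(-2j * np.pi * f * t))) / len(t)
```

The fundamental passes nearly intact while the third harmonic is almost eliminated, consistent with the abstract's conclusion that the harmonics do not contribute to oximetry accuracy.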

  13. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
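
    The information-form combination step has a particularly simple structure: each model contributes an information matrix and an information vector, these add across models, and the fused estimate is recovered at the end. The sketch below shows that fusion rule in isolation, with toy values; it is not the paper's geomagnetic model.

```python
import numpy as np

# Hedged sketch of the information-filter fusion used to combine the five
# epoch models: information matrices and information vectors simply add.

def fuse_information(models):
    """Combine (information_matrix, information_vector) pairs into x = Y^-1 y."""
    Y = sum(m[0] for m in models)       # information matrices add
    y = sum(m[1] for m in models)       # information vectors add
    return np.linalg.solve(Y, y)        # fused estimate

# Two scalar "models": x measured as 2 and as 4, each with unit variance.
models = [(np.array([[1.0]]), np.array([2.0])),
          (np.array([[1.0]]), np.array([4.0]))]
x_fused = fuse_information(models)      # information-weighted average
```

Working in information form avoids inverting large covariance matrices at each combination step, which is why it suits merging several independently estimated models.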

  14. Initial steps of inactivation at the K+ channel selectivity filter.

    PubMed

    Thomson, Andrew S; Heer, Florian T; Smith, Frank J; Hendron, Eunan; Bernèche, Simon; Rothberg, Brad S

    2014-04-29

    K(+) efflux through K(+) channels can be controlled by C-type inactivation, which is thought to arise from a conformational change near the channel's selectivity filter. Inactivation is modulated by ion binding near the selectivity filter; however, the molecular forces that initiate inactivation remain unclear. We probe these driving forces by electrophysiology and molecular simulation of MthK, a prototypical K(+) channel. Either Mg(2+) or Ca(2+) can reduce K(+) efflux through MthK channels. However, Ca(2+), but not Mg(2+), can enhance entry to the inactivated state. Molecular simulations illustrate that, in the MthK pore, Ca(2+) ions can partially dehydrate, enabling selective accessibility of Ca(2+) to a site at the entry to the selectivity filter. Ca(2+) binding at the site interacts with K(+) ions in the selectivity filter, facilitating a conformational change within the filter and subsequent inactivation. These results support an ionic mechanism that precedes changes in channel conformation to initiate inactivation. PMID:24733889

  16. Time-domain split-step method with variable step-sizes in vectorial pulse propagation by using digital filters

    NASA Astrophysics Data System (ADS)

    Farhoudi, R.; Mehrany, K.

    2010-06-01

    Finite impulse response (FIR) and infinite impulse response (IIR) digital filters are proposed to allow for time-domain simulation of optical pulse propagation by using the operator-splitting technique. These filters simulate polarization mode dispersion and chromatic dispersion effects with acceptable accuracy in time-domain. An analytical relation between the coefficients of these filters and the simulation step-size is established to accommodate the possibility of carrying out the time-domain split-step method with variable split-step length at virtually no computational burden. The superiority of the proposed method over the conventional frequency-domain technique is particularly demonstrated in wavelength-division multiplexing (WDM) applications.

  17. Optimization of 2D median filtering algorithm for VLIW architecture

    NASA Astrophysics Data System (ADS)

    Choo, Chang Y.; Tang, Ming

    1999-12-01

    Recently, several commercial DSP processors with VLIW (Very Long Instruction Word) architecture were introduced. The VLIW architectures offer high performance over a wide range of multimedia applications that require parallel processing. In this paper, we implement an efficient 2D median filter for VLIW architecture, particularly the Texas Instruments C62x VLIW architecture. The median filter is widely used for filtering impulse noise while preserving edges in still images and video. Efficient median filtering requires fast sorting. The sorting algorithms were optimized using software pipelining and loop unrolling to maximize the use of the available functional units while meeting the data dependency constraints. The paper describes and lists the optimized source code for the 3 × 3 median filter using an enhanced selection sort algorithm.
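
    The "enhanced selection sort" idea rests on a simple observation: for a 3 × 3 window, only the fifth order statistic is needed, so the selection sort can stop halfway instead of fully sorting all nine values. The sketch below shows that partial sort and a reference (non-VLIW-optimized) filter built on it.

```python
# Hedged sketch of the partial selection-sort median: placing the first
# five order statistics of nine values is enough to obtain the median.

def median9(window):
    """Median of a 9-element sequence by partial selection sort."""
    v = list(window)
    for i in range(5):                  # 5th smallest of 9 is the median
        m = i
        for j in range(i + 1, 9):
            if v[j] < v[m]:
                m = j
        v[i], v[m] = v[m], v[i]
    return v[4]

def median_filter_3x3(img):
    """Apply median9 over interior pixels of a 2-D list image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]       # borders copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median9([img[y + dy][x + dx]
                                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    return out
```

On a VLIW target the inner comparisons are what get software-pipelined and unrolled; the algorithmic saving here is stopping the sort at the median.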

  18. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  19. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
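
    The passband droop and its repair can be checked numerically. The sketch below evaluates a comb (CIC-type) magnitude response and applies the classical Kaiser-Hamming sharpening polynomial 3H² - 2H³ for comparison; the paper's optimization-derived polynomials and three-addition compensator are not reproduced here, and the decimation factor is illustrative.

```python
import numpy as np

# Hedged sketch: comb-filter magnitude response and classical sharpening.
# The exact comb response is sin(pi*M*f) / (M*sin(pi*f)), which equals
# sinc(M*f)/sinc(f) with numpy's normalized sinc.

def comb_response(f, M=16):
    """Normalized magnitude of an M-tap comb filter, f in cycles/sample."""
    return np.abs(np.sinc(M * f) / np.sinc(f))

f = np.linspace(1e-4, 0.02, 50)        # passband region before decimation
H = comb_response(f)
H_sharp = 3 * H**2 - 2 * H**3          # Kaiser-Hamming sharpened response
```

At the passband edge the sharpened response sits much closer to unity, which is the droop reduction the abstract's optimized polynomials push further at lower cost.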

  20. State-space realizations of fractional-step delay digital filters with applications to array beamforming

    NASA Astrophysics Data System (ADS)

    Leung, S.-H.; Barnes, C. W.

    1984-04-01

    An approach to the design of fractional-step delay (FSD) digital filters, based on a state-space formulation applicable to either finite impulse response (FIR) or infinite impulse response (IIR) filters, is presented. FSD filters are single-rate, do not require sample-rate changes, and are based on an offset impulse-invariant transformation of an interpolating filter design. Using FIR or IIR FSD filters for beamforming introduces spurious sidelobes into the array's spatial beam pattern, but an appropriate design of the FSD filter's magnitude response can suppress them. Since the nonlinear phase characteristics of FSD filters do not influence the spatial response of the array, computational efficiency and constraints on temporal phase distortion determine the choice of FIR or IIR implementation. It is also found that the FIR implementation is more efficient for FSD filters derived from interpolating filters with low transition ratios.
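
    The basic object here, a filter that delays a sampled signal by a non-integer number of samples, can be sketched with a windowed-sinc FIR design. This is a generic fractional-delay construction, not the paper's state-space offset impulse-invariant method; tap count and window choice are illustrative.

```python
import numpy as np

# Hedged sketch of an FIR fractional-step delay filter: a shifted ideal
# delay response (sinc) tapered by a Hamming window and normalized to
# unity gain at DC.

def fsd_fir(delay, n_taps=21):
    """FIR coefficients approximating a delay of `delay` samples."""
    n = np.arange(n_taps)
    center = (n_taps - 1) / 2.0
    h = np.sinc(n - center - delay)     # shifted ideal delay response
    h *= np.hamming(n_taps)             # taper to control sidelobes
    return h / h.sum()                  # unity gain at DC

# Delay a slow sinusoid by half a sample; with mode="same" the residual
# group delay of the filter is just the fractional part.
t = np.arange(200)
x = np.sin(2 * np.pi * 0.01 * t)
h = fsd_fir(0.5)
y = np.convolve(x, h, mode="same")      # y[n] approximates x[n - 0.5]
```

In a beamformer, one such filter per element implements the sub-sample steering delays; the magnitude-response shaping the abstract discusses controls the spurious spatial sidelobes.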

  1. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. The algorithm can also be extended to other Kalman filters for measurement subspace selection.
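
    One way to see the generalized-eigenvalue idea: directions where the measurement information is large relative to the prior uncertainty are the most valuable for the update. The sketch below ranks directions by solving a generalized eigenproblem; the matrices and the precise pairing are illustrative, and the paper's ensemble formulation differs in detail.

```python
import numpy as np
from scipy.linalg import eigh

# Hedged sketch: rank state-space directions by measurement information
# (H' R^-1 H) relative to prior information (P^-1) via a generalized
# symmetric eigenproblem, and keep the top-k directions.

def informative_subspace(H, R, P, k):
    """Top-k generalized eigenpairs of (H' R^-1 H, P^-1)."""
    A = H.T @ np.linalg.inv(R) @ H      # measurement information matrix
    B = np.linalg.inv(P)                # prior information matrix
    w, V = eigh(A, B)                   # eigenvalues in ascending order
    return w[::-1][:k], V[:, ::-1][:, :k]
```

Truncating to the leading eigen-directions is what lets a large measurement set be compressed before the (ensemble) Kalman update.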

  2. Design of optimal correlation filters for hybrid vision systems

    NASA Astrophysics Data System (ADS)

    Rajan, Periasamy K.

    1990-12-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.

  3. Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation

    PubMed Central

    Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

    2013-01-01

    This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

  5. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
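
    The idea of parameterizing a recursive filter directly in the z-plane can be sketched as follows: place zeros on the unit circle where attenuation is wanted, keep poles strictly inside the unit circle so the filter is stable, and evaluate the response on the unit circle. This is a simplified illustration of the parameterization, not the paper's optimization; the pole and zero locations below are arbitrary.

```python
import cmath, math

def freq_response(zeros, poles, gain, omega):
    """Evaluate H(z) = gain * prod(z - z_i) / prod(z - p_i) at z = exp(j*omega)."""
    z = cmath.exp(1j * omega)
    num, den = gain, 1.0
    for zr in zeros:
        num *= (z - zr)
    for pl in poles:
        den *= (z - pl)
    return num / den

# Hypothetical 2nd-order low-pass: both zeros at z = -1 (maximum attenuation
# at the Nyquist frequency), complex-conjugate poles inside the unit circle
# (which guarantees a stable recursive filter).
zeros = [-1.0, -1.0]
poles = [0.6 * cmath.exp(1j * 0.3), 0.6 * cmath.exp(-1j * 0.3)]

# Normalize the DC gain (omega = 0) to unity.
g = 1.0 / abs(freq_response(zeros, poles, 1.0, 0.0))

dc_gain = abs(freq_response(zeros, poles, g, 0.0))
high_band = abs(freq_response(zeros, poles, g, 0.9 * math.pi))
```

    An optimizer would search over such pole/zero coordinates, with the inside-the-unit-circle restriction keeping every candidate stable and minimum phase.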

  6. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  7. Ares-I Bending Filter Design using a Constrained Optimization Approach

    NASA Technical Reports Server (NTRS)

    Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth

    2008-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.

  8. Optimization of filtering schemes for broadband astro-combs.

    PubMed

    Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X

    2012-10-22

    To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error. PMID:23187265
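
    The line-spacing multiplication performed by a Fabry-Perot filter cavity can be illustrated with the standard Airy transmission function. The sketch below (pure Python, with purely illustrative numbers; it does not reproduce the paper's conjugate-cavity design) shows how a cavity whose free spectral range is a multiple of the source-comb spacing passes every Nth line while suppressing the neighbors.

```python
import math

def fp_transmission(f, fsr, finesse_coeff):
    """Airy transmission of an ideal lossless Fabry-Perot cavity:
    T(f) = 1 / (1 + F * sin^2(pi * f / FSR))."""
    return 1.0 / (1.0 + finesse_coeff * math.sin(math.pi * f / fsr) ** 2)

# Source comb with 1 GHz line spacing filtered by a cavity with a 16 GHz
# free spectral range: every 16th line survives (all numbers hypothetical).
line_spacing = 1.0      # GHz
fsr = 16.0              # GHz
F = 1000.0              # coefficient of finesse, set by mirror reflectivity

resonant = fp_transmission(0.0, fsr, F)              # a transmitted comb line
suppressed = fp_transmission(line_spacing, fsr, F)   # nearest unwanted line
```

    The finite suppression of the nearest neighbor is exactly the quantity that nonlinear phase in the downstream amplifier can degrade, which motivates the cavity-ordering analysis in the paper.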

  9. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied at the initial level to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we have identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach to estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
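
    The Gaussian pre-filtering step referred to above amounts to convolving the input with a normalized Gaussian kernel before gradients are taken. A minimal 1-D sketch (the kernel radius rule and test signal are illustrative, not the paper's estimator):

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Discrete 1-D Gaussian kernel, normalized to unit sum."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    vals = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def smooth(signal, kernel):
    """Convolve with edge replication, as a pre-filtering step before
    gradient computation."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

kern = gaussian_kernel(1.5)
noisy = [(1.0 if (i // 8) % 2 else 0.0) + 0.2 * math.sin(37.0 * i)
         for i in range(64)]
smoothed = smooth(noisy, kern)
```

    Suppressing the high-frequency component while keeping the step edges is what stabilizes the image gradients the Lucas-Kanade equations are built from.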

  10. Fuzzy two-step filter for impulse noise reduction from color images.

    PubMed

    Schulte, Stefan; De Witte, Valérie; Nachtegael, Mike; Van der Weken, Dietrich; Kerre, Etienne E

    2006-11-01

    A new framework for reducing impulse noise from digital color images is presented, in which a fuzzy detection phase is followed by an iterative fuzzy filtering technique. We call this filter the fuzzy two-step color filter. The fuzzy detection method is mainly based on the calculation of fuzzy gradient values and on fuzzy reasoning. This phase determines three separate membership functions that are passed to the filtering step. These membership functions will be used as a representation of the fuzzy set impulse noise (one function for each color component). Our proposed new fuzzy method is especially developed for reducing impulse noise from color images while preserving details and texture. Experiments show that the proposed filter can be used for efficient removal of impulse noise from color images without distorting the useful information in the image. PMID:17076414

  11. Sub-Optimal Ensemble Filters and distributed hydrologic modeling: a new challenge in flood forecasting

    NASA Astrophysics Data System (ADS)

    Baroncini, F.; Castelli, F.

    2009-09-01

    Data assimilation techniques based on ensemble filtering are widely regarded as the best approach to solving forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas used in each time step of the model for updating the error covariance matrix. The sub-optimal approach therefore seems to be a more suitable choice. Various sub-optimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different sub-optimal techniques. Those aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables, and high parametric uncertainty. This research study is focused on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both a water mass balance and a surface energy balance, so it is able to assimilate a wide variety of datasets, such as traditional "on ground" hydrometric measurements or land surface temperature retrievals from satellite. The experiments that we present concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low-spatial-resolution version of the hydrologic model (3.2 km). A new Kalman-filter-based algorithm is presented that tries to address the main challenges of hydrological modeling uncertainty.
    In the forecast step, the proposed filter uses a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach with propagation of both deterministic and stochastic ensembles to improve robustness and convergence properties. Afterwards, through a P.O.D. reduction from control theory, we compute a reduced-order forecast covariance matrix. In the analysis step the filter uses a Local Ensemble (LE) Kalman Filter approach. We modify the LE Kalman Filter assimilation scheme and adapt its formulation to the P.O.D.-reduced subspace propagated in the forecast step. In this way, observations are assimilated only along the maximum-covariance directions of the model error. The efficiency of this technique is then weighed in terms of hydrometric forecast accuracy in a preliminary convergence test of a synthetic rainfall event against a real rainfall event.
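
    As a minimal, hypothetical illustration of the ensemble analysis step that such filters build on (the plain stochastic EnKF for a directly observed scalar state, not the COFFEE/P.O.D. scheme described above; all numbers are invented):

```python
import random

random.seed(2)

def enkf_analysis(ensemble, y_obs, obs_var):
    """Stochastic EnKF analysis step for a scalar, directly observed state:
    Kalman gain from the ensemble variance, with perturbed observations."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    k = var / (var + obs_var)   # ensemble-estimated Kalman gain
    return [x + k * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

# Forecast ensemble of, say, basin discharge; assimilate one gauge reading.
forecast = [random.gauss(50.0, 10.0) for _ in range(200)]
analysis = enkf_analysis(forecast, y_obs=62.0, obs_var=4.0)
```

    The cost that motivates sub-optimal schemes is visible here: the gain estimate needs the whole ensemble, so high-dimensional states with many replicas make the covariance update the bottleneck.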

  12. Optimal filtering in multipulse sequences for nuclear quadrupole resonance detection

    NASA Astrophysics Data System (ADS)

    Osokin, D. Ya.; Khusnutdinov, R. R.; Mozzhukhin, G. V.; Rameev, B. Z.

    2014-05-01

    The application of multipulse sequences in nuclear quadrupole resonance (NQR) detection of explosive and narcotic substances has been studied. Various approaches to increasing the signal-to-noise ratio (SNR) of signal detection are considered. We discuss two modifications of the phase-alternated multiple-pulse sequence (PAMS): the 180° pulse sequence with a preparatory pulse and the 90° pulse sequence. The advantages of optimal filtering for detecting NQR in the case of coherent steady-state precession have been analyzed. It has been shown that this technique is effective in filtering high-frequency and low-frequency noise and in increasing the reliability of NQR detection. Our analysis also shows that PAMS with 180° pulses is more effective than the PSL sequence from the point of view of applying the optimal filtering procedure to the steady-state NQR signal.
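
    Optimal filtering of a known steady-state response is, at heart, matched filtering: correlate the record against the expected waveform so that white noise averages out. The toy detector below is only a sketch of that principle; the template, noise level and lag are invented numbers, not NQR parameters.

```python
import math, random

def matched_filter(signal, template):
    """Cross-correlate the received signal with the known template; the lag
    of the correlation peak indicates where the template is buried."""
    n, m = len(signal), len(template)
    return [sum(signal[lag + i] * template[i] for i in range(m))
            for lag in range(n - m + 1)]

random.seed(7)
# Hypothetical decaying oscillatory template standing in for a known response.
template = [math.sin(2 * math.pi * i / 8) * math.exp(-i / 24.0) for i in range(48)]
true_lag = 100
signal = [0.2 * random.gauss(0.0, 1.0) for _ in range(256)]   # background noise
for i, t in enumerate(template):
    signal[true_lag + i] += t                                 # embed the echo
scores = matched_filter(signal, template)
est_lag = max(range(len(scores)), key=lambda k: scores[k])
```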

  13. Optimal Correlation Filters for Images with Signal-Dependent Noise

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Walkup, John F.

    1994-01-01

    We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
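
    The second approach above, transforming signal-dependent noise into approximately signal-independent noise before classical matched filtering, can be demonstrated with multiplicative speckle-like noise, for which a log transform makes the noise additive. The noise model and numbers below are illustrative only.

```python
import math, random

random.seed(3)

def speckle_observe(intensity):
    """Multiplicative speckle model: observed = intensity * n, with n > 0.
    The noise spread scales with the signal, i.e. it is signal-dependent."""
    return intensity * random.lognormvariate(0.0, 0.3)

# Two flat image regions of different brightness.
dark = [speckle_observe(10.0) for _ in range(4000)]
bright = [speckle_observe(100.0) for _ in range(4000)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Before the log transform the noise variance scales with intensity squared;
# after it, the variance is approximately equal in both regions, so a
# classical matched filter for additive noise becomes applicable.
ratio_raw = variance(bright) / variance(dark)
ratio_log = (variance([math.log(x) for x in bright])
             / variance([math.log(x) for x in dark]))
```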

  14. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study of the transmission properties of a sodium-vapour-based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state, are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  15. Optimization steps in a cuneiform inscription characterization process

    NASA Astrophysics Data System (ADS)

    Demoli, Nazif; Dahms, Uwe; Gruber, Hartmut; Wernicke, Guenther K.

    1996-12-01

    Recently, an investigation of using holographically based techniques for the cuneiform inscription characterization has been reported in several publications. This paper provides an overview of the development of the experimental systems and techniques. Particularly, we describe the main optimization steps as well as the selected correlation results, and the general frame of the future work.

  16. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
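
    The iterative two-step procedure (a closed-form least-squares amplitude solve inside a scan over candidate delays) can be sketched for the simplified single-echo case; the frequencies, delay and amplitude below are hypothetical.

```python
import cmath, math

freqs = [1.0 + 0.05 * k for k in range(64)]   # stepped CW frequencies (GHz)
true_delay, true_amp = 2.4, 0.8               # echo delay (ns) and amplitude

# Noise-free frequency-domain measurements of a single echo.
y = [true_amp * cmath.exp(-2j * math.pi * f * true_delay) for f in freqs]

def residual(delay):
    """Step 1: closed-form least-squares amplitude for a candidate delay.
    Step 2 (the outer loop): scan this residual over a grid of delays."""
    e = [cmath.exp(-2j * math.pi * f * delay) for f in freqs]
    a = sum(ek.conjugate() * yk for ek, yk in zip(e, y)) / len(freqs)
    return sum(abs(yk - a * ek) ** 2 for ek, yk in zip(e, y))

grid = [i * 0.01 for i in range(500)]         # candidate delays, 0 to 5 ns
est_delay = min(grid, key=residual)
```

    With multiple echoes, step 1 becomes an overdetermined complex least-squares solve for all amplitudes at once, and step 2 scans the delay combinations, which is where the resolution gain over a plain IFFT comes from.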

  18. Degeneracy, frequency response and filtering in IMRT optimization.

    PubMed

    Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D; Promberger, Claus

    2004-07-01

    This paper attempts to provide an answer to some questions that remain either poorly understood or not well documented in the literature on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and the frequency response of optimizations; the effects of initial beamlet fluence assignment and stopping point; what filtering of an optimized beamlet map actually does; and how image analysis could help to obtain better optimizations. Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used, and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed, and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques. PMID:15285252

  19. Optimal color image restoration: Wiener filter and quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, we consider the model of quaternion signal degradation in which the signal is convolved and an additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative. The quaternion Fourier transform does not map convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with an additive independent noise and the optimal filtering of signals and images in the frequency domain and in the quaternion space.
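
    For ordinary (commutative) complex signals, the frequency-domain Wiener filter takes the familiar form G = H* / (|H|² + P_n/P_s), which is the baseline the quaternion analysis generalizes. A self-contained sketch with a toy DFT, assuming for illustration that the signal power spectrum is known:

```python
import cmath, math, random

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform, sufficient for a small sketch."""
    n = len(x)
    sign = 1j if inverse else -1j
    out = [sum(x[t] * cmath.exp(sign * 2 * math.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

random.seed(5)
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
blur = [math.exp(-t / 2.0) for t in range(5)] + [0.0] * (n - 5)
noise_var = 0.01

# Degradation model: circular convolution with the blur, plus additive noise.
S, H = dft(signal), dft(blur)
degraded = [v.real + random.gauss(0.0, math.sqrt(noise_var))
            for v in dft([s * h for s, h in zip(S, H)], inverse=True)]

# Wiener filter: G = H* / (|H|^2 + noise-to-signal power ratio per bin).
Y = dft(degraded)
G = [h.conjugate() / (abs(h) ** 2 + noise_var * n / max(abs(s) ** 2, 1e-12))
     for h, s in zip(H, S)]
restored = [v.real for v in dft([g * yk for g, yk in zip(G, Y)], inverse=True)]
```

    The non-commutativity discussed in the paper breaks precisely the step where convolution turns into the per-bin product s * h above.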

  20. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ΔE error metric and a qualitative assessment. PMID:20519156

  1. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.

  2. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
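
    The weight-variance claim can be checked numerically in the simplest scalar linear-Gaussian case, where the optimal proposal is available in closed form. The sketch below (with illustrative parameters) compares the importance-weight variance under the prior ("bootstrap") proposal and under the optimal proposal:

```python
import math, random

random.seed(11)
a, q, r = 0.9, 1.0, 0.25   # model: x' = a*x + N(0, q),  y = x' + N(0, r)

xs = [random.gauss(0.0, 1.0) for _ in range(5000)]   # particles at time t_k
x_true = 0.5
y = a * x_true + random.gauss(0.0, math.sqrt(q)) + random.gauss(0.0, math.sqrt(r))

def normal_pdf(v, mean, var):
    return math.exp(-0.5 * (v - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

# Prior ("bootstrap") proposal: sample the dynamics, weight by the likelihood.
w_prior = []
for x in xs:
    xp = a * x + random.gauss(0.0, math.sqrt(q))
    w_prior.append(normal_pdf(y, xp, r))

# Optimal proposal p(x'|x, y): the weight depends on x only through
# p(y|x) = N(a*x, q + r), so it varies much less across particles.
w_opt = [normal_pdf(y, a * x, q + r) for x in xs]

def norm_weight_var(w):
    s = sum(w)
    wn = [wi / s for wi in w]
    m = 1.0 / len(wn)
    return sum((wi - m) ** 2 for wi in wn)

var_prior, var_opt = norm_weight_var(w_prior), norm_weight_var(w_opt)
```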

  3. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data. PMID:26849867

  4. Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2008-01-01

    The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained including documentation available, potential partner collaborations, and next steps in FY09.

  5. Particulate Flow over a Backward Facing Step Preceding a Filter Medium

    NASA Astrophysics Data System (ADS)

    Chambers, Frank; Ravi, Krishna

    2010-11-01

    Computational Fluid Dynamic predictions were performed for particulate flows over a backward facing step with and without a filter downstream. The carrier phase was air and the monodisperse particles were dust with diameters of 1 to 50 microns. The step expansion ratio was 2:1, and the filter was located at 4.25 and 6.75 step heights downstream. Computations were performed for Reynolds numbers of 6550 and 10000. The carrier phase turbulence was modeled using the k-epsilon RNG model. The particles were modeled using a discrete phase model and particle dispersion was modeled using stochastic tracking. The filter was modeled as a porous medium, and the porous jump boundary condition was used. The particle boundary condition applied at the walls was "reflect" and at the filter was "trap." The presence of the porous medium showed a profound effect on the recirculation zone length, velocity profiles, and particle trajectories. The velocity profiles were compared to experiments. As particle size increased, the number of particles entering the recirculation zone decreased. The filter at the farther downstream location promoted more particles becoming trapped in the recirculation zone.

  6. Millimeter-wave GaAs stepped-impedance hairpin resonator filters using surface micromachining

    NASA Astrophysics Data System (ADS)

    Cho, Ju-Hyun; Yun, Tae-Soon; Baek, Tae-Jong; Ko, Back-Seok; Shin, Dong-Hoon; Lee, Jong-Chul

    2005-05-01

    In this paper, a microstrip stepped-impedance hairpin resonator (SIR) low-pass filter (LPF) and a slow-wave band-pass filter (BPF) using dielectric-supported air-gapped microstrip line (DAML) surface micromachining on a GaAs substrate are proposed. The DAML structure, which is a new low-loss micromachined transmission line, is useful for the integration of MEMS and/or MMIC components. Design parameters for the proposed SIR low-pass and slow-wave band-pass filters are derived based on stepped-impedance theory. The proposed slow-wave BPF is designed to produce a passband of 10% at the fundamental frequency of 60 GHz, and a new SIR LPF with aperture and IDC (inter-digital capacitor) is designed for a 3-dB cutoff frequency of 33 GHz. The measurement results of the BPF and LPF agree well with the simulation results. These filters are useful for many millimeter-wave system applications.

  7. A multi-dimensional procedure for BNCT filter optimization

    SciTech Connect

    Lille, R.A.

    1998-02-01

    An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is very straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A local optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model, and an improvement of 50% in the dose ratio was obtained.
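    The density update described above amounts to projected gradient ascent on the dose ratio, subject to per-nuclide bounds and the simplex constraint on density fractions. The sketch below is illustrative only: the quadratic "dose ratio" and its gradient are toy stand-ins for values the real code would obtain from two-dimensional radiation transport runs, and all names are hypothetical.

```python
# Projected gradient ascent sketch of the BNCT density-update rule.
# The objective and gradient are analytic stand-ins; in practice they
# would come from repeated 2-D radiation transport calculations.

def project(x, lo=0.0, hi=1.0):
    """Clip each density fraction to its bounds, then rescale if the
    sum of fractions in the zone exceeds 1.0."""
    x = [min(hi, max(lo, v)) for v in x]
    s = sum(x)
    if s > 1.0:
        x = [v / s for v in x]
    return x

def grad_ascent(grad, x0, step=0.05, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = project([xi + step * gi for xi, gi in zip(x, g)])
    return x

# toy "dose ratio" gradient peaking at fractions (0.6, 0.4)
grad = lambda x: [-2 * (x[0] - 0.6), -2 * (x[1] - 0.4)]
x_opt = grad_ascent(grad, [0.2, 0.2])
```

The projection step enforces constraint (4) from the abstract; a converged point with active bounds corresponds to the local-optimality conditions (1) and (2).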

  8. Performance evaluation of iterated extended Kalman filter with variable step-length

    NASA Astrophysics Data System (ADS)

    Havlík, Jindřich; Straka, Ondřej

    2015-11-01

    The paper deals with state estimation of nonlinear stochastic dynamic systems. In particular, the iterated extended Kalman filter is studied. Three recently proposed iterated extended Kalman filter algorithms are analyzed in terms of their performance and the specification of a user design parameter, namely the step-length size. The performance is compared using the root mean square error, evaluating the state estimate, and the noncredibility index, assessing the covariance matrix of the estimate error. The performance and the influence of the design parameter are analyzed in a numerical simulation.
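    The role of the step length can be sketched with a minimal scalar iterated EKF measurement update, where a fixed alpha in (0, 1] damps each Gauss-Newton relinearization. The interface and the quadratic measurement model below are illustrative assumptions, not the algorithms analyzed in the paper.

```python
def iekf_update(x_pred, P, z, h, h_jac, R, alpha=0.5, n_iter=10):
    """Scalar iterated EKF measurement update with fixed step length.
    h is the measurement function, h_jac its derivative; alpha damps
    each relinearized Gauss-Newton step (a hypothetical interface)."""
    x = x_pred
    for _ in range(n_iter):
        H = h_jac(x)
        S = H * P * H + R
        K = P * H / S
        # relinearized innovation includes the prior-mismatch term
        x_new = x_pred + K * (z - h(x) - H * (x_pred - x))
        x = x + alpha * (x_new - x)
    P_upd = (1 - K * H) * P
    return x, P_upd

# toy example: quadratic measurement h(x) = x^2, observation z = 1.44
x, P = iekf_update(1.0, 0.5, z=1.44, h=lambda x: x * x,
                   h_jac=lambda x: 2 * x, R=0.01)
```

With alpha = 1 this reduces to the standard IEKF iteration; smaller alpha trades convergence speed for robustness on strongly nonlinear measurements.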

  9. "The Design of a Compact, Wide Spurious-Suppression Bandwidth Bandpass Filter Using Stepped Impedance Resonators"

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter length is greater than a multiple of the quarter wavelength at the center passband frequency (λg/4), the proposed filter length is less than (n + 1)·λg/8 for an nth-order design. The filter's spurious response and physical dimensions are controlled by the step impedance ratio (R) between the two transmission-line sections of each λg/4 resonator. The experimental result shows that, with R of 0.2, the out-of-band attenuation is greater than 40 dB, and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.

  10. Neuromuscular fiber segmentation through particle filtering and discrete optimization

    NASA Astrophysics Data System (ADS)

    Dietenbeck, Thomas; Varray, François; Kybic, Jan; Basset, Olivier; Cachard, Christian

    2014-03-01

    We present an algorithm to segment a set of parallel, intertwined and bifurcating fibers from 3D images, targeted for the identification of neuronal fibers in very large sets of 3D confocal microscopy images. The method consists of preprocessing, local calculation of fiber probabilities, seed detection, tracking by particle filtering, global supervised seed clustering and final voxel segmentation. The preprocessing uses a novel random local probability filtering (RLPF). The fiber probabilities computation is performed by means of SVM using steerable filters and the RLPF outputs as features. The global segmentation is solved by discrete optimization. The combination of global and local approaches makes the segmentation robust, yet the individual data blocks can be processed sequentially, limiting memory consumption. The method is automatic but allows efficient manual interaction if needed. The method is validated on the Neuromuscular Projection Fibers dataset from the Diadem Challenge. On the first 15 blocks present, our method has a 99.4% detection rate. We also compare our segmentation results to a state-of-the-art method. On average, the performance of our method is equal to or better than that of the state-of-the-art method, while fewer user interactions are needed in our approach.

  11. Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.

    PubMed

    McMinn, Brian R

    2013-11-01

    Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. PMID:23796954

  12. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low-pass, high-pass, band-pass and band-stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value representing the accommodation of the cat to the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO-based approach have been compared with those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven superior to that obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, both in convergence speed and in the optimal performance of the designed filters. PMID:23958491
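    The objective such a swarm optimizer minimizes can be sketched as the squared deviation of the designed filter's magnitude response from an ideal brick-wall low-pass response. The random perturbation loop below is a deliberately simple stand-in for CSO's seeking/tracing dynamics, and all parameter values are illustrative.

```python
import cmath, random

def freq_resp(h, w):
    """Frequency response of an FIR filter with coefficients h at w (rad)."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

def fitness(h, wc=1.0, n_grid=64):
    """Sum of squared deviations from an ideal low-pass response
    (1 in the passband, 0 in the stopband) -- the kind of objective
    a swarm optimizer such as CSO would minimize."""
    err = 0.0
    for k in range(n_grid):
        w = 3.141592653589793 * k / (n_grid - 1)
        ideal = 1.0 if w <= wc else 0.0
        err += (abs(freq_resp(h, w)) - ideal) ** 2
    return err

# stand-in for CSO: random perturbation search around a seed filter
random.seed(0)
h = [0.1] * 9
best = fitness(h)
for _ in range(500):
    cand = [c + random.gauss(0, 0.02) for c in h]
    f = fitness(cand)
    if f < best:
        h, best = cand, f
```

Any of the compared optimizers (RGA, PSO, DE, CSO) would plug into the same `fitness` function; only the candidate-generation strategy differs.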

  13. Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data

    NASA Astrophysics Data System (ADS)

    Ditmar, P.; Hashemi Farahani, H.; Klees, R.

    2011-12-01

    Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites enter occasionally a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, a routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed. 
In addition, both signal and noise covariance matrices depend on the type of mass transport processes under investigation. For instance, processes of hydrological origin occur at short time scales, so that the input time series is typically short (1 month or less), which implies a relatively strong noise in the derived model. By contrast, the study of long-term ice mass depletion requires a long time series of satellite data, which leads to a reduction of noise in the mass transport model. Of course, the spatial patterns (and therefore, the signal covariance matrices) of various mass transport processes are also very different. In the presented study, we compare various strategies to build the signal and noise covariance matrices in the context of mass transport modeling. In this way, we demonstrate the benefits of an accurate construction of an optimal filter as outlined above, compared to simplified strategies. Furthermore, we consider both models based on GRACE data alone and combined GRACE/GOCE models. In this way, we shed more light on a potential synergy of the GRACE and GOCE satellite missions. This is important not only for the best possible mass transport modeling on the basis of all available data, but also for the optimal planning of future satellite gravity missions.
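    The core Wiener idea can be sketched in the simplest diagonal case, where each coefficient of the unconstrained model is scaled by S/(S+N) with S and N the signal and noise variances. This is a minimal illustration with made-up numbers; the filter discussed above would use full (non-diagonal) signal and noise covariance matrices.

```python
# Diagonal Wiener filter sketch: each coefficient is damped by the
# signal-to-(signal+noise) ratio. A full implementation would replace
# these per-coefficient variances with dense covariance matrices.

def wiener_weights(signal_var, noise_var):
    return [s / (s + n) for s, n in zip(signal_var, noise_var)]

def apply_filter(coeffs, weights):
    return [c * w for c, w in zip(coeffs, weights)]

# toy spectrum: low-degree terms are signal-dominated, high-degree noisy
S = [100.0, 10.0, 1.0, 0.1]
N = [1.0, 1.0, 1.0, 1.0]
w = wiener_weights(S, N)
filtered = apply_filter([5.0, 5.0, 5.0, 5.0], w)
```

Signal-dominated coefficients pass nearly unchanged, while noise-dominated ones are strongly suppressed; this is why accurate S and N estimates matter.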

  14. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
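    The second stage of the approach above, selecting the best compromise from the Pareto front, can be sketched with a minimal TOPSIS implementation. The three-point front and equal weights below are hypothetical illustrations (objectives: maximum index modulation and mean dispersion error, both minimized), not the paper's data.

```python
import math

def topsis(front, weights):
    """Rank Pareto-front points (all objectives minimized) by relative
    closeness to the ideal point; returns the best compromise solution."""
    m = len(front[0])
    # vector-normalize each objective column, then apply weights
    norms = [math.sqrt(sum(p[j] ** 2 for p in front)) for j in range(m)]
    V = [[w * p[j] / norms[j] for j, w in enumerate(weights)] for p in front]
    ideal = [min(v[j] for v in V) for j in range(m)]
    worst = [max(v[j] for v in V) for j in range(m)]
    def closeness(v):
        d_pos = math.dist(v, ideal)
        d_neg = math.dist(v, worst)
        return d_neg / (d_pos + d_neg)
    scores = [closeness(v) for v in V]
    return front[scores.index(max(scores))]

# hypothetical front: (max index modulation, mean dispersion error)
front = [(1.0, 9.0), (3.0, 3.0), (9.0, 1.0)]
best = topsis(front, weights=[0.5, 0.5])
```

With equal weights the balanced point (3.0, 3.0) wins; changing `weights` encodes the decision maker's preference, as described above.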

  15. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690
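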

  17. Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates

    NASA Astrophysics Data System (ADS)

    Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.

    2015-12-01

    Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized by the optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N, and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.

  18. Optimization of the performances of correlation filters by pre-processing the input plane

    NASA Astrophysics Data System (ADS)

    Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.

    2016-01-01

    We report findings on the optimization of the performance of correlation filters. First, we propose and validate an optimization of ROC curves adapted to the correlation technique. Our analysis then suggests that pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face-recognition applications. Our results demonstrate that this method is remarkably efficient at increasing the performance of a VanderLugt correlator.

  19. Optimal digital filters for long-latency components of the event-related brain potential.

    PubMed

    Farwell, L A; Martinerie, J M; Bashore, T R; Rapp, P E; Goddard, P H

    1993-05-01

    A fundamentally important problem for cognitive psychophysiologists is selection of the appropriate off-line digital filter to extract signal from noise in the event-related brain potential (ERP) recorded at the scalp. Investigators in the field typically use a type of finite impulse response (FIR) filter known as a moving-average or boxcar filter to achieve this end. However, this type of filter can produce significant amplitude diminution and distortion of the shape of the ERP waveform. Thus, there is a need to identify more appropriate filters. In this paper, we compare the performance of another type of FIR filter that, unlike the boxcar filter, is designed with an optimizing algorithm that reduces signal distortion and maximizes signal extraction (referred to here as an optimal FIR filter). We applied several different filters of both types to ERP data containing the P300 component. This comparison revealed that boxcar filters reduced the contribution of high-frequency noise to the ERP but in so doing produced a substantial attenuation of P300 amplitude and, in some cases, substantial distortions of the shape of the waveform, resulting in significant errors in latency estimation. In contrast, the optimal FIR filters preserved P300 amplitude, morphology, and latency and also eliminated high-frequency noise more effectively than did the boxcar filters. The implications of these results for data acquisition and analysis are discussed. PMID:8497560
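    The amplitude diminution attributed to boxcar filters above is easy to reproduce: averaging across a sharp peak necessarily flattens it. The signal below is a made-up P300-like peak, not ERP data, and the window width is arbitrary.

```python
def boxcar(x, width):
    """Moving-average (boxcar) filter with zero padding at the edges."""
    half = width // 2
    pad = lambda i: x[i] if 0 <= i < len(x) else 0.0
    return [sum(pad(n + j) for j in range(-half, half + 1)) / width
            for n in range(len(x))]

# a P300-like peak of amplitude 1.0 is attenuated by boxcar smoothing
peak = [0.0] * 5 + [0.5, 1.0, 0.5] + [0.0] * 5
smoothed = boxcar(peak, 5)
```

An optimal FIR design would instead shape the passband to preserve the peak's spectral content while still rejecting high-frequency noise.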

  20. The Optimal Design of Weighted Order Statistics Filters by Using Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Yu, Pao-Ta

    2006-12-01

    Support vector machines (SVMs), a classification algorithm from the machine learning community, have been shown to provide higher performance than traditional learning machines. In this paper, the technique of SVMs is introduced into the design of weighted order statistics (WOS) filters. WOS filters are highly effective in processing digital signals because they have a simple window structure. However, due to threshold decomposition and the stacking property, conventional approaches to designing WOS filters struggle to reduce both design complexity and estimation error. This paper proposes a new design technique which improves the learning speed and reduces the complexity of designing WOS filters. This technique uses a dichotomous approach to reduce the Boolean functions from 255 levels to two levels, which are separated by an optimal hyperplane. Furthermore, the optimal hyperplane is obtained using the technique of SVMs. Our proposed method approximates the optimal weighted order statistics filters more rapidly than adaptive neural filters.
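    For readers unfamiliar with WOS filters themselves, a direct (non-SVM) implementation is a few lines: each sample in the window is duplicated according to its weight, and the threshold-th smallest value of the expanded list is the output. The weights and threshold below are illustrative.

```python
def wos_filter(signal, weights, threshold):
    """Weighted order statistic filter: within each window, samples are
    duplicated according to their integer weights, sorted, and the
    threshold-th smallest value is output (a weighted median when the
    threshold is about half the total weight)."""
    half = len(weights) // 2
    out = []
    for i in range(half, len(signal) - half):
        window = signal[i - half:i + half + 1]
        expanded = []
        for x, w in zip(window, weights):
            expanded.extend([x] * w)
        expanded.sort()
        out.append(expanded[threshold - 1])
    return out

# weighted median (weights sum to 7, threshold 4) removes impulse noise
sig = [1, 1, 1, 9, 1, 1, 1]
out = wos_filter(sig, weights=[1, 2, 1, 2, 1], threshold=4)  # → [1, 1, 1]
```

The design problem addressed in the paper is choosing `weights` and `threshold` optimally; the SVM-based approach learns the separating hyperplane that plays that role.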

  1. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul

    2015-03-01

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  2. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is preferable where accuracy is more essential than convergence speed.
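    The teacher and learner phases of TLBO can be sketched compactly. The sphere function below is a stand-in for the IIR model-matching error the paper minimizes, and population size, iteration count, and bounds are illustrative choices, not the paper's settings.

```python
import random

def tlbo(f, dim, bounds, pop=20, iters=150, seed=1):
    """Minimal TLBO sketch: teacher phase moves learners toward the best
    solution and away from the class mean; learner phase lets each
    learner interact with a random peer. f is the cost to minimize."""
    rnd = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(hi, max(lo, v))
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        costs = [f(x) for x in X]
        teacher = X[costs.index(min(costs))]
        mean = [sum(x[j] for x in X) / pop for j in range(dim)]
        for i in range(pop):
            # teacher phase (teaching factor Tf drawn from {1, 2})
            Tf = rnd.choice([1, 2])
            cand = [clip(X[i][j] + rnd.random() * (teacher[j] - Tf * mean[j]))
                    for j in range(dim)]
            if f(cand) < f(X[i]):
                X[i] = cand
            # learner phase: move toward a better peer, away from a worse one
            k = rnd.randrange(pop)
            sign = 1 if f(X[k]) < f(X[i]) else -1
            cand = [clip(X[i][j] + sign * rnd.random() * (X[k][j] - X[i][j]))
                    for j in range(dim)]
            if f(cand) < f(X[i]):
                X[i] = cand
    return min(X, key=f)

best = tlbo(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

Note that beyond `pop` and `iters`, there are no algorithm-specific tuning constants, which is the "parameter-less" property the abstract highlights.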

  3. Farrow structure implementation of fractional delay filter optimal in Chebyshev sense

    NASA Astrophysics Data System (ADS)

    Blok, Marek

    2006-03-01

    In this paper the problem of variable delay filter implementation based on the Farrow structure is discussed. The idea of such an implementation is to calculate, for each required delay, the coefficients of the fractional delay filter impulse response using delay-independent polynomials. This approach significantly decreases computational cost in applications that require frequent delay changes. The achieved reduction in computational complexity is especially important for recursive optimal filter design methods. In this paper we demonstrate that the quality and properties of fractional delay filters optimal in the Chebyshev sense can be retained even for low orders of the Farrow structure.
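    The fractional-delay idea behind the Farrow structure can be sketched with the classical Lagrange design, where each FIR coefficient is a fixed polynomial in the delay d. Note this is a stand-in: the paper concerns Chebyshev-optimal filters, for which the polynomials would be fitted rather than given in closed form.

```python
def lagrange_fd_coeffs(d, N=3):
    """Fractional-delay FIR coefficients via Lagrange interpolation.
    Each coefficient is a degree-N polynomial in d, which is exactly the
    property a Farrow structure exploits to avoid redesign per delay."""
    h = []
    for n in range(N + 1):
        c = 1.0
        for k in range(N + 1):
            if k != n:
                c *= (d - k) / (n - k)
        h.append(c)
    return h

def fd_filter(x, d):
    """Delay signal x by d samples (0 <= d <= 3) with the cubic FD filter."""
    h = lagrange_fd_coeffs(d)
    pad = lambda i: x[i] if 0 <= i < len(x) else 0.0
    return [sum(h[k] * pad(n - k) for k in range(len(h)))
            for n in range(len(x))]

# delaying a ramp by 1.5 samples reproduces the ramp shifted by 1.5
ramp = [float(n) for n in range(10)]
y = fd_filter(ramp, 1.5)
```

Because the coefficients are polynomials in d, a Farrow implementation evaluates a few fixed sub-filters once and combines them with powers of d, so changing the delay costs almost nothing.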

  4. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  5. Photorefractive two-beam coupling optimal thresholding filter for additive signal-dependent noise reduction

    NASA Astrophysics Data System (ADS)

    Fu, Jack; Khoury, Jehad; Cronin-Golomb, Mark; Woods, Charles L.

    1995-01-01

    Computer simulations of photorefractive thresholding filters for the reduction of artifact or dust noise demonstrate increases in signal-to-noise ratio (SNR) reaching 70% and 95%, respectively, of that provided by the Wiener filter, for inputs with an SNR of approximately 3. These simple, nearly optimal filters use a spectral thresholding profile that is proportional to the envelope of the noise spectrum. Alternative nonlinear filters with either 1/ν or constant thresholding profiles increase the SNR almost as much as the noise-envelope thresholding filter.

  6. Optease Vena Cava Filter Optimal Indwelling Time and Retrievability

    SciTech Connect

    Rimon, Uri; Bensaid, Paul; Golan, Gil; Garniek, Alexander; Khaitovich, Boris; Dotan, Zohar; Konen, Eli

    2011-06-15

    The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare change in retrieval policy during the years: group A, 60 patients with filter retrievals performed before December 31 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A, and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 days (range, 8-122). There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and in group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days, but more failures will be encountered with this approach.

  7. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
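    One of the shaping filters mentioned above, the trapezoid, can be sketched for an ideal step-like pulse using the textbook moving-sum difference plus a single accumulator. None of the GEANA specifics (ballistic-deficit correction, cusp shaping, pole-zero handling for exponential decay) are reproduced here; `k` and `m` values are illustrative.

```python
def trapezoid(v, k, m):
    """Trapezoidal shaper (rise time k, flat top m) for step-like pulses:
    a four-term moving-sum difference followed by one accumulator. For a
    step input the output ramps up over k samples, holds flat, then
    ramps back to zero."""
    pad = lambda i: v[i] if 0 <= i < len(v) else 0.0
    s, out = 0.0, []
    for n in range(len(v)):
        d = pad(n) - pad(n - k) - pad(n - k - m) + pad(n - 2 * k - m)
        s += d
        out.append(s)
    return out

step = [0.0] * 2 + [1.0] * 20   # ideal step pulse starting at sample 2
shaped = trapezoid(step, k=4, m=3)
```

The flat-top height (here k times the step amplitude) is what an energy-measurement pipeline samples; the flat top is what makes the shape tolerant of ballistic deficit in the first place.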

  8. Optimal and unbiased FIR filtering in discrete time state space with smoothing and predictive properties

    NASA Astrophysics Data System (ADS)

    Shmaliy, Yuriy S.; Ibarra-Manzano, Oscar

    2012-12-01

    We address p-shift finite impulse response optimal (OFIR) and unbiased (UFIR) algorithms for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a discrete point n over N neighboring points. The algorithms were designed for linear time-invariant state-space signal models with white Gaussian noise. The OFIR filter self-determines the initial mean square state function by solving the discrete algebraic Riccati equation. The UFIR filter, represented in both batch and iterative Kalman-like forms, does not require the noise covariances or initial errors. An example of application is given for smoothing and predictive filtering of a two-state polynomial model. Based upon this example, we show that exact optimality is redundant when N ≫ 1, and that a good suboptimal estimate can still be provided by a UFIR filter at a much lower cost.

  9. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  10. Bio-desulfurization of biogas using acidic biotrickling filter with dissolved oxygen in step feed recirculation.

    PubMed

    Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu

    2015-03-01

    Triple stage and single stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention time (EBRT 100-180 s) and liquid recirculation velocity (q 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q. But the highest q of 7.1 m/h forced a large amount of liquid through the media, reducing bed porosity and H2S removal in the S-BTF. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 gH2S/m(3)/h. In the subsequent continuous operation test, it was found that T-BTF could maintain higher H2S elimination capacity and removal efficiency at 175.6±41.6 gH2S/m(3)/h and 89.0±6.8% versus S-BTF at 159.9±42.8 gH2S/m(3)/h and 80.1±10.2%, respectively. Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031

  11. Optimization of continuous tube motion and step-and-shoot motion in digital breast tomosynthesis systems with patient motion

    NASA Astrophysics Data System (ADS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2012-03-01

    In digital breast tomosynthesis (DBT), a reconstruction of the breast is generated from projections acquired over a limited range of x-ray tube angles. There are two principal schemes for acquiring projections, continuous tube motion and step-and-shoot motion. Although continuous tube motion has the benefit of reducing patient motion by lowering scan time, it has the drawback of introducing blurring artifacts due to focal spot motion. The purpose of this work is to determine the optimal scan time which best balances this trade-off. To this end, the filtered backprojection reconstruction of a sinusoidal input is calculated. At various frequencies, the optimal scan time is determined by the value which maximizes the modulation of the reconstruction. Although prior authors have studied the dependency of the modulation on focal spot motion, this work is unique in also modeling patient motion. It is shown that because continuous tube motion and patient motion have competing influences on whether scan time should be long or short, the modulation is maximized by an intermediate scan time. This optimal scan time decreases with object velocity and increases with exposure time. To optimize step-and-shoot motion, we calculate the scan time for which the modulation attains the maximum value achievable in a comparable system with continuous tube motion. This scan time provides a threshold below which the benefits of step-and-shoot motion are justified. In conclusion, this work optimizes scan time in DBT systems with patient motion and either continuous tube motion or step-and-shoot motion by maximizing the modulation of the reconstruction.

  12. Optimization of atomic Faraday filters in the presence of homogeneous line broadening

    NASA Astrophysics Data System (ADS)

    Zentile, Mark A.; Keaveney, James; Mathew, Renju S.; Whiting, Daniel J.; Adams, Charles S.; Hughes, Ifan G.

    2015-09-01

    We show that homogeneous line broadening drastically affects the performance of atomic Faraday filters. We study the effects of cell length and find that the behaviour of line-centre filters is quite different from that of wing-type filters, where the effect of self-broadening is found to be particularly important. We use a computer optimization algorithm to find the best magnetic field and temperature for Faraday filters with a range of cell lengths, and experimentally realize one particular example using a micro-fabricated 87Rb vapour cell. We find excellent agreement between our theoretical model and experimental data.

  13. Optimized filtering of regional and teleseismic seismograms: results of maximizing SNR measurements from the wavelet transform and filter banks

    SciTech Connect

    Leach, R.R.; Schultz, C.; Dowla, F.

    1997-07-15

    Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas which have not been well studied or may have few available recordings. Development and testing of detection and discrimination algorithms requires a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratio (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands vary strongly as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content differs from that of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals or scales, each representing a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required.
    The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking, bearing estimation, regional characterization, and event detection. Results are shown for a set of low SNR events as well as 92 regional and teleseismic events in the Middle East.
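
    The scale-selection step above can be sketched with a Haar decomposition standing in for the paper's constant-Q transform; the window split, level count and SNR threshold below are illustrative assumptions:

```python
import math

def haar_details(x):
    """One Haar analysis step: (approximation, detail) at half length."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def rms(seg):
    return math.sqrt(sum(v * v for v in seg) / len(seg)) if seg else 0.0

def select_scales(trace, noise_end, levels=4, snr_min=2.0):
    """Return the scales whose event-window rms exceeds snr_min times
    the pre-event (noise) rms; these indicate the band-pass limits.
    len(trace) and noise_end must be divisible by 2**levels."""
    selected = []
    a = list(trace)
    split = noise_end
    for lev in range(1, levels + 1):
        a, d = haar_details(a)
        split //= 2
        snr = rms(d[split:]) / max(rms(d[:split]), 1e-12)
        if snr >= snr_min:
            selected.append(lev)
    return selected

# pre-event window is quiet; the "arrival" is a high-frequency burst
trace = [0.0] * 64 + [1.0 if i % 2 == 0 else -1.0 for i in range(64)]
picked = select_scales(trace, noise_end=64)
```

For the synthetic burst above, only the finest scale carries arrival energy, so only scale 1 is selected for the band-pass filter.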

  14. An Efficient and Optimal Filter for Identifying Point Sources in Millimeter/Submillimeter Wavelength Sky Maps

    NASA Astrophysics Data System (ADS)

    Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

    2013-07-01

    A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.
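
    A minimal 1-D analogue of such a noise-weighted matched filter can be written with a naive DFT. The real implementation works on 2-D maps with FFTs, measured noise spectra and coverage weights; everything below is a toy sketch:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def matched_filter(map_1d, psf, noise_psd):
    """Noise-weighted matched filter: multiply the map transform by the
    conjugate PSF transform, down-weight noisy frequencies, invert."""
    M, P = dft(map_1d), dft(psf)
    F = [m * p.conjugate() / max(n, 1e-12)
         for m, p, n in zip(M, P, noise_psd)]
    return idft(F)

# point source at index 10, smeared by a 3-sample PSF; white noise PSD
psf = [0.0] * 32
psf[0], psf[1], psf[31] = 1.0, 0.5, 0.5
mp = [0.0] * 32
mp[10], mp[9], mp[11] = 1.0, 0.5, 0.5
out = matched_filter(mp, psf, [1.0] * 32)
```

With white noise this reduces to circular cross-correlation with the PSF, so the filtered map peaks at the source position; a colored `noise_psd` suppresses the frequencies where the noise spectrum is large.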

  15. Optimal realizable filters and the minimum Euclidean distance principle. [for spatial light modulators

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1993-01-01

    Minimizing a Euclidean distance in the complex plane optimizes a wide class of correlation metrics for filters implemented on realistic devices. The algorithm searches over no more than two real scalars (gain and phase). It unifies a variety of previous solutions for special cases (e.g., a maximum signal-to-noise ratio with colored noise and a real filter and a maximum correlation intensity with no noise and a coupled filter). It extends optimal partial information filter theory to arbitrary spatial light modulators (fully complex, coupled, discrete, finite contrast ratio, and so forth), additive input noise (white or colored), spatially nonuniform filter modulators, and additive correlation detection noise (including signal dependent noise).

  16. Design of composite correlation filters for object recognition using multi-objective combinatorial optimization

    NASA Astrophysics Data System (ADS)

    Serrano Trujillo, Alejandra; Díaz Ramírez, Víctor H.; Trujillo, Leonardo

    2013-09-01

    Correlation filters for object recognition represent an attractive alternative to feature-based methods. These filters are usually synthesized as a combination of several training templates, which are commonly chosen in an ad hoc manner by the designer; therefore, there is no guarantee that the best set of templates is chosen. In this work, we propose a new approach for the design of composite correlation filters using a multi-objective evolutionary algorithm in conjunction with a variable-length coding technique. Given a vast search space of feasible templates, the algorithm finds a subset that allows the construction of a filter with optimized performance in terms of several performance metrics. The resultant filter is capable of recognizing geometrically distorted versions of a target under highly cluttered and noisy conditions. Computer simulation results obtained with the proposed approach are presented and discussed in terms of several performance metrics, and compared to those obtained with existing correlation filters.

  17. The optimal design of photonic crystal optical devices with step-wise linear refractive index

    NASA Astrophysics Data System (ADS)

    Ma, Ji; Wu, Xiang-Yao; Li, Hai-Bo; Li, Hong; Liu, Xiao-Jing; Zhang, Si-Qi; Chen, Wan-Jin; Wu, Yi-Heng

    2015-10-01

    In this paper, we have studied a one-dimensional step-wise linear photonic crystal with and without a defect layer, and analyzed the effect of the defect layer's position, thickness, and the real and imaginary parts of its refractive index on the transmissivity, electric field distribution and output electric field intensity. By calculation, we have obtained a set of optimal parameters with which optical devices, such as optical amplifiers, attenuators, and optical diodes, can be designed from the step-wise linear photonic crystal.

  18. Optimizing Fourier filtering for digital holographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Ooms, Thomas; Koek, Wouter; Braat, Joseph; Westerweel, Jerry

    2006-02-01

    In digital holographic particle image velocimetry, the particle image depth-of-focus and the inaccuracy of the measured particle position along the optical axis are relatively large in comparison to the characteristic transverse dimension of the reconstructed particle images. This is the result of a low optical numerical aperture (NA), which is limited by the relatively large pixel size of the CCD camera. Additionally, the anisotropic light scattering behaviour of the seeding particles further reduces the effective numerical aperture of the optical system and substantially increases the particle image depth-of-focus. Introducing an appropriate Fourier filter can significantly suppress this additional reduction of the NA. Experimental results illustrate that an improved Fourier filter reduces the particle image depth-of-focus. For the system described in this paper, this improvement is nearly a factor of 5. Using the improved Fourier filter comes with an acceptable reduction of the hologram intensity, so an extended exposure time is needed to maintain the exposure level.

  19. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    SciTech Connect

    Sun, W Y

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  20. Single step optimization of feedback-decoupled collision avoidance manipulator maneuvers

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III; Fadali, M. S.

    1986-01-01

    Simultaneous robot path planning and path following is shown to be achievable in the presence of motor saturation and obstacle avoidance requirements. The discrete time algorithm derived performs one step ahead mean square optimization of commanded joint accelerations, subject to present actuator force or torque constraints and N step ahead prediction of configuration constraints.

  1. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

    An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but the filters designed and fabricated to date have not attained the spectral selectivity (≤32 cm⁻¹) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here the optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm⁻¹. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have a narrower bandwidth than a filter designed by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filter-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

  2. An infeasible interior-point algorithm with full-Newton step for linear optimization

    NASA Astrophysics Data System (ADS)

    Liu, Zhongyi; Sun, Wenyu

    2007-10-01

    Recently, Roos (SIAM J Optim 16(4):1110-1136, 2006) presented a primal-dual infeasible interior-point algorithm that uses full-Newton steps and whose iteration bound coincides with the best known bound for infeasible interior-point algorithms. In the current paper we use a different feasibility step such that the definition of the feasibility step in Mansouri and Roos (Optim Methods Softw 22(3):519-530, 2007) is a special case of our definition, and show that the same result on the order of iteration complexity can be obtained.

  3. Optimization of the fine structure and flow behavior of anisotropic porous filters, synthesized by SLS method

    NASA Astrophysics Data System (ADS)

    Shishkovsky, I.; Sherbakov, V.; Pitrov, A.

    2007-06-01

    The main goal of this work was optimization of the phase and porous fine structure of filter elements, subsequent laser synthesis of functional devices by layer-by-layer Selective Laser Sintering (SLS), and exploration of their properties and synthesis requirements. Common methodological approaches were developed for finding the optimal conditions of layer-by-layer synthesis applicable to different powder compositions, along with concrete guidelines (sintering conditions, powder composition, etc.) for SLS of filter elements (including anisotropic ones) from a metal-polymer powder mixture of brass + polycarbonate (PC) = 6:1. On the basis of numerical simulations, an original graph-numerical procedure was designed and a computer program was developed for determining the flow performance of cylindrical filters, both homogeneous (isotropic) and heterogeneous (anisotropic). Calculation of the flow behavior of anisotropic filter elements allows their future applications to be predicted and managed.

  4. On the application of optimal wavelet filter banks for ECG signal classification

    NASA Astrophysics Data System (ADS)

    Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.

    2014-03-01

    This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
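
    The idea of tuning filter-bank parameters against a cost function while keeping perfect reconstruction can be sketched with the smallest orthogonal lattice: a two-tap filter parameterized by one angle. Real designs use longer lattices and sharper cost functions, so this is only a structural illustration:

```python
import math

def lowpass_stopband_energy(theta, grid=200):
    """Average |H(w)|^2 over the stopband [pi/2, pi] for the orthogonal
    two-tap filter h = [cos(theta), sin(theta)].  Every theta yields a
    perfect-reconstruction pair, so the cost can be tuned freely
    without losing PR."""
    total = 0.0
    for i in range(grid):
        w = math.pi / 2 + (math.pi / 2) * i / (grid - 1)
        # H(w) = cos(theta) + sin(theta) * exp(-i w)
        H = complex(math.cos(theta) + math.sin(theta) * math.cos(w),
                    -math.sin(theta) * math.sin(w))
        total += abs(H) ** 2
    return total / grid

# grid search over the lattice angle
best_theta = min((k * math.pi / 2 / 400 for k in range(401)),
                 key=lowpass_stopband_energy)
```

The search recovers θ ≈ π/4, i.e. the Haar filter, which is the best two-tap compromise; longer lattices (more angles) admit genuinely sharper cut-offs, which is what the paper optimizes.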

  5. Three-Dimensional Micro Propeller Design by Using Efficient Two Step Optimization

    NASA Astrophysics Data System (ADS)

    Lee, Ki-Hak; Jeon, Yong-Hee; Kim, Kyu-Hong; Lee, Dong-Ho; Lee, Kyung-Tae

    A practical and efficient optimal design procedure is presented for three-dimensional micro-propellers. To manage the many design-related variables and operating conditions efficiently, the design procedure consists of two steps: optimization of operating conditions and of blade geometries. First, operating condition points are extracted from the design of experiments and provided as the input data of the geometry optimization step. Next, in the geometry optimization step, the 2-D airfoil shapes are optimized to provide the maximum lift-to-drag ratio along the radial blade section by using the XFOIL code, and the 3-D blade shapes are determined at each operating condition by using the minimum energy loss method. Then, the performance of the optimized blades is calculated, and a Response Surface Model is constructed to decide the operating condition for the maximum propeller efficiency. To find a blade shape with better performance than the optimum shape in the initial design space, the design space is modified to a highly feasible design space by using a probability approach. Finally, the performance of the optimized propeller is compared with that of the Black Widow MAV propeller; the comparison showed that the optimized propeller had somewhat better performance. The present optimal design procedure is reliable and can be used as a practical design tool for micro-propeller development.

  6. Optimal implementation approach for discrete wavelet transform using FIR filter banks on FPGAs

    NASA Astrophysics Data System (ADS)

    Sargunaraj, Joe J.; Rao, Sathyanarayana S.

    1998-10-01

    We present a wavelet transform implementation approach using a FIR filter bank that uses a Wallace tree structure for fast multiplication. VHDL models targeted specifically for synthesis have been written for clocked data registers, adders and the multiplier. Symmetric wavelets such as biorthogonal wavelets can be implemented using this design, and by changing the input filter coefficients, different wavelet decompositions may be implemented. The design is mapped onto the ORCA series FPGA after synthesis and optimization for timing and area.

  7. Optimizing the Choice of Filter Sets for Space Based Imaging Instruments

    NASA Astrophysics Data System (ADS)

    Elliott, Rachel E.; Farrah, Duncan; Petty, Sara M.; Harris, Kathryn Amy

    2015-01-01

    We investigate the challenge of selecting a limited number of filters for space based imaging instruments such that they are able to address multiple heterogeneous science goals. The number of available filter slots for a mission is bounded by factors such as instrument size and cost. We explore methods used to extract the optimal group of filters such that they complement each other most effectively. We focus on three approaches: maximizing the separation of objects in two-dimensional color planes, SED fitting to select those filter sets that give the finest resolution in fitted physical parameters, and maximizing the orthogonality of physical parameter vectors in N-dimensional color-color space. These techniques are applied to a test case, a UV/optical imager with space for five filters, with the goal of measuring the properties of local stars through to distant galaxies.

  8. Design, optimization and fabrication of an optical mode filter for integrated optics.

    PubMed

    Magnin, Vincent; Zegaoui, Malek; Harari, Joseph; François, Marc; Decoster, Didier

    2009-04-27

    We present the design, optimization, fabrication and characterization of an optical mode filter, which attenuates the snaking behavior of light caused by a lateral misalignment of the input optical fiber relative to an optical circuit. The mode filter is realized as a bottleneck section inserted in an optical waveguide in front of a branching element and is designed with Bézier curves. Its effect, which depends on the optical state of polarization, is experimentally demonstrated by investigating the equilibrium of an optical splitter, which is greatly improved, although only in TM mode. The measured optical losses induced by the filter are 0.28 dB. PMID:19399117

  9. Design and optimization of high reflectance graded index optical filter with quintic apodization

    NASA Astrophysics Data System (ADS)

    Praveen Kumar, Vemuri S. R. S.; Sunita, Parinam; Kumar, Mukesh; Rao, Parinam Krishna; Kumari, Neelam; Karar, Vinod; Sharma, Amit L.

    2015-06-01

    Rugate filters are a special kind of graded-index films that may provide advantages in both optical performance and mechanical properties of optical coatings. In this work, the design and optimization of a high-reflection rugate filter with a reflection peak at 540 nm is presented, further optimized for side-lobe suppression. A suitable number of apodization and matching layers, generated through a quintic function, were added to the basic sinusoidal refractive index profile to achieve a high reflectance of around 80% in the rejection window at normal incidence. The smaller index contrast between successive layers in the present design leads to less residual stress in the thin-film stack, which enhances the adhesion and mechanical strength of the filter. The optimized results show excellent side-lobe suppression around the stopband.
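
    A quintic apodization envelope of the kind described can be sketched as follows; the profile parameters (average index, modulation depth, period, apodized fraction) are illustrative, not taken from the paper:

```python
import math

def quintic(t):
    """Quintic smoothstep: q(0)=0, q(1)=1, with zero first and second
    derivatives at both ends, giving smooth index transitions."""
    return t ** 3 * (10.0 - 15.0 * t + 6.0 * t ** 2)

def rugate_index(x, length, n_avg=1.8, dn=0.1, period=0.18, apod_frac=0.2):
    """Sinusoidal rugate profile with quintic apodization envelopes
    at both ends (all parameter values here are illustrative)."""
    edge = apod_frac * length
    if x < edge:
        env = quintic(x / edge)
    elif x > length - edge:
        env = quintic((length - x) / edge)
    else:
        env = 1.0
    return n_avg + 0.5 * dn * env * math.sin(2.0 * math.pi * x / period)
```

Ramping the modulation up and down smoothly instead of truncating it is what suppresses the side lobes; the quintic's vanishing end derivatives avoid the abrupt index steps that cause ringing in the reflectance spectrum.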

  10. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that approximates the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and a fast convergence rate). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
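
    A minimal cuckoo-search loop of the kind applied above looks as follows; the Lévy-flight step uses Mantegna's algorithm, and the quadratic `wls` stand-in replaces the paper's weighted-least-squares magnitude/phase error, so this is a structural sketch only:

```python
import math, random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, iters=200, pa=0.25, seed=1):
    """Minimal CSA: Levy-flight moves plus abandonment of poor nests."""
    random.seed(seed)
    nests = [[random.uniform(-2, 2) for _ in range(dim)]
             for _ in range(n_nests)]
    fit = [cost(n) for n in nests]
    best = min(fit)
    for _ in range(iters):
        # generate a new solution by a Levy flight from a random nest
        i = random.randrange(n_nests)
        trial = [x + 0.01 * levy_step() for x in nests[i]]
        j = random.randrange(n_nests)
        if cost(trial) < fit[j]:
            nests[j], fit[j] = trial, cost(trial)
        # abandon a fraction pa of the worse nests (never the best one)
        for j in range(n_nests):
            if random.random() < pa and fit[j] > best:
                nests[j] = [random.uniform(-2, 2) for _ in range(dim)]
                fit[j] = cost(nests[j])
        best = min(best, min(fit))
    return best

# stand-in for the WLS fitness over filter coefficients (illustrative)
wls = lambda c: sum(v * v for v in c)
best = cuckoo_search(wls, 3)
```

Elitism (the best nest is never abandoned) makes the best fitness monotone non-increasing over iterations, which is the property the convergence-rate comparison in the paper rests on.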

  11. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.

  12. Optimized design of N optical filters for color and polarization imaging.

    PubMed

    Tu, Xingzhou; Pau, Stanley

    2016-02-01

    Designs of N optical filters for color and polarization imaging are found by minimizing detector noise, photon shot noise, and interpolation error for the image acquisition in a division of focal plane configuration. To minimize interpolation error, a general tiling procedure and an optimized tiling pattern for N filters are presented. For multispectral imaging, a general technique to find the transmission band is presented. For full Stokes polarization imaging, the general design with optimized retardances and fast angles of the polarizers is compared with the solution of the Thomson problem. These results are applied to the design of a three-color full Stokes imaging camera. PMID:26906867

  13. Optimization of high-channel-count fiber Bragg grating filters design with low dispersion

    NASA Astrophysics Data System (ADS)

    Jiang, Hao; Chen, Jing; Liu, Tundong

    2015-02-01

    An optimization-based technique for high-channel-count fiber Bragg grating (FBG) filter synthesis is proposed. The approach is based on utilizing a tailored group delay to construct a mathematical optimization model. In the objective function, both the maximum index modulation and the dispersion of FBG must be optimized simultaneously. An effective evolutionary algorithm, the differential evolution (DE) algorithm, is applied to find the optimal group delay parameter. Design examples demonstrate that the proposed approach yields a remarkable reduction in maximum index modulation with low dispersion in each channel.
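
    The DE/rand/1/bin scheme used for the group-delay search can be sketched generically; the quadratic objective below is only a stand-in for the paper's index-modulation/dispersion cost:

```python
import random

def differential_evolution(cost, dim, pop_size=20, F=0.7, CR=0.9,
                           iters=150, bounds=(-1.0, 1.0), seed=7):
    """Minimal DE/rand/1/bin loop: mutate with a scaled difference of
    two random members, binomially cross over, keep the trial only if
    it is no worse (greedy selection)."""
    random.seed(seed)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    fit = [cost(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample(
                [k for k in range(pop_size) if k != i], 3)
            jrand = random.randrange(dim)  # force at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (random.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            f = cost(trial)
            if f <= fit[i]:
                pop[i], fit[i] = trial, f
    return min(fit)

# stand-in objective; the paper evaluates index modulation and dispersion
best = differential_evolution(lambda v: sum(x * x for x in v), 4)
```

Greedy selection guarantees the best cost never worsens between generations, and the difference-vector mutation adapts the step size to the population spread, which suits the multi-parameter group-delay search.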

  14. Non-dominated sorting genetic algorithm in optimizing ninth order multiple feedback Chebyshev low pass filter

    NASA Astrophysics Data System (ADS)

    Lim, Wei Jer; Neoh, Siew Chin; Norizan, Mohd Natashah; Mohamad, Ili Salwani

    2015-05-01

    Optimization of a complex circuit design often requires a large amount of manpower and computational resources. To optimize circuit performance, it is critical for circuit designers not only to adjust the component values but also to fulfill objectives such as gain, cutoff frequency, and ripple. This paper proposes the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to optimize a ninth-order multiple-feedback Chebyshev low-pass filter. Multi-objective Pareto-based optimization is involved, whereby the research aims to obtain the best trade-off among minimizing the pass-band ripple, maximizing the output gain, and achieving the targeted cut-off frequency. The developed NSGA-II algorithm is executed with the NGSPICE circuit simulator to assess the filter performance. Overall, the results show satisfactory achievement of the required design specifications.

  15. Optimal matched filter design for ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2016-02-01

    Coarse grain materials are widely used in a variety of key industrial sectors like energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructures, which is time-invariant and demonstrates similar spectral characteristics as flaw signals. As a result, the reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters of the filter response with the objective of maximising the output signal-to-noise ratio (SNR). Experiments with a 128-element 5 MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB if the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real-time with low extra computational cost.
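
    The PSO search over filter parameters can be sketched as a generic maximization loop; the unimodal `objective` below is a stand-in for the output-SNR computation on A-scan data, so only the structure of the optimizer is illustrated:

```python
import random

def pso_maximize(snr, dim, n_particles=12, iters=100,
                 w=0.6, c1=1.5, c2=1.5, seed=3):
    """Minimal PSO: each particle is pulled toward its personal best
    and the swarm's global best; `snr` is the objective to maximize."""
    random.seed(seed)
    pos = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [snr(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * random.random() * (pbest[i][j] - pos[i][j])
                             + c2 * random.random() * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            f = snr(pos[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f > gfit:
                    gbest, gfit = pos[i][:], f
    return gfit

# stand-in objective with a known maximum of 1.0 at the origin
objective = lambda p: 1.0 / (1.0 + sum(x * x for x in p))
```

In the paper's setting each `snr` evaluation would filter the A-scans with the candidate matched filter and measure the image SNR; PSO is attractive there because that objective is noisy and gradient-free.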

  16. Nonlinear optimal filter technique for analyzing energy depositions in TES sensors driven into saturation

    NASA Astrophysics Data System (ADS)

    Shank, B.; Yen, J. J.; Cabrera, B.; Kreikebaum, J. M.; Moffatt, R.; Redl, P.; Young, B. A.; Brink, P. L.; Cherry, M.; Tomada, A.

    2014-11-01

    We present a detailed thermal and electrical model of superconducting transition edge sensors (TESs) connected to quasiparticle (qp) traps, such as the W TESs connected to Al qp traps used for CDMS (Cryogenic Dark Matter Search) Ge and Si detectors. We show that this improved model, together with a straightforward time-domain optimal filter, can be used to analyze pulses well into the nonlinear saturation region and reconstruct absorbed energies with optimal energy resolution.

  17. Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings

    SciTech Connect

    Toroker, Zeev; Horowitz, Moshe

    2008-03-15

    We present an optimized split-step method for solving nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time duration without significantly affecting the result accuracy. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
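
    The step-size/accuracy trade-off of split-step integration can be illustrated on a scalar toy analogue of the coupled-mode equations, chosen so that a closed-form solution exists to measure the splitting error; this is not the paper's solver:

```python
import cmath, math

def split_step(u0, a, b, T, n_steps):
    """First-order operator splitting for du/dt = (-a + i*b*|u|^2) u:
    apply the linear loss and the nonlinear phase rotation alternately
    over each step dt."""
    dt = T / n_steps
    u = u0
    for _ in range(n_steps):
        u *= cmath.exp(-a * dt)                     # linear part
        u *= cmath.exp(1j * b * abs(u) ** 2 * dt)   # nonlinear phase
    return u

def exact(u0, a, b, T):
    """Closed-form solution of the same ODE, used as the reference:
    |u|(t) = |u0| e^{-a t}, and the phase integrates b|u|^2."""
    mag = abs(u0) * math.exp(-a * T)
    phase = (cmath.phase(u0)
             + b * abs(u0) ** 2 * (1 - math.exp(-2 * a * T)) / (2 * a))
    return mag * cmath.exp(1j * phase)
```

Refining the step size shrinks the first-order splitting error, which is the knob the optimized method turns separately in space and time; note that this particular splitting reproduces the magnitude exactly and errs only in the accumulated phase.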

  18. A two-step crushed lava rock filter unit for grey water treatment at household level in an urban slum.

    PubMed

    Katukiza, A Y; Ronteltap, M; Niwagaba, C B; Kansiime, F; Lens, P N L

    2014-01-15

    Decentralised grey water treatment in urban slums using low-cost and robust technologies offers opportunities to minimise public health risks and to reduce environmental pollution caused by the highly polluted grey water, i.e., with COD and N concentrations of 3000-6000 mg/L and 30-40 mg/L, respectively. However, there has been very limited action research on reducing the pollution load from uncontrolled grey water discharge by households in urban slums. This study was therefore carried out to investigate the potential of a two-step filtration process using a crushed lava rock filter to reduce the grey water pollution load in an urban slum, to determine the main filter design and operation parameters, and to assess the effect of intermittent flow on the grey water effluent quality. A two-step crushed lava rock filter unit was designed and implemented for use by a household in the Bwaise III slum in Kampala city (Uganda). It was monitored at a varying hydraulic loading rate (HLR) of 0.5-1.1 m/d as well as at a constant HLR of 0.39 m/d. The removal efficiencies of COD, TP and TKN were, respectively, 85.9%, 58% and 65.5% under the varying HLR and 90.5%, 59.5% and 69% when operating at the constant HLR regime. In addition, the log removals of Escherichia coli, Salmonella spp. and total coliforms were, respectively, 3.8, 3.2 and 3.9 under the varying HLR and 3.9, 3.5 and 3.9 at the constant HLR. The results show that the use of a two-step filtration process, as well as a lower constant HLR, increased the pollutant removal efficiencies. Further research is needed to investigate the feasibility of adding a tertiary treatment step to increase nutrient and microorganism removal from grey water. PMID:24388927

  19. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  20. Plate/shell topological optimization subjected to linear buckling constraints by adopting composite exponential filtering function

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang

    2015-11-01

    In this paper, a model of topology optimization with linear buckling constraints is established based on an independent and continuous mapping method to minimize the plate/shell structure weight. A composite exponential function (CEF) is selected as the filtering function for the element weight, the element stiffness matrix and the element geometric stiffness matrix; it recognizes the design variables and implements the changing process of the design variables from "discrete" to "continuous" and back to "discrete". The buckling constraints are approximated as explicit formulations based on the Taylor expansion and the filtering function. The optimization model is transformed into dual programming and solved by the dual sequence quadratic programming algorithm. Finally, three numerical examples with the power function and the CEF as filter functions are analyzed and discussed to demonstrate the feasibility and efficiency of the proposed method.

  1. A three-step test of phosphate sorption efficiency of potential agricultural drainage filter materials.

    PubMed

    Lyngsie, G; Borggaard, O K; Hansen, H C B

    2014-03-15

    Phosphorus (P) eutrophication of lakes and streams, caused by drainage from farmland, is a serious problem in areas with intensive agriculture. Installation of P sorbing filters at drain outlets may be a solution. Efficient sorbents for such filters must possess high P bonding affinity to retain ortho-phosphate (Pi) at low concentrations. In addition, high P sorption capacity, fast bonding and low desorption are necessary. In this study, five potential filter materials (Filtralite-P(R), limestone, calcined diatomaceous earth, shell-sand and iron-oxide based CFH) in four particle size intervals were investigated under field-relevant P concentrations (0-161 µM) and retention times of 0-24 min. Of the five materials examined, the results from P sorption and desorption studies clearly demonstrate that the iron based CFH is superior as a filter material to the calcium based materials when tested against criteria for sorption affinity, capacity and stability. The finest CFH and Filtralite-P(R) fractions (0.05-0.5 mm) performed best, with retention of ≥90% of Pi from an initial concentration of 161 µM, corresponding to 14.5 mmol/kg sorbed within 24 min. They were furthermore capable of retaining ≥90% of Pi from an initially 16 µM solution within 1 min. However, only the finest CFH fraction was also able to retain ≥90% of the Pi sorbed from the 16 µM solution against four successive desorption steps with 6 mM KNO3. Among the materials investigated, the finest CFH fraction is therefore the only suitable filter material when very fast and strong bonding of high Pi concentrations is needed, e.g. in drains under P rich soils during extreme weather conditions. PMID:24275107

  2. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied to it. During initialization, a randomly generated population of solutions is chosen, the opposite solutions are also considered, and the fitter of each pair is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in a balance between exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
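The two OHS ingredients described above, opposition-based initialization and the harmony-search improvisation rules, can be sketched on a generic minimization fitness. A sphere function stands in for the FIR error fitness, and the HMCR, PAR, and bandwidth values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                        # stand-in fitness (lower is better)
    return np.sum(np.atleast_2d(x) ** 2, axis=-1)

lo, hi, hms, dim = -5.0, 5.0, 10, 8   # bounds, harmony memory size, dims

# Opposition-based initialization: evaluate each random harmony and its
# "opposite" (lo + hi - x), keep the fitter half as the initial memory.
X = rng.uniform(lo, hi, (hms, dim))
both = np.vstack([X, lo + hi - X])
memory = both[np.argsort(sphere(both))[:hms]]
worst0 = sphere(memory).max()

# Improvisation: memory consideration, pitch adjustment, random selection
HMCR, PAR, bw = 0.9, 0.3, 0.1
for _ in range(200):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < HMCR:
            new[j] = memory[rng.integers(hms), j]      # memory consideration
            if rng.random() < PAR:
                new[j] += bw * rng.uniform(-1, 1)      # pitch adjustment
        else:
            new[j] = rng.uniform(lo, hi)               # random selection
    worst = np.argmax(sphere(memory))
    if sphere(new)[0] < sphere(memory[worst])[0]:
        memory[worst] = new                            # replace worst harmony
print("worst fitness:", worst0, "->", sphere(memory).max())
```

In the full OHS the same opposition step is also reapplied periodically (generation jumping); that and the FIR-specific fitness are omitted here.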

  3. Image quality and dose optimization using novel x-ray source filters tailored to patient size

    NASA Astrophysics Data System (ADS)

    Toth, Thomas L.; Cesmeli, Erdogan; Ikhlef, Aziz; Horiuchi, Tetsuya

    2005-04-01

    The expanding set of CT clinical applications demands increased attention to obtaining the maximum image quality at the lowest possible dose. Pre-patient beam shaping filters provide an effective means to improve dose utilization. In this paper we develop and apply characterization methods that lead to a set of filters appropriately matched to the patient. We developed computer models to estimate image noise and a patient-size-adjusted CTDI dose. The noise model is based on polychromatic X-ray calculations. The dose model is empirically derived by fitting CTDI-style dose measurements for a demographically representative set of phantom sizes and shapes with various beam shaping filters. The models were validated and used to determine the optimum image quality versus dose for a range of patient sizes. The models clearly show that an optimum beam shaping filter exists as a function of object diameter. Based on noise and dose alone, overall dose efficiency advantages of 50% were obtained by matching the filter shape to the size of the object. A set of patient matching filters is used in the GE LightSpeed VCT and Pro32 to provide a practical solution for optimum image quality at the lowest possible dose over the range of patient sizes and clinical applications. Moreover, these filters mark the beginning of personalized medicine, where CT scanner image quality and radiation dose utilization are truly individualized and optimized for the patient being scanned.

  4. Performance optimization of total momentum filtering double-resonance energy selective electron heat pump

    NASA Astrophysics Data System (ADS)

    Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui

    2016-04-01

    A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas its coefficient of performance (COP) is lower, than that of an ESE heat pump with a single-resonance filter. The performance characteristics of ESE heat pumps under the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of ESE heat pumps with a conventional filtering mechanism differ markedly from those of a device with total momentum filtering, a difference induced by the extra electron momentum components beyond the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.

  5. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
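The forward-backward averaging described above can be sketched for a scalar random-walk state, a toy stand-in for the attitude state: a causal Kalman filter and an anti-causal filter run over the reversed data are fused by inverse-variance weighting, and mid-record the fused variance is roughly half the filter variance, as the abstract notes. All model parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: scalar random walk x[k+1] = x[k] + w,  y[k] = x[k] + v
N, Q, R = 200, 0.01, 1.0
x = np.cumsum(np.sqrt(Q) * rng.normal(size=N))   # true state
y = x + np.sqrt(R) * rng.normal(size=N)          # measurements

def kalman(obs):
    """Causal scalar Kalman filter; returns filtered means and variances."""
    m, P = 0.0, 10.0                             # near-diffuse prior
    ms, Ps = np.empty(len(obs)), np.empty(len(obs))
    for k, yk in enumerate(obs):
        P += Q                                   # predict
        K = P / (P + R)                          # gain
        m, P = m + K * (yk - m), (1 - K) * P     # update
        ms[k], Ps[k] = m, P
    return ms, Ps

mf, Pf = kalman(y)                               # forward pass
mb, Pb = kalman(y[::-1])                         # backward pass, reversed data
mb, Pb = mb[::-1], Pb[::-1]

# Fuse: at step k, combine the forward estimate (uses y[0..k]) with the
# backward estimate predicted from future data only (uses y[k+1..N-1])
ms, Ps = mf.copy(), Pf.copy()
mbp, Pbp = mb[1:], Pb[1:] + Q                    # one-step backward prediction
Ps[:-1] = 1.0 / (1.0 / Pf[:-1] + 1.0 / Pbp)     # inverse-variance fusion
ms[:-1] = Ps[:-1] * (mf[:-1] / Pf[:-1] + mbp / Pbp)

rmse = lambda e: np.sqrt(np.mean(e**2))
print(f"filter RMSE {rmse(mf - x):.3f}, smoother RMSE {rmse(ms - x):.3f}, "
      f"mid-record variance ratio {Ps[N // 2] / Pf[N // 2]:.2f}")
```

The end of the record keeps the forward filter estimate, since no future measurements exist there; the same boundary effect appears in the spacecraft application.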

  6. Strengthening the revenue cycle: a 4-step method for optimizing payment.

    PubMed

    Clark, Jonathan J

    2008-10-01

    Four steps for enhancing the revenue cycle to ensure optimal payment are: *Establish key performance indicator dashboards in each department that compare current with targeted performance; *Create proper organizational structures for each department; *Ensure that high-performing leaders are hired in all management and supervisory positions; *Implement efficient processes in underperforming operations. PMID:18839662

  7. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    SciTech Connect

    Singer, M A; Wang, S L; Diachin, D P

    2009-12-03

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  8. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Cai, Jiandong; Wang, Michael Yu; Zhang, Li

    2015-12-01

    In multifrequency atomic force microscopy (AFM), a probe whose higher resonance frequencies are assigned to integer harmonics offers a remarkable improvement of detection sensitivity at those harmonic components. The selection criterion for the harmonic order is based on its amplitude's sensitivity to material properties, e.g., elasticity. Previous approaches to designing harmonic probes have been unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a step cross section, with variable width in the top and bottom steps, while the middle step of the cross section is kept constant. Higher order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using a focused ion beam milling technique and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement.

  9. Optimization of a 90Sr/90Y radiation source train stepping for intravascular brachytherapy.

    PubMed

    Miften, Moyed M; Das, Shiva K; Shafman, Timothy D; Marks, Lawrence B

    2002-12-01

    A steepest-descent gradient algorithm is developed to optimize the stepping of a 90Sr/90Y radiation source train (RST) for intravascular brachytherapy (IVB). The objective is to deliver a uniform dose to a coronary target vessel and minimize the dose to adjacent normal vessel tissue at the proximal and distal edges of the target. Based on the target length and the number of dwell points (number of steps), the algorithm modulates the dwell times and corresponding dwell positions to optimize the weighted addition of staggered EGS4 Monte Carlo (MC) calculated dose distributions from a single RST. Stepping treatment plans are generated for target vessel lengths of 3.0, 3.3, and 3.8 cm. For both the unoptimized and optimized plans, the dose heterogeneity in the target vessel wall and the length of nontarget vessel receiving 3 Gy are assessed to compare plans. Optimization results show a 14% dose uniformity within the target is achievable for all vessel lengths. Further, the dose in the adjacent normal tissue is lower in the optimized plans than in the unoptimized plans. The work presented in this paper provides a model to address the finite length of the RST in IVB treatments. While the results presented are specific to the 90Sr/90Y RST, the methods should apply to other finite-length RSTs. PMID:12512724
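A steepest-descent update of the dwell weights can be sketched as follows, using an illustrative Gaussian dose kernel and hypothetical geometry in place of the EGS4 Monte Carlo dose distributions, with a projection step keeping dwell times non-negative:

```python
import numpy as np

# Illustrative geometry: dose points along the vessel and source dwell stops
z = np.linspace(-3, 3, 121)                      # cm along the vessel
stops = np.linspace(-1.5, 1.5, 11)               # dwell positions, cm
# Hypothetical Gaussian dose kernel (stand-in for EGS4 MC dose data)
D = np.exp(-0.5 * ((z[:, None] - stops[None, :]) / 0.3) ** 2)
target = np.where(np.abs(z) <= 1.5, 1.0, 0.0)    # uniform dose in target
weight = np.where(np.abs(z) <= 1.5, 1.0, 0.2)    # penalize target error more

def cost(w):
    r = D @ w - target
    return float(np.sum(weight * r**2))

w = np.full(len(stops), 0.5)                     # initial dwell weights
c0 = cost(w)
lr = 0.4 / np.linalg.norm(D.T @ (weight[:, None] * D), 2)  # stable step size
for _ in range(500):
    grad = 2 * D.T @ (weight * (D @ w - target)) # steepest-descent direction
    w = np.maximum(w - lr * grad, 0.0)           # keep dwell times >= 0
print(f"weighted squared-error cost: {c0:.3f} -> {cost(w):.3f}")
```

The optimized weights typically boost the end dwell positions to flatten the dose across the target edges, the same qualitative behaviour reported for the optimized plans.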

  10. Optimal band selection in hyperspectral remote sensing of aquatic benthic features: a wavelet filter window approach

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    2006-09-01

    This paper describes a wavelet based approach to derivative spectroscopy. The approach is utilized to select, through optimization, optimal channels or bands for use in derivative based remote sensing algorithms. The approach is applied to airborne and modeled or synthetic reflectance signatures of environmental media and of features or objects within such media, such as benthic submerged vegetation canopies. The technique can also be applied to selected pixels identified within a hyperspectral image cube obtained from an airborne, ground based, or subsurface mobile imaging system. This wavelet based image processing technique is an extremely fast numerical method for conducting higher order derivative spectroscopy that includes nonlinear filter windows. Essentially, the wavelet filter scans a measured or synthetic signature in an automated sequential manner in order to develop a library of filtered spectra. The library is utilized in real time to select the optimal channels for direct algorithm application. The unique wavelet based derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SP (R)) approach based upon remote sensing science and radiative transfer processes, unlike other signal processing techniques applied to hyperspectral signatures.

  11. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  12. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
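The flavour of the offline-derivation step can be shown on a toy state whose transition matrix is sparse: expanding F P Fᵀ symbolically restricts the arithmetic to the few nonzero blocks, and symmetry means only one triangle of the covariance actually needs computing. The 6-state model below is an illustrative stand-in for the full SINS/GPS state:

```python
import numpy as np

n, dt = 6, 0.01
A = np.zeros((n, n))
A[0, 3] = A[1, 4] = A[2, 5] = 1.0     # position <- velocity coupling only
F = np.eye(n) + dt * A                # sparse-structured transition matrix
P = np.diag(np.linspace(1.0, 2.0, n)) # current covariance
Q = 1e-4 * np.eye(n)                  # process noise

# Dense reference propagation
P_dense = F @ P @ F.T + Q

# "Offline-derived" form: F P F^T = P + dt*A*P + (dt*A*P)^T + dt^2 A P A^T,
# where every term involves only the few nonzero rows/columns of A,
# so the zero blocks of the dense triple product are never touched.
AP = dt * (A @ P)
P_struct = P + AP + AP.T + (dt * A) @ P @ (dt * A).T + Q

print(np.max(np.abs(P_dense - P_struct)))
```

In the paper this expansion is carried out once, offline, for the actual system matrices, and the surviving terms are then scheduled across parallel computation threads.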

  13. Global localization of 3D anatomical structures by pre-filtered Hough forests and discrete optimization.

    PubMed

    Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg

    2013-12-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450

  14. Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization.

    PubMed

    Mirzaalian, Hengameh; Lee, Tim K; Hamarneh, Ghassan

    2014-12-01

    Hair occlusion is one of the main challenges facing automatic lesion segmentation and feature extraction for skin cancer applications. We propose a novel method for simultaneously enhancing both light and dark hairs with variable widths, from dermoscopic images, without the prior knowledge of the hair color. We measure hair tubularness using a quaternion color curvature filter. We extract optimal hair features (tubularness, scale, and orientation) using Markov random field theory and multilabel optimization. We also develop a novel dual-channel matched filter to enhance hair pixels in the dermoscopic images while suppressing irrelevant skin pixels. We evaluate the hair enhancement capabilities of our method on hair-occluded images generated via our new hair simulation algorithm. Since hair enhancement is an intermediate step in a computer-aided diagnosis system for analyzing dermoscopic images, we validate our method and compare it to other methods by studying its effect on: 1) hair segmentation accuracy; 2) image inpainting quality; and 3) image classification accuracy. The validation results on 40 real clinical dermoscopic images and 94 synthetic data demonstrate that our approach outperforms competing hair enhancement methods. PMID:25312927

  15. An optimal linear filter for the reduction of noise superimposed to the EEG signal.

    PubMed

    Bartoli, F; Cerutti, S

    1983-10-01

    In the present paper a procedure for the reduction of noise superimposed on EEG tracings is described, which makes use of linear digital filtering and identification methods. In particular, an optimal filter (a Kalman filter) has been developed which is intended to capture the disturbances of electromyographic noise on the basis of a priori modelling that considers, as the noise generating mechanism, a series of impulses whose temporal occurrence follows a Poisson distribution. The experimental results refer to EEG tracings recorded from 20 patients in normal resting conditions: the procedure consists of a preprocessing phase (which also uses a low-pass FIR digital filter), followed by the implementation of the identification and the Kalman filter. The performance of the filters is satisfactory also from the clinical standpoint, obtaining a marked reduction of noise without distorting the useful information contained in the signal. Furthermore, with the introduced method the EEG signal generating mechanism is parametrized as AR/ARMA models, thus obtaining an extremely sensitive feature extraction with interesting and not yet completely studied pathophysiological meanings. The procedure may find general application in noise reduction and in the enhancement of the information contained in the wide set of biological signals. PMID:6632838

  16. Design and optimization of stepped austempered ductile iron using characterization techniques

    SciTech Connect

    Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.

    2013-09-15

    Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. The austenitization and conventional austempering times were selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and the high carbon stabilized austenite that had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering, in contrast to conventional austempering, in which evidence of carbides was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during the stepped treatment.

  17. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
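The polyphase decomposition that TD-MRFIR is contrasted with can be sketched directly: a decimate-by-M FIR splits into M subfilters running at the low output rate, so only the outputs that survive downsampling are ever computed. Taps and input below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 4                                # decimation factor
h = rng.normal(size=16)              # prototype FIR taps (illustrative)
x = rng.normal(size=256)             # input signal

# Reference: filter at the full rate, then discard M-1 of every M outputs
y_ref = np.convolve(x, h)[::M]

# Polyphase: y[m] = sum_k (e_k * x_k)[m], with subfilters e_k[r] = h[k + M*r]
# and input phases x_k[m] = x[m*M - k]; each branch runs at the low rate.
y = np.zeros(len(y_ref))
for k in range(M):
    e_k = h[k::M]                                 # k-th polyphase subfilter
    x_k = np.concatenate([np.zeros(k), x])[::M]   # k-th input phase
    b = np.convolve(x_k, e_k)
    y[: len(b)] += b[: len(y)]
print("max polyphase-vs-direct difference:", np.max(np.abs(y - y_ref)))
```

Each TD-MRFIR output thread instead gathers the full finite convolution for one output sample; both views compute exactly the set of needed outputs, differing in how the work is partitioned for hardware.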

  18. Fast automatic estimation of the optimization step size for nonrigid image registration

    NASA Astrophysics Data System (ADS)

    Qiao, Y.; Lelieveldt, B. P. F.; Staring, M.

    2014-03-01

    Image registration is often used in the clinic, for example during radiotherapy and image-guided surgery, but also for general image analysis. Currently, this process is often very slow, yet for intra-operative procedures speed is crucial. For intensity-based image registration, a nonlinear optimization problem must be solved, usually by (stochastic) gradient descent. This procedure relies on a proper setting of a parameter which controls the optimization step size. This parameter is difficult to choose manually, however, since it depends on the input data, the optimization metric and the transformation model. Previously, the Adaptive Stochastic Gradient Descent (ASGD) method was proposed to choose the step size automatically, but it comes at high computational cost. In this paper, we propose a new computationally efficient method to automatically determine the step size by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is then derived. Experiments have been performed on 3D lung CT data (19 patients) using a nonrigid B-spline transformation model. For all tested dissimilarity metrics (mean squared distance, normalized correlation, mutual information, normalized mutual information), we obtained similar accuracy to ASGD. Compared to ASGD, whose estimation time increases progressively with the number of parameters, the estimation time of the proposed method is substantially reduced to an almost constant time, from 40 seconds to no more than 1 second when the number of parameters is 10^5.

  19. On optimal filtering of GPS dual frequency observations without using orbit information

    NASA Technical Reports Server (NTRS)

    Eueler, Hans-Juergen; Goad, Clyde C.

    1991-01-01

    The concept of optimal filtering of observations collected with a dual frequency GPS P-code receiver is investigated in comparison to an approach for C/A-code units. The filter presented here uses only data gathered between one receiver and one satellite. The estimated state vector consists of a one-way pseudorange, ionospheric influence, and ambiguity biases. Neither orbit information nor station information is required. The independently estimated biases are used to form double differences where, in case of a P-code receiver, the wide lane integer ambiguities are usually recovered successfully except when elevation angles are very small. An elevation dependent uncertainty for pseudorange measurements was discovered for different receiver types. An exponential model for the pseudorange uncertainty was used with success in the filter gain computations.

  20. Design of FIR Filters with Discrete Coefficients using Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shuntaro; Suyama, Kenji

    In this paper, we propose a new design method for linear phase FIR (Finite Impulse Response) filters with discrete coefficients. In a hardware implementation, filter coefficients must be represented as discrete values. The design problem of digital filters with discrete coefficients is formulated as an integer programming problem, and an enormous amount of computational time is required to solve it with an exact solver. Recently, ACO (Ant Colony Optimization), a heuristic approach, has been widely used for solving combinatorial problems such as the traveling salesman problem. In our method, we formulate the design problem as a 0-1 integer programming problem and solve it using ACO. Several design examples are shown to demonstrate the effectiveness of the proposed method.

  1. Optimal Design of CSD Coefficient FIR Filters Subject to Number of Nonzero Digits

    NASA Astrophysics Data System (ADS)

    Ozaki, Yuichi; Suyama, Kenji

    In a hardware implementation of FIR (Finite Impulse Response) digital filters, it is desirable to reduce the total number of nonzero digits used to represent the filter coefficients. The design of FIR filters with CSD (Canonic Signed Digit) representation, which is efficient for reducing the number of multiplier units, is often treated as a 0-1 combinatorial problem in which certain difficult constraints prevent linearization. Although many heuristic approaches have been applied to this problem, the solutions obtained in such a manner cannot guarantee optimality. In this paper, we formulate the design problem as a 0-1 mixed integer linear programming problem and solve it using the branch and bound technique, a powerful method for solving integer programming problems. Several design examples are shown to demonstrate the efficient performance of the proposed method.
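For reference, the CSD representation mentioned above encodes an integer with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, which minimizes the number of nonzero digits (and hence adders/subtractors). A standard conversion sketch, unrelated to the paper's MILP formulation:

```python
def to_csd(n):
    """Canonic signed-digit encoding of integer n, least-significant digit
    first, digits in {-1, 0, +1}, no two adjacent nonzero digits."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)        # pick +1 or -1 so the next bit becomes 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_csd(digits):
    """Reconstruct the integer value from LSB-first CSD digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```

For example, 7 (binary 111, three nonzero bits) becomes 8 - 1, i.e. two nonzero CSD digits, which is why CSD coefficients need fewer multiplier resources.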

  2. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method

    PubMed Central

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-01-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8–2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75–150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  3. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method.

    PubMed

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-12-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8-2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75-150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  4. Optimal filtering of gear signals for early damage detection based on the spectral kurtosis

    NASA Astrophysics Data System (ADS)

    Combet, F.; Gelman, L.

    2009-04-01

    In this paper, we propose a methodology for the enhancement of small transients in gear vibration signals in order to detect local tooth faults, such as pitting, at an early stage of damage. We propose to apply the optimal denoising (Wiener) filter based on the spectral kurtosis (SK). The originality is to estimate and apply this filter to the gear residual signal, as classically obtained after removing the mesh harmonics from the time synchronous average (TSA). This presents several advantages over the direct estimation from the raw vibration signal: improved signal/noise ratio, reduced interferences from other stages of the gearbox and easier detection of excited structural resonance(s) within the range of the mesh harmonic components. From the SK-based filtered residual signal, called SK-residual, we define the local power as the smoothed squared envelope, which reflects both the energy and the degree of non-stationarity of the fault-induced transients. The methodology is then applied to an industrial case and shows the possibility of detection of relatively small tooth surface pitting (less than 10%) in a two-stage helical reduction gearbox. The adjustment of the resolution for the SK estimation appears to be optimal when the length of the analysis window is approximately matched with the mesh period of the gear. The proposed approach is also compared to an inverse filtering (blind deconvolution) approach. However, the latter turns out to be more unstable and sensitive to noise and shows a lower degree of separation, quantified by the Fisher criterion, between the estimated diagnostic features in the pitted and unpitted cases. Thus, the proposed optimal filtering methodology based on the SK appears to be well adapted for the early detection of local tooth damage in gears.
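The spectral kurtosis at the heart of the method above can be estimated per frequency bin from short windowed frames; it is near zero for stationary Gaussian noise and rises in bands carrying intermittent transients. A simplified estimator (not the full SK-based Wiener filtering of the residual signal; frame length and window are our assumptions):

```python
import numpy as np

def spectral_kurtosis(x, nperseg=256):
    """Spectral kurtosis per frequency bin from non-overlapping
    Hann-windowed frames: SK(f) = <|X(f)|^4> / <|X(f)|^2>^2 - 2.

    SK is ~0 for stationary Gaussian noise and becomes large in bands
    excited by short fault-induced transients.
    """
    x = np.asarray(x, dtype=float)
    nframes = len(x) // nperseg
    frames = x[:nframes * nperseg].reshape(nframes, nperseg) * np.hanning(nperseg)
    X = np.fft.rfft(frames, axis=1)
    p2 = np.mean(np.abs(X) ** 2, axis=0)         # 2nd-order spectral moment
    p4 = np.mean(np.abs(X) ** 4, axis=0)         # 4th-order spectral moment
    return p4 / (p2 ** 2 + 1e-30) - 2.0
```

In the methodology above, such an SK estimate (computed on the TSA residual) drives the Wiener filter gains, and the window length is matched to the gear mesh period.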

  5. Optimized SU-8 UV-lithographical process for a Ka-band filter fabrication

    NASA Astrophysics Data System (ADS)

    Jin, Peng; Jiang, Kyle; Tan, Jiubin; Lancaster, M. J.

    2005-04-01

    The rapid expansion of millimeter wave communication has brought increasing attention to Ka-band filter fabrication. Described in this paper is a high quality UV-lithographic process for making high aspect ratio parts of a coaxial Ka-band dual mode filter using an ultra-thick SU-8 photoresist layer, which has a potential application in LMDS systems. Due to the strict requirements on the perpendicular geometry of the filter parts, the microfabrication research work has concentrated on modifying the SU-8 UV-lithographic process to improve the vertical angle of the sidewalls and the aspect ratio. Based on a study of the photoactive properties of ultra-thick SU-8 layers, an optimized prebake time has been found that yields the minimum UV absorption by SU-8. The optimization principle has been tested in a series of UV-lithography experiments with different prebake times, and proved effective. An optimized SU-8 UV-lithographic process has been developed for the fabrication of thick layer filter structures. During test fabrication, microstructures with aspect ratios as high as 40 were produced in 1000 µm ultra-thick SU-8 layers using standard UV-lithography equipment, with sidewall angles controlled between 85 and 90 degrees. The high quality SU-8 structures will then be used as positive moulds for producing copper structures by electroforming. The microfabrication process presented in this paper suits the proposed filter well and also shows good potential for volume production of high quality RF devices.

  6. Optimal design of a bank of spatio-temporal filters for EEG signal classification.

    PubMed

    Higashi, Hiroshi; Tanaka, Toshihisa

    2011-01-01

    The spatial weights for electrodes called common spatial pattern (CSP) are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot consider plural brain activities described by different frequency bands and different spatial patterns, such as activities of mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights which extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimization of an objective function which is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery. PMID:22255731
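The CSP baseline that the objective function above extends can be sketched as a whitening step followed by an eigendecomposition of the class covariances. This is the standard construction only, not the proposed spatio-temporal extension; function names are ours:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial pattern filters via whitening + eigendecomposition.

    trials_*: lists of (channels, samples) arrays for the two classes.
    Returns 2*n_pairs spatial filters as rows: the first n_pairs maximize
    class-B variance, the last n_pairs maximize class-A variance.
    """
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)
    P = U / np.sqrt(d)                        # whitening: P.T @ (Ca+Cb) @ P = I
    _, V = np.linalg.eigh(P.T @ Ca @ P)       # eigenvalues ascending
    W = (P @ V).T                             # rows: spatial filters
    sel = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return W[sel]
```

Log-variances of the filtered trials are the usual discriminative features fed to a classifier in MI-BCI pipelines.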

  7. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.

  8. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
    Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.

  9. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.

  10. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter is typically used to isolate the soliton from the residual in order to avoid excessive energy deposition on the sample, which ultimately leads to optical damage. However, since the soliton is not cleanly separated from the residual, a criterion for choosing the optimal filtering wavelength has been lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton overlaps significantly with the residual. This optimization is based on the most efficient signal generation and depends entirely on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.

  11. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated by tips with different scanning radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548
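The monotonic dependence of low-frequency harmonics on tip radius can be reproduced in a toy 1-D simulation: a step profile is "scanned" by a spherical tip (grayscale dilation), and the fundamental harmonic of the measured profile is tracked as the radius grows. The geometry and numbers below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def dilate_step(z, radius):
    """Profile measured when a spherical tip of the given radius scans z:
    grayscale dilation with a circular-arc structuring element
    (periodic boundary for simplicity)."""
    u = np.arange(-radius, radius + 1)
    tip = radius - np.sqrt(radius ** 2 - u.astype(float) ** 2)  # tip cross-section
    n = len(z)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.max(z[(i + u) % n] - tip)
    return out

# One period of a tall step structure, scanned with increasing tip radii;
# track the amplitude of the first (lowest-frequency) harmonic.
n, h = 1024, 40.0
x = np.arange(n)
z = np.where((x >= n // 4) & (x < 3 * n // 4), h, 0.0)
amp1 = [2.0 / n * np.abs(np.fft.rfft(dilate_step(z, r)))[1] for r in (4, 8, 16, 32)]
```

Plotting `amp1` against radius shows the monotonic relationship the characterization method inverts: from a measured low-frequency harmonic amplitude back to the tip radius.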

  12. Optimization of single-step tapering amplitude and energy detuning for high-gain FELs

    NASA Astrophysics Data System (ADS)

    Li, He-Ting; Jia, Qi-Ka

    2015-01-01

    We put forward a method to optimize the single-step tapering amplitude of undulator strength and initial energy tuning of electron beam to maximize the saturation power of high gain free-electron lasers (FELs), based on the physics of longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the linac coherent light source and the Tesla test facility.

  13. Optimization of a Permanent Step Mold Design for Mg Alloy Castings

    NASA Astrophysics Data System (ADS)

    Timelli, Giulio; Capuzzi, Stefano; Bonollo, Franco

    2015-02-01

    The design of a permanent Step mold for the evaluation of the mechanical properties of light alloys has been reviewed. An optimized Step die with a different runner and gating systems is proposed to minimize the amount of casting defects. Numerical simulations have been performed to study the filling and solidification behavior of an AM60B alloy to predict the turbulence of the melt and the microshrinkage formation. The results reveal how a correct design of the trap in the runners prevents the backwave of molten metal, which could eventually reverse out and enter the die cavity. The tapered runner in the optimized die configuration gently leads the molten metal to the ingate, avoiding turbulence and producing a balanced die cavity filling. The connection between the runner system and the die cavity by means of a fan ingate produces a laminar filling in contrast with a finger-type ingate. Solidification defects such as shrinkage-induced microporosity, numerically predicted through a dimensionless version of the Niyama criterion, are considerably reduced in the optimized permanent Step mold.

  14. Novel tools for stepping source brachytherapy treatment planning: Enhanced geometrical optimization and interactive inverse planning

    SciTech Connect

    Dinkla, Anna M. Laarse, Rob van der; Koedooder, Kees; Petra Kok, H.; Wieringen, Niek van; Pieters, Bradley R.; Bel, Arjan

    2015-01-15

    Purpose: Dose optimization for stepping source brachytherapy can nowadays be performed using automated inverse algorithms. Although much quicker than graphical optimization, an experienced treatment planner is required for both methods. With automated inverse algorithms, the procedure to achieve the desired dose distribution is often based on trial-and-error. Methods: A new approach for stepping source prostate brachytherapy treatment planning was developed as a quick and user-friendly alternative. This approach consists of the combined use of two novel tools: Enhanced geometrical optimization (EGO) and interactive inverse planning (IIP). EGO is an extended version of the common geometrical optimization method and is applied to create a dose distribution as homogeneous as possible. With the second tool, IIP, this dose distribution is tailored to a specific patient anatomy by interactively changing the highest and lowest dose on the contours. Results: The combined use of EGO–IIP was evaluated on 24 prostate cancer patients, by having an inexperienced user create treatment plans, compliant to clinical dose objectives. This user was able to create dose plans of 24 patients in an average time of 4.4 min/patient. An experienced treatment planner without extensive training in EGO–IIP also created 24 plans. The resulting dose-volume histogram parameters were comparable to the clinical plans and showed high conformance to clinical standards. Conclusions: Even for an inexperienced user, treatment planning with EGO–IIP for stepping source prostate brachytherapy is feasible as an alternative to current optimization algorithms, offering speed, simplicity for the user, and local control of the dose levels.

  15. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norm of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error, stopband error, transition band error, peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
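The frequency-domain error terms that make up such a QMF design objective can be sketched directly: mean-square deviation from unity gain in the passband, from zero in the stopband, and from 1/sqrt(2) at the quadrature frequency pi/2. The band edges and grid size below are our assumptions, not the paper's settings:

```python
import numpy as np

def qmf_errors(h, wp=0.4 * np.pi, ws=0.6 * np.pi, ngrid=513):
    """Error measures for a 2-channel QMF prototype lowpass h:
    (passband error, stopband error, error at the quadrature frequency),
    mirroring the terms summed in a typical QMF design objective."""
    w = np.linspace(0.0, np.pi, ngrid)
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)  # |H(e^jw)|
    e_pass = np.mean((H[w <= wp] - 1.0) ** 2)       # |H| should be 1
    e_stop = np.mean(H[w >= ws] ** 2)               # |H| should be 0
    Hq = np.abs(np.exp(-1j * (np.pi / 2) * np.arange(len(h))) @ h)
    e_trans = (Hq - 1.0 / np.sqrt(2.0)) ** 2        # target 1/sqrt(2) at pi/2
    return e_pass, e_stop, e_trans
```

Any of the optimizers named above (CS, MCS, WDO, PSO, ABC) can then minimize a weighted sum of these three terms over the coefficient vector h.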

  16. An optimized item-based collaborative filtering recommendation algorithm based on item genre prediction

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jia

    2009-07-01

    With the fast development of the Internet, many systems have emerged in e-commerce applications to support product recommendation. Collaborative filtering is one of the most promising techniques in recommender systems, providing personalized recommendations to users based on their previously expressed preferences in the form of ratings and those of other similar users. In practice, as the user and item scales grow, user-item rating matrices become extremely sparse, and recommender systems utilizing traditional collaborative filtering face serious challenges. To address this issue, this paper presents an approach that computes item genre similarity by mapping each item to a corresponding descriptive genre and computing the similarity between genres, then makes basic predictions according to those similarities to lower the sparsity of the user-item ratings. After that, item-based collaborative filtering steps are taken to generate predictions. Compared with previous methods, the presented collaborative filtering approach, which employs item genre similarity, can alleviate the sparsity issue in recommender systems and improve the accuracy of recommendation.
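The idea of backing off from rating-based item similarity to genre similarity under sparsity can be sketched as follows. This is a minimal illustration under our own assumptions (cosine similarity, a co-rating threshold of 2, top-k weighting), not the paper's exact algorithm:

```python
import numpy as np

def predict(ratings, genres, user, item, k=2):
    """Item-based CF sketch with a genre-similarity fallback.

    ratings: (n_users, n_items) array with np.nan for missing entries.
    genres:  (n_items, n_genres) 0/1 matrix.
    Item-item similarity uses co-ratings when enough exist; otherwise it
    falls back to genre similarity to cope with sparsity.
    """
    def cosine(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0

    sims = []
    for j in range(ratings.shape[1]):
        if j == item or np.isnan(ratings[user, j]):
            continue
        mask = ~np.isnan(ratings[:, item]) & ~np.isnan(ratings[:, j])
        if mask.sum() >= 2:                      # enough co-ratings
            s = cosine(ratings[mask, item], ratings[mask, j])
        else:                                    # sparsity: genre fallback
            s = cosine(genres[item].astype(float), genres[j].astype(float))
        sims.append((s, ratings[user, j]))
    sims.sort(reverse=True)
    top = sims[:k]
    w = sum(s for s, _ in top)
    return sum(s * r for s, r in top) / w if w else np.nan
```

With a sparse ratings column, the prediction is driven by the genre-similar neighbour, which is exactly the alleviation of sparsity described above.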

  17. Implicit application of polynomial filters in a k-step Arnoldi method

    NASA Technical Reports Server (NTRS)

    Sorensen, D. C.

    1990-01-01

    The Arnoldi process is a well known technique for approximating a few eigenvalues and corresponding eigenvectors of a general square matrix. Numerical difficulties such as loss of orthogonality and assessment of the numerical quality of the approximations as well as a potential for unbounded growth in storage have limited the applicability of the method. These issues are addressed by fixing the number of steps in the Arnoldi process at a prescribed value k and then treating the residual vector as a function of the initial Arnoldi vector. This starting vector is then updated through an iterative scheme that is designed to force convergence of the residual to zero. The iterative scheme is shown to be a truncation of the standard implicitly shifted QR-iteration for dense problems and it avoids the need to explicitly restart the Arnoldi sequence. The main emphasis of this paper is on the derivation and analysis of this scheme. However, there are obvious ways to exploit parallelism through the matrix-vector operations that comprise the majority of the work in the algorithm. Preliminary computational results are given for a few problems on some parallel and vector computers.
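The basic k-step Arnoldi process that the implicit restarting above builds on can be sketched compactly: k matrix-vector products with full Gram-Schmidt orthogonalization yield an orthonormal basis V and an upper-Hessenberg H satisfying A V_k = V H, whose small eigenproblem supplies the Ritz approximations. This sketch omits the paper's implicit restart and polynomial filtering:

```python
import numpy as np

def arnoldi(A, v0, k):
    """k-step Arnoldi: returns V (n, k+1) with orthonormal columns and
    upper-Hessenberg H (k+1, k) satisfying A @ V[:, :k] = V @ H.
    Ritz values are the eigenvalues of H[:k, :k]."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                  # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Fixing k and updating the starting vector v0 between restarts, as the paper proposes, keeps storage bounded while steering the residual toward zero.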

  18. Optimal hydrograph separation filter to evaluate transport routines of hydrological models

    NASA Astrophysics Data System (ADS)

    Rimmer, Alon; Hartmann, Andreas

    2014-05-01

    Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is an attempt to minimize the subjective influence that a user of the filter technique exerts on the results by the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances which were selected. The OHS parameters were then used to demonstrate how to develop benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than only using traditional measures like r2 or the Nash-Sutcliffe efficiency.
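A concrete instance of the recursive digital filter family referred to above is the one-parameter Lyne-Hollick form, which splits streamflow into a rapid (quickflow) and a slow (baseflow) component; the filter parameter a is exactly the kind of separation parameter the study estimates objectively from stream chemistry. The value a = 0.925 below is a conventional default, not the paper's optimized value:

```python
import numpy as np

def baseflow_filter(q, a=0.925):
    """One-parameter recursive digital hydrograph-separation filter
    (Lyne-Hollick form). Returns (slow/baseflow, rapid/quickflow)."""
    qf = np.zeros_like(q, dtype=float)           # quickflow component
    for t in range(1, len(q)):
        qf[t] = a * qf[t - 1] + 0.5 * (1 + a) * (q[t] - q[t - 1])
        qf[t] = min(max(qf[t], 0.0), q[t])       # keep both components >= 0
    return q - qf, qf
```

Optimal HS in the sense above amounts to fitting a (and any additional separation parameters) so that the slow fraction best reproduces a chemistry-based mixing signal, rather than choosing it subjectively.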

  19. Optimized particle-mesh Ewald/multiple-time step integration for molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Batcho, Paul F.; Case, David A.; Schlick, Tamar

    2001-09-01

    We develop an efficient multiple time step (MTS) force splitting scheme for biological applications in the AMBER program in the context of the particle-mesh Ewald (PME) algorithm. Our method applies a symmetric Trotter factorization of the Liouville operator based on the position-Verlet scheme to Newtonian and Langevin dynamics. Following a brief review of the MTS and PME algorithms, we discuss performance speedup and the force balancing involved to maximize accuracy, maintain long-time stability, and accelerate computational times. Compared to prior MTS efforts in the context of the AMBER program, advances are possible by optimizing PME parameters for MTS applications and by using the position-Verlet, rather than velocity-Verlet, scheme for the inner loop. Moreover, ideas from the Langevin/MTS algorithm LN are applied to Newtonian formulations here. The algorithm's performance is optimized and tested on water, solvated DNA, and solvated protein systems. We find CPU speedup ratios of over 3 for Newtonian formulations when compared to a 1 fs single-step Verlet algorithm using outer time steps of 6 fs in a three-class splitting scheme; accurate conservation of energies is demonstrated over simulations of length several hundred ps. With modest Langevin forces, we obtain stable trajectories for outer time steps up to 12 fs and corresponding speedup ratios approaching 5. We end by suggesting that modified Ewald formulations, using tailored alternatives to the Gaussian screening functions for the Coulombic terms, may allow larger time steps and thus further speedups for both Newtonian and Langevin protocols; such developments are reported separately.
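The impulse-style MTS force splitting described above can be sketched on a toy system: slow forces kick at the outer time step, while fast forces are integrated with a smaller inner step. For brevity this sketch uses a velocity-Verlet inner loop rather than the position-Verlet scheme the paper advocates, and plain Newtonian dynamics:

```python
import numpy as np

def respa_step(x, v, fast_f, slow_f, dt, n_inner, m=1.0):
    """One outer step of impulse multiple-time-step (r-RESPA-like) integration:
    half-kick with the slow force, n_inner velocity-Verlet substeps with the
    fast force at dt/n_inner, then the closing slow half-kick."""
    v = v + 0.5 * dt * slow_f(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v = v + 0.5 * h * fast_f(x) / m
        x = x + h * v
        v = v + 0.5 * h * fast_f(x) / m
    v = v + 0.5 * dt * slow_f(x) / m
    return x, v
```

In the PME context, the short-range real-space forces play the role of `fast_f` and the smooth reciprocal-space forces `slow_f`; the force balancing discussed above is precisely the choice of that split and of `dt`/`n_inner`.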

  20. A Two-Step Double Filter Method to Extract Open Water Surfaces from Landsat ETM+ Imagery

    NASA Astrophysics Data System (ADS)

    Wang, Haijing; Kinzelbach, Wolfgang

    2010-05-01

    In arid and semi-arid areas, lakes and temporal ponds play a significant role in agriculture and livelihood of local communities as well as in ecology. Monitoring the changes of these open water bodies allows conclusions to be drawn on water use as well as climatic impacts and can assist in the formulation of a sustainable resource management strategy. The simultaneous monitoring of larger numbers of water bodies with respect to their stage and area is feasible with the aid of remote sensing. Here the monitoring of lake surface areas is discussed. Landsat TM and ETM+ images provide a medium resolution of 30 m and offer an easily available data source to monitor the long term changes of water surfaces in arid and semi-arid regions. In the past, great effort was put into developing simple indices to extract water surfaces from satellite images. However, there is a common problem in achieving accurate results with these indices: how to select a threshold value for water pixels without introducing excessive subjective judgment. The threshold value would also have to vary with location, land features and seasons, allowing for inherent uncertainty. A new method was developed using Landsat ETM+ imagery (30 m resolution) to extract open water surfaces. This method uses the Normalized Difference of Vegetation Index (NDVI) as the basis for an objective way of selecting threshold values of the Modified Normalized Difference of Water Index (MNDWI) and Stress Degree Days (SDD), which were used as a combined filter to extract open water surfaces. We chose two study areas to verify the method. One study area is in Northeast China, where bigger lakes, smaller muddy ponds and wetlands are interspersed with agricultural land and salt crusts. The other one is Kafue Flats in Zambia, where seasonal floods of the Zambezi River create seasonal wetlands in addition to the more permanent water ponds and river channels. 
For both sites, DigitalGlobe images of 0.5 m resolution are available, which were taken within a few days of Landsat passing dates and which serve here as ground truth information. On this basis, the new method was compared to other available methods for extracting water pixels. Compared to the other methods, the new method can extract water surface not only from deep lakes/reservoirs and wetlands but also from small mud ponds in alkali flats and irrigation ponds in the fields. For the big and deep lakes, the extracted boundary of the lakes fits the observed boundary accurately. Five test sites in the study area in Northeast China with only shallow water surfaces were chosen and tested. The extracted water surfaces were compared with each site's DigitalGlobe map to determine the accuracy of the method. The comparison shows that the method could extract all completely wet pixels (water area covering 100% of the pixel area) in all test sites. For partially wet pixels (50-100% of pixel area), the method can detect 91% of all pixels. No dry pixels were mistaken for water pixels. Keywords: Remote sensing, Landsat ETM+ imagery, Water Surface, NDVI, MNDWI, and SDD
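
    The two-step idea (derive the water-index threshold from an independent index rather than choosing it by hand) can be sketched as follows. This is a hypothetical simplification: it uses NDVI < 0 to flag provisional water pixels and the minimum MNDWI among those pixels as the threshold, and it omits the SDD filter the paper combines with MNDWI.

```python
import numpy as np

def ndvi(red, nir):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red + 1e-9)

def mndwi(green, mir):
    # Modified Normalized Difference Water Index
    return (green - mir) / (green + mir + 1e-9)

def extract_water(red, green, nir, mir):
    # Step 1: NDVI < 0 flags provisional water pixels (water reflects
    # less NIR than red, unlike vegetation and bare soil).
    provisional = ndvi(red, nir) < 0.0
    m = mndwi(green, mir)
    if not provisional.any():
        return np.zeros_like(m, dtype=bool)
    # Step 2: derive the MNDWI cutoff objectively from the provisional
    # water pixels instead of picking it subjectively.
    thr = m[provisional].min()
    return m >= thr
```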

  1. Spatial filter and feature selection optimization based on EA for multi-channel EEG.

    PubMed

    Yubo Wang; Mohanarangam, Krithikaa; Mallipeddi, Rammohan; Veluvolu, K C

    2015-08-01

    The EEG signals employed for BCI systems are generally band-limited. The band-limited multiple Fourier linear combiner (BMFLC) with Kalman filter was developed to obtain amplitude estimates of the EEG signal in a pre-fixed frequency band in real-time. However, the high dimensionality of the feature vector caused by the application of BMFLC to multi-channel EEG based BCI deteriorates the performance of the classifier. In this work, we apply an evolutionary algorithm (EA) to tackle this problem. The real-valued EA encodes both the spatial filter and the feature selection into its solution and optimizes it with respect to the classification error. Three BMFLC based BCI configurations are proposed. Our results show that the BMFLC-KF with covariance matrix adaptation evolution strategy (CMAES) has the best overall performance. PMID:26736755
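
    The real-valued encoding idea (one vector carrying both spatial-filter weights and a feature-selection mask, scored by classification error) can be sketched with a much simpler (1+1) evolution strategy on toy two-class data; the paper uses CMA-ES and real EEG features, so everything below is an illustrative assumption of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class, multi-channel feature data (one feature per channel);
# only channel 0 carries class information.
n, ch = 200, 4
X0 = rng.normal(size=(n, ch)); X0[:, 0] -= 1.0   # class 0
X1 = rng.normal(size=(n, ch)); X1[:, 0] += 1.0   # class 1

def classification_error(sol):
    # First half of the solution: spatial-filter weights.
    # Second half: feature-selection mask (positive entry = channel kept).
    w = np.where(sol[ch:] > 0, sol[:ch], 0.0)
    s0, s1 = X0 @ w, X1 @ w
    thr = 0.5 * (s0.mean() + s1.mean())
    sign = 1.0 if s1.mean() > s0.mean() else -1.0
    # Error of a simple threshold classifier on the filtered signal
    return 0.5 * (np.mean(sign * (s0 - thr) > 0) +
                  np.mean(sign * (s1 - thr) <= 0))

# (1+1) evolution strategy: mutate, keep the better solution
sol = rng.normal(size=2 * ch)
best_err = classification_error(sol)
for _ in range(300):
    cand = sol + 0.3 * rng.normal(size=2 * ch)
    err = classification_error(cand)
    if err <= best_err:
        sol, best_err = cand, err
```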

  2. Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation

    NASA Astrophysics Data System (ADS)

    Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao

    2015-12-01

    Spatial join among Web Feature Services (WFS) is time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. By recursive partitioning, data skew among sub-areas is exploited to reduce data transmission using spatial semi-joins. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
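
    The core pruning step can be sketched as a bounding-box spatial semi-join that also reports its filtering rate. This is a minimal sketch under our own assumptions (features as point lists, one-sided semi-join); the paper's recursive partitioning and cost model are not reproduced.

```python
def mbr(points):
    """Minimum bounding rectangle of a point list: (xmin, ymin, xmax, ymax)."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def mbr_intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def spatial_semi_join(local_mbrs, remote_features):
    """Keep only remote features whose MBR meets some local MBR; report the
    filtering rate, i.e. the fraction of features pruned before transfer."""
    kept = [f for f in remote_features
            if any(mbr_intersects(mbr(f), m) for m in local_mbrs)]
    rate = 1.0 - len(kept) / len(remote_features)
    return kept, rate
```

    A planner in the spirit of the paper would execute the semi-join for a sub-area only when the estimated filtering rate exceeds a cost threshold.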

  3. Reducing nonlinear waveform distortion in IM/DD systems by optimized receiver filtering

    NASA Astrophysics Data System (ADS)

    Zhou, Y. R.; Watkins, L. R.

    1994-09-01

    Nonlinear waveform distortion caused by the combined effect of fiber chromatic dispersion, self-phase modulation, and amplifier noise limits the attainable performance of high-bit-rate, long-haul optically repeatered systems. Signal processing in the receiver is investigated and found to be effective in reducing the penalty caused by this distortion. Third-order low-pass filters, with and without a tapped-delay-line equalizer, are considered. The pole locations or the tap weights are optimized with respect to a minimum bit error rate criterion which accommodates distortion, pattern effects, decision time, threshold setting and noise contributions. The combination of a third-order Butterworth filter and a five-tap, fractionally spaced equalizer offers more than 4 dB benefit at 4000 km compared with conventional signal processing designs.

  4. Treatment of domestic sewage at low temperature in a two-anaerobic step system followed by a trickling filter.

    PubMed

    Elmitwalli, T A; van Lier, J; Zeeman, G; Lettinga, G

    2003-01-01

    The treatment of domestic sewage at low temperature was studied in a two-anaerobic-step system followed by an aerobic step, consisting of an anaerobic filter (AF) + an anaerobic hybrid (AH) + a polyurethane-foam trickling filter (PTF). The AF+AH system was operated at a hydraulic retention time (HRT) of 3+6 h at a controlled temperature of 13 degrees C, while the PTF was operated without wastewater recirculation at different hydraulic loading rates (HLR) of 41, 15.4 and 2.6 m3/m2/d at ambient temperature (ca. 15-18 degrees C). The AF reactor removed the major part of the total and suspended COD, viz. 46 and 58%, respectively. The AH reactor with granular sludge was efficient in the removal and conversion of the anaerobically biodegradable COD. The AF+AH system removed 63% of total COD and converted 46% of the influent total COD to methane. At an HLR of 41 m3/m2/d, the COD removal was limited in the PTF, while at HLRs of 15.4 and 2.6 m3/m2/d, a high total COD removal of 54-57% was achieved without a significant difference between the two HLRs. The PTF was mainly efficient in the removal of particles (suspended and colloidal COD removal were 75-90% and 75-83%, respectively), which were not removed in the two-step anaerobic system. The overall total COD removal in the AF+AH+PTF system was 85%. Decreasing the HLR from 15.4 to 2.6 m3/m2/d only increased the nitrification efficiency in the PTF from 22% to 60%. Also, at HLRs of 15.4 and 2.6 m3/m2/d, the PTF showed a similar E. coli removal of about 2 log units. Therefore, the effluent of the AF+AH+PTF system can be utilised for restricted irrigation in order to close water and nutrient cycles. Moreover, such a system represents a high-load and low-cost technology, which is a suitable solution for developing countries. PMID:14753537

  5. χ2 testing of optimal filters for gravitational wave signals: An experimental implementation

    NASA Astrophysics Data System (ADS)

    Baggio, L.; Cerdonio, M.; Ortolan, A.; Vedovato, G.; Taffarello, L.; Zendri, J.-P.; Bonaldi, M.; Falferi, P.; Martinucci, V.; Mezzena, R.; Prodi, G. A.; Vitale, S.

    2000-05-01

    We have implemented likelihood testing of the performance of an optimal filter within the online analysis of AURIGA, a sub-Kelvin resonant-bar gravitational wave detector. We demonstrate the effectiveness of this technique in discriminating between impulsive mechanical excitations of the resonant-bar and other spurious excitations. This technique also ensures the accuracy of the estimated parameters such as the signal-to-noise ratio. The efficiency of the technique to deal with nonstationary noise and its application to data from a network of detectors are also discussed.

  6. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920

  7. A one-step screening process for optimal alignment of (soft) colloidal particles

    NASA Astrophysics Data System (ADS)

    Hiltl, Stephanie; Oltmanns, Jens; Böker, Alexander

    2012-11-01

    We developed nanostructured gradient wrinkle surfaces to establish a one-step screening process towards optimal assembly of soft and hard colloidal particles (microgel systems and silica particles). Thereby, we simplify studies on the influence of wrinkle dimensions (wavelength, amplitude) on particle properties and their alignment. In a combinatorial experiment, we optimize particle assembly regarding the ratio of particle diameter vs. wrinkle wavelength and packing density and point out differences between soft and hard particles. The preparation of wrinkle gradients in oxidized top layers on elastic poly(dimethylsiloxane) (PDMS) substrates is based on a controlled wrinkling approach. Partial shielding of the substrate during plasma oxidation is crucial to obtain two-dimensional gradients with amplitudes ranging from 7 to 230 nm and wavelengths between 250 and 900 nm. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr32710d

  8. A simple procedure eliminating multiple optimization steps required in developing multiplex PCR reactions

    SciTech Connect

    Grondin, V.; Roskey, M.; Klinger, K.; Shuber, T.

    1994-09-01

    The PCR technique is one of the most powerful tools in modern molecular genetics and has achieved widespread use in the analysis of genetic diseases. Typically, a region of interest is amplified from genomic DNA or cDNA and examined by various methods of analysis for mutations or polymorphisms. In cases of small genes and transcripts, amplification of single, small regions of DNA is sufficient for analysis. However, when analyzing large genes and transcripts, multiple PCRs may be required to identify the specific mutation or polymorphism of interest. Since it was first shown that PCR can simultaneously amplify multiple loci in the human dystrophin gene, multiplex PCR has been established as a general technique. The properties of multiplex PCR make it a useful tool and preferable to simultaneous uniplex PCR in many instances. However, the steps for developing a multiplex PCR can be laborious, with significant difficulty in achieving equimolar amounts of several different amplicons. We have developed a simple method of primer design that has enabled us to eliminate a number of the standard optimization steps required in developing a multiplex PCR. Sequence-specific oligonucleotide pairs were synthesized for the simultaneous amplification of multiple exons within the CFTR gene. A common non-complementary 20 nucleotide sequence was attached to each primer, thus creating a mixture of primer pairs all containing a universal primer sequence. Multiplex PCR reactions were carried out containing target DNA, a mixture of several chimeric primer pairs and primers complementary to only the universal portion of the chimeric primers. Following optimization of conditions for the universal primer, limited optimization was needed for successful multiplex PCR. In contrast, significant optimization of the PCR conditions was needed when pairs of sequence-specific primers were used together without the universal sequence.

  9. Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2016-04-01

    This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise condition. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed to handle spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate noise covariance. Third, the mean of each cluster is estimated as a weighted mean, with the coherence magnitudes as weights, to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method performs on par with, or better than, the non-local interferometric SAR (NL-InSAR) method.

  10. Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing

    NASA Astrophysics Data System (ADS)

    Cox, Mitchell A.

    2015-10-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several ARM System on Chips in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high data throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
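
    Optimal Filtering reconstructs a pulse amplitude as a weighted sum of digitized samples. A minimal sketch, assuming a known normalized pulse shape and white noise (the pulse-shape values below are invented for illustration; real OF coefficients also account for pedestal and timing):

```python
import numpy as np

# Assumed discrete pulse shape, normalized to unit peak, and white noise
g = np.array([0.0, 0.3, 1.0, 0.7, 0.3, 0.1, 0.0])
R = np.eye(len(g))                 # noise autocorrelation matrix

# Amplitude-only optimal-filter weights: unbiased (a @ g == 1) and
# minimum variance for noise covariance R
Rinv_g = np.linalg.solve(R, g)
a = Rinv_g / (g @ Rinv_g)

def reconstruct_amplitude(samples):
    """Energy estimate as the weighted sum of the ADC samples."""
    return float(a @ samples)
```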

  11. Optimizing the integrated pulsed amperometric multicycle step waveform for the determination of tetracyclines.

    PubMed

    Cai, Yu-e; Cai, Yaqi; Shi, Yali; Mou, Shifen; Lu, Yiqiang

    2006-06-16

    A method of modified integrated pulsed amperometric detection with multicycle step waveform (Multi-IPAD) following high-performance liquid chromatography (HPLC) was applied for the determination of tetracyclines (TCs) including dimethyltetracycline (DMTC), oxytetracycline (OTC) and tetracycline (TC). The key advantages of the Multi-IPAD are its ability to enhance sensitivity and reproducibility and to keep the working electrode clean, through the use of a high-frequency waveform alteration in the integration step and the use of a cleaning potential, which is quite different from the conventional three-step potential waveform. The analyses were carried out using a mobile phase of acetonitrile-water mixture solution (10:90, v/v) containing 1% perchloric acid on a C(18) column at a flow rate of 0.21 mL/min. The IPAD waveform parameters were optimized to maximize the signal-to-noise ratio (S/N) and successfully applied for the sensitive detection of TCs. The detection limits (S/N=3, 20 microL injected) were 0.07 mg/L for DMTC, 0.08 mg/L for OTC and 0.05 mg/L for TC. The peak-height relative standard deviations (RSDs) of every compound for replicate injections (n=15) were below 4.6%. PMID:16359687

  12. Optimal hydrograph separation filter to evaluate transport routines of hydrological models

    NASA Astrophysics Data System (ADS)

    Rimmer, Alon; Hartmann, Andreas

    2014-06-01

    Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results by the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to benchmark the performance of process-based hydro-geochemical (HG) models. The new HG routine can be used to quantify the degree of knowledge that the stream flow time series itself contributes to the HG analysis, using the newly developed benchmark geochemistry efficiency (BGE). Results of the OHS show that the two HS fractions (“rapid” and “slow”) differ according to the HG substances which were selected. The BFImax parameter (long-term ratio of baseflow to total streamflow) ranged from 0.26 to 0.94 for SO4-2 and total suspended solids, TSS, respectively. Then, predictions of SO4-2 transport from a process-based hydrological model were benchmarked with the proposed HG routine, in order to evaluate the significance of the HG routines in the process-based model. This comparison provides a valuable quality test that would not be obvious when using traditional measures like r2 or the NSE (Nash-Sutcliffe efficiency). The process-based model resulted in r2 = 0.65 and NSE = 0.65, while the benchmark routine results were slightly lower with r2 = 0.61 and NSE = 0.58. However, the comparison between the two models showed an obvious advantage for the process-based model, with BGE = 0.15.
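
    A recursive digital filter of the kind discussed above, parameterized by a recession constant and BFImax, can be sketched as a two-parameter (Eckhardt-type) baseflow filter. The parameter values below are illustrative defaults; the OHS approach would instead calibrate them against measured stream hydrochemistry.

```python
def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.8):
    """Two-parameter recursive digital baseflow filter (Eckhardt-type).

    q:       streamflow series
    alpha:   recession constant of the slow ("baseflow") reservoir
    bfi_max: long-term ratio of baseflow to total streamflow
    """
    b = [bfi_max * q[0]]                       # assumed initial baseflow
    for qt in q[1:]:
        bt = ((1.0 - bfi_max) * alpha * b[-1]
              + (1.0 - alpha) * bfi_max * qt) / (1.0 - alpha * bfi_max)
        b.append(min(bt, qt))                  # baseflow cannot exceed streamflow
    return b
```

    The "rapid" fraction is then simply the residual q[i] - b[i]; for constant inflow the filter settles at bfi_max times the streamflow.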

  13. Rod-filter-field optimization of the J-PARC RF-driven H- ion source

    NASA Astrophysics Data System (ADS)

    Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-01

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5 π mm mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H- ion source with an internal antenna, no procedure to optimize it has been established. In order to optimize the RFF and establish such a procedure, the beam performances of the J-PARC source with various types of rod-filter magnets (RFMs) were measured. By changing the RFMs' gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFMs' cross-section (magnetomotive force) was indispensable for easy operation with the temperature (TPE) of the plasma electrode (PE) lower than 70 °C, which minimizes the transverse emittances. A 5% reduction of the RFMs' cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for TPE around 60 °C.

  14. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. 
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life-usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
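
    The tuner-selection idea (choose which subset of health parameters the Kalman filter estimates so as to minimize the theoretical mean-squared estimation error) can be sketched with a toy linear model. Everything below is an invented example, not the NASA technique: 4 health parameters with assumed AR(1) dynamics, 2 sensors, and a brute-force search over sensor-sized tuner subsets scored by the steady-state Riccati covariance plus the open-loop variance of the untuned parameters.

```python
import numpy as np
from itertools import combinations

# Toy under-determined setting: 4 health parameters, only 2 sensors.
a_dyn = 0.95                            # assumed AR(1) parameter dynamics
Q = np.diag([1.0, 0.5, 0.2, 0.1])       # process-noise strength per parameter
C = np.array([[1.0, 0.5, 0.2, 0.1],     # assumed sensor sensitivities
              [0.3, 1.0, 0.4, 0.2]])
R = 0.01 * np.eye(2)                    # measurement-noise covariance

def total_mse(subset):
    """Theoretical steady-state MSE when only `subset` serves as tuners."""
    idx = list(subset)
    Cs, Qs = C[:, idx], Q[np.ix_(idx, idx)]
    P = np.eye(len(idx))
    for _ in range(500):                # iterate the Riccati recursion
        Pp = a_dyn * a_dyn * P + Qs
        K = Pp @ Cs.T @ np.linalg.inv(Cs @ Pp @ Cs.T + R)
        P = (np.eye(len(idx)) - K @ Cs) @ Pp
    # Untuned parameters remain at their open-loop steady-state variance
    open_loop = [Q[i, i] / (1.0 - a_dyn ** 2) for i in range(4) if i not in subset]
    return float(np.trace(P) + sum(open_loop))

best = min(combinations(range(4), 2), key=total_mse)
```

    With these numbers the search picks the two largest-variance parameters as tuners, since leaving them unestimated costs the most.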

  15. Adaptation of a one-step worst-case optimal univariate algorithm of bi-objective Lipschitz optimization to multidimensional problems

    NASA Astrophysics Data System (ADS)

    Žilinskas, Antanas; Žilinskas, Julius

    2015-04-01

    A bi-objective optimization problem with Lipschitz objective functions is considered. An algorithm is developed adapting a univariate one-step optimal algorithm to multidimensional problems. The univariate algorithm considered is a worst-case optimal algorithm for Lipschitz functions. The multidimensional algorithm is based on the branch-and-bound approach and trisection of hyper-rectangles which cover the feasible region. The univariate algorithm is used to compute the Lipschitz bounds for the Pareto front. Some numerical examples are included.
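
    The univariate worst-case Lipschitz bounding used as a building block above can be illustrated with a single-objective Piyavskii-Shubert-style sketch: endpoint evaluations give a lower bound (f(u)+f(v)-L(v-u))/2 on each subinterval, and branch-and-bound refines the interval with the smallest bound. This is our own minimal illustration, not the paper's bi-objective, worst-case-optimal algorithm.

```python
import heapq

def lipschitz_minimize(f, a, b, L, tol=1e-3):
    """Branch-and-bound minimization of a Lipschitz-L function on [a, b]."""
    fa, fb = f(a), f(b)
    best = min(fa, fb)
    heap = [((fa + fb - L * (b - a)) / 2, a, b, fa, fb)]
    while heap:
        lb, x1, x2, f1, f2 = heapq.heappop(heap)
        if lb >= best - tol:               # no interval can beat `best` by > tol
            break
        xm = 0.5 * (x1 + x2) + (f1 - f2) / (2 * L)  # minimizer of lower envelope
        fm = f(xm)
        best = min(best, fm)
        for u, v, fu, fv in ((x1, xm, f1, fm), (xm, x2, fm, f2)):
            nlb = (fu + fv - L * (v - u)) / 2       # child interval lower bound
            if nlb < best - tol:
                heapq.heappush(heap, (nlb, u, v, fu, fv))
    return best
```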

  16. Graphics-processor-unit-based parallelization of optimized baseline wander filtering algorithms for long-term electrocardiography.

    PubMed

    Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf

    2015-06-01

    Long-term electrocardiogram (ECG) often suffers from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated for prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processor unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the PhysioNet database superimposed on autoregressive-modeled, real baseline wander. A Monte-Carlo simulation showed that for low input SBR the moving-median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low SBR situations. Using a signal segment of 64 mega samples that is filtered as an entire unit, wavelet filtering of a seven-day high-resolution ECG is computed within less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, with which computational burden can be strongly reduced. PMID:25675449
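
    The moving-median filter evaluated above can be sketched in a few lines: estimate the baseline as a running median (robust to narrow QRS-like spikes) and subtract it. This is a plain NumPy sketch with an assumed window length, not the paper's GPU-parallelized implementation.

```python
import numpy as np

def remove_baseline_median(ecg, fs, window_s=0.6):
    """Estimate baseline wander as a running median and subtract it.

    fs: sampling rate in Hz; window_s: median window length in seconds.
    """
    k = int(window_s * fs) | 1        # force an odd window length
    pad = k // 2
    x = np.pad(ecg, pad, mode="edge") # edge-replicate so windows stay centered
    baseline = np.array([np.median(x[i:i + k]) for i in range(len(ecg))])
    return ecg - baseline, baseline
```

    Narrow peaks occupy too few samples to move the window median, so the slow drift is tracked while the peaks survive subtraction.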

  17. Optimization of a one-step heat-inducible in vivo mini DNA vector production system.

    PubMed

    Nafissi, Nafiseh; Sum, Chi Hong; Wettig, Shawn; Slavcev, Roderick A

    2014-01-01

    While safer than their viral counterparts, conventional circular covalently closed (CCC) plasmid DNA vectors offer a limited safety profile. They often result in the transfer of unwanted prokaryotic sequences, antibiotic resistance genes, and bacterial origins of replication that may lead to unwanted immunostimulatory responses. Furthermore, such vectors may impart the potential for chromosomal integration, thus potentiating oncogenesis. Linear covalently closed (LCC), bacterial-sequence-free DNA vectors have shown promising clinical improvements in vitro and in vivo. However, the generation of such minivectors has been limited by in vitro enzymatic reactions, hindering their downstream application in clinical trials. We previously characterized an in vivo temperature-inducible expression system, governed by the phage λ pL promoter and regulated by the thermolabile λ CI[Ts]857 repressor, to produce recombinant protelomerase enzymes in E. coli. In this expression system, induction of recombinant protelomerase was achieved by increasing the culture temperature above the 37 °C threshold. Overexpression of protelomerase led to enzymatic reactions acting on genetically engineered multi-target sites called "Super Sequences" that serve to convert conventional CCC plasmid DNA into LCC DNA minivectors. Temperature up-shift, however, can result in intracellular stress responses and may alter plasmid replication rates, both of which may be detrimental to LCC minivector production. We sought to optimize our one-step in vivo DNA minivector production system under various induction schedules in combination with genetic modifications influencing plasmid replication, processing rates, and cellular heat stress responses. We assessed different culture growth techniques, growth media compositions, heat induction scheduling and temperature, induction duration, post-induction temperature, and E. coli genetic background to improve the productivity and scalability of our system, achieving an overall LCC DNA minivector production efficiency of ∼90%. We optimized a robust technology conferring rapid, scalable, one-step in vivo production of LCC DNA minivectors with potential application to gene transfer-mediated therapeutics. PMID:24586704

  18. Optimization of hydrolysis and volatile fatty acids production from sugarcane filter cake: Effects of urea supplementation and sodium hydroxide pretreatment.

    PubMed

    Janke, Leandro; Leite, Athaydes; Batista, Karla; Weinrich, Sören; Sträuber, Heike; Nikolausz, Marcell; Nelles, Michael; Stinner, Walter

    2016-01-01

    Different methods for optimizing the anaerobic digestion (AD) of sugarcane filter cake (FC), with a special focus on volatile fatty acids (VFA) production, were studied. Sodium hydroxide (NaOH) pretreatment at different concentrations was investigated in batch experiments, and the cumulative methane yields were fitted to a dual-pool two-step model to provide an initial assessment of AD. The effects of nitrogen supplementation in the form of urea and of NaOH pretreatment on improved VFA production were also evaluated in a semi-continuously operated reactor. The results indicated that higher NaOH concentrations during pretreatment accelerated the AD process and increased methane production in batch experiments. Nitrogen supplementation resulted in a VFA loss due to methane formation by buffering the pH value at nearly neutral conditions (∼6.7). However, the alkaline pretreatment with 6 g NaOH/100 g FCFM improved both the COD solubilization and the VFA yield by 37%, the latter consisting mainly of n-butyric and acetic acids. PMID:26278994

  19. Reliably Detecting Clinically Important Variants Requires Both Combined Variant Calls and Optimized Filtering Strategies

    PubMed Central

    Field, Matthew A.; Cho, Vicky

    2015-01-01

    A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable, however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high quality ‘genome in a bottle’ reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining the variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or the union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, optimized variant caller filtering, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
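    The intersection/union trade-off described above reduces to simple set operations once each caller's output is normalized to (chrom, pos, ref, alt) tuples. A sketch with made-up variants (coordinates are illustrative, not from the study):

```python
# Each caller's output reduced to a set of (chrom, pos, ref, alt) tuples.
gatk = {("chr7", 140453136, "A", "T"), ("chr12", 25398284, "C", "T")}
freebayes = {("chr7", 140453136, "A", "T"), ("chr1", 115256529, "T", "C")}
varscan = {("chr7", 140453136, "A", "T"), ("chr12", 25398284, "C", "T")}

callsets = [gatk, freebayes, varscan]

# Intersection favours specificity: keep only unanimous calls.
high_confidence = set.intersection(*callsets)

# Union favours sensitivity: keep anything any caller reports.
high_sensitivity = set.union(*callsets)

# A middle ground: majority vote (called by at least 2 of 3 tools).
from collections import Counter
votes = Counter(v for s in callsets for v in s)
majority = {v for v, n in votes.items() if n >= 2}
```

    In a real pipeline the normalization step (left-alignment of indels, multi-allelic splitting) matters as much as the set logic itself.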

  20. Spatio-spectral color filter array design for optimal image recovery.

    PubMed

    Hirakawa, Keigo; Wolfe, Patrick J

    2008-10-01

    In digital imaging applications, data are typically obtained via a spatial subsampling procedure implemented as a color filter array: a physical construction whereby only a single color value is measured at each pixel location. Owing to the growing ubiquity of color imaging and display devices, much recent work has focused on the implications of such arrays for subsequent digital processing, including in particular the canonical demosaicking task of reconstructing a full color image from spatially subsampled and incomplete color data acquired under a particular choice of array pattern. In contrast to the majority of the demosaicking literature, we consider here the problem of color filter array design and its implications for spatial reconstruction quality. We pose this problem formally as one of simultaneously maximizing the spectral radii of luminance and chrominance channels subject to perfect reconstruction, and, after proving sub-optimality of a wide class of existing array patterns, provide a constructive method for its solution that yields robust, new panchromatic designs implementable as subtractive colors. Empirical evaluations on multiple color image test sets support our theoretical results, and indicate the potential of these patterns to increase spatial resolution for fixed sensor size, and to contribute to improved reconstruction fidelity as well as significantly reduced hardware complexity. PMID:18784035
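    The luminance/chrominance view underlying this design problem can be illustrated numerically: for the common Bayer pattern, the difference chrominance channel is modulated onto carriers at the Nyquist frequencies, while luminance sampling stays at baseband. A minimal sketch of that analysis setting (not the paper's optimization):

```python
import numpy as np

N = 8  # tile several periods of the 2x2 Bayer pattern
y, x = np.mgrid[0:N, 0:N]

# Bayer sampling (indicator) functions: G on the checkerboard,
# R and B on the two opposite quincunx sub-lattices.
g = ((x + y) % 2 == 0).astype(float)
r = ((y % 2 == 0) & (x % 2 == 1)).astype(float)
b = ((y % 2 == 1) & (x % 2 == 0)).astype(float)

lum = r + g + b          # every pixel sampled exactly once -> baseband
chrom = r - b            # one chrominance channel of the pattern
spec = np.abs(np.fft.fft2(chrom))

# The chrominance energy sits away from DC: for r - b the carriers lie
# at the horizontal and vertical Nyquist bins, not at (0, 0).
dc = spec[0, 0]
carriers = spec[0, N // 2] + spec[N // 2, 0]
```

    The paper's contribution is, roughly, to choose the pattern so these carriers are pushed as far from the luminance band as possible.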

  1. Spectral filtering optimization of a measuring channel of an x-ray broadband spectrometer

    NASA Astrophysics Data System (ADS)

    Emprin, B.; Troussel, Ph.; Villette, B.; Delmotte, F.

    2013-05-01

    A new channel of an X-ray broadband spectrometer has been developed for the 2 - 4 keV spectral range. It performs spectral filtering using a non-periodic multilayer mirror. The channel is composed of a filter, an aperiodic multilayer mirror and a detector. The optical coating of the mirror has been designed such that the reflectivity is above 8% in almost the entire 2 - 4 keV bandwidth and lower than 2% outside. The mirror is optimized for working at 1.9° grazing incidence. It is coated with a stack of 115 chromium / scandium (Cr / Sc) non-periodic layers, between 0.6 nm and 7.3 nm thick, and a 3 nm thick top SiO2 layer to protect the stack from oxidization. To control thin thicknesses, we produced specific multilayer mirrors which consist of a superposition of two periodic Cr / Sc multilayers with the layer to calibrate in between. The characterizations of the mirror and of the subnanometric layers were made at the "Laboratoire Charles Fabry" (LCF) with a grazing incidence reflectometer working at 8.048 keV (Cu Kα radiation) and at the synchrotron radiation facility SOLEIL on the hard X-ray branch of the "Metrology" beamline. The reflectivity of the mirrors as a function of the photon energy was obtained in the Physikalisch Technische Bundesanstalt (PTB) laboratory at the synchrotron radiation facility Bessy II.
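    The stated design targets (reflectivity above 8% inside 2 - 4 keV, below 2% outside) translate directly into a programmatic check on a measured reflectivity curve; the curve below is illustrative only:

```python
import numpy as np

def check_channel_spec(energy_kev, reflectivity,
                       band=(2.0, 4.0), r_min=0.08, r_max_out=0.02):
    """Return (in_band_ok, out_of_band_ok) for a reflectivity curve.

    energy_kev, reflectivity: 1-D arrays from a reflectometry scan.
    band: spectral band that must stay above r_min; outside it the
    response must stay below r_max_out (thresholds from the abstract).
    """
    e = np.asarray(energy_kev, float)
    rf = np.asarray(reflectivity, float)
    inside = (e >= band[0]) & (e <= band[1])
    in_band_ok = bool(np.all(rf[inside] >= r_min))
    out_ok = bool(np.all(rf[~inside] < r_max_out))
    return in_band_ok, out_ok

# Illustrative curve: flat 10% response in band, 1% outside.
e = np.linspace(1.0, 5.0, 401)
rcurve = np.where((e >= 2.0) & (e <= 4.0), 0.10, 0.01)
ok_in, ok_out = check_channel_spec(e, rcurve)
```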

  2. Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging

    NASA Astrophysics Data System (ADS)

    Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.

    2012-04-01

    Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employed a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation studies and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained using the algorithm, which is promising for our ongoing application to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.
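    The optimal deblurring filters themselves are not given in this abstract, but the backprojection underlying DTS can be sketched as plain shift-and-add: for each reconstruction plane, shift every projection by that plane's parallax and average, so in-plane structures reinforce while off-plane structures blur. Toy data and hypothetical shifts:

```python
import numpy as np

def shift_and_add(projections, shifts_per_plane):
    """Minimal tomosynthesis backprojection (no deblurring filter).

    projections: list of 2-D arrays, one per tube angle.
    shifts_per_plane: {plane_id: [per-projection integer column shifts]}
    Returns {plane_id: reconstructed 2-D array}.
    """
    planes = {}
    for pid, shifts in shifts_per_plane.items():
        acc = np.zeros_like(projections[0], dtype=float)
        for proj, s in zip(projections, shifts):
            acc += np.roll(proj, s, axis=1)  # integer shift along x
        planes[pid] = acc / len(projections)
    return planes

# Toy example: a point object whose parallax is +/-1 px between views.
p0 = np.zeros((5, 5)); p0[2, 1] = 1.0   # view at -angle
p1 = np.zeros((5, 5)); p1[2, 2] = 1.0   # central view
p2 = np.zeros((5, 5)); p2[2, 3] = 1.0   # view at +angle
recon = shift_and_add([p0, p1, p2], {"in_focus": [1, 0, -1],
                                     "off_focus": [0, 0, 0]})
```

    The BPF approach of the paper would follow this step with a deblurring filter to suppress the residual off-plane energy.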

  3. An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning

    2015-08-01

    An optimized strain demodulation method based on a piezoelectric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated, and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the test values after compensation and the theoretical values is less than 1 pm/με.
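    The AC-DC compensation scheme is not detailed in the abstract; a generic way to handle a nonlinear PZT displacement-voltage response is to replace the constant nm/V assumption with a fitted low-order polynomial calibration. A sketch with an invented quadratic response:

```python
import numpy as np

# Hypothetical calibration sweep: wavelength (nm) vs. PZT drive voltage;
# the assumed true response is mildly nonlinear (quadratic term).
v = np.linspace(0.0, 10.0, 50)
wavelength = 1550.0 + 0.40 * v - 0.008 * v**2  # illustrative response

# Fit a low-order polynomial mapping voltage -> wavelength, so each
# detected peak voltage yields a linearized wavelength estimate instead
# of assuming a constant nm/V slope.
coeffs = np.polyfit(v, wavelength, deg=2)
predict = np.poly1d(coeffs)

# Compare the naive constant-slope assumption with the calibrated model.
naive = 1550.0 + 0.40 * 8.0
calibrated = predict(8.0)
true_val = 1550.0 + 0.40 * 8.0 - 0.008 * 64.0
```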

  4. Optimal design of bandpass filters to reduce emission from photovoltaic cells under monochromatic illumination

    NASA Astrophysics Data System (ADS)

    Takeda, Yasuhiko; Iizuka, Hideo; Ito, Tadashi; Mizuno, Shintaro; Hasegawa, Kazuo; Ichikawa, Tadashi; Ito, Hiroshi; Kajino, Tsutomu; Higuchi, Kazuo; Ichiki, Akihisa; Motohiro, Tomoyoshi

    2015-08-01

    We have theoretically investigated photovoltaic cells used under the illumination condition of monochromatic light incident from a particular direction, which is very different from that for solar cells under natural sunlight, using detailed balance modeling. A multilayer bandpass filter formed on the surface of the cell has been found to trap the light generated by radiative recombination inside the cell, reduce emission from the cell, and consequently improve conversion efficiency. The light trapping mechanism is interpreted in terms of a one-dimensional photonic crystal, and the design guide to optimize the multilayer structure has been clarified. For obliquely incident illumination, as well as normal incidence, a significant light trapping effect has been achieved, although the emission patterns are extremely different from each other depending on the incident directions.

  5. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  6. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.
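    The explicit optimal-filter solutions this report works out belong to the same family as the classical Wiener filter: for a signal and additive noise with known power spectra, the non-causal minimum-mean-square-error filter is H(f) = S(f) / (S(f) + N(f)), and the residual error density is S*N / (S + N). A numerical sketch with illustrative spectra:

```python
import numpy as np

f = np.linspace(0.01, 10.0, 1000)        # Hz
S = 1.0 / (1.0 + (f / 0.5) ** 2)         # signal PSD: low-pass target motion
N = np.full_like(f, 0.05)                # contaminating noise PSD: white

# Non-causal Wiener filter and the minimum theoretical error it achieves
H = S / (S + N)
mse_density = S * N / (S + N)
min_mse = np.sum(mse_density) * (f[1] - f[0])
```

    Repeating this with different noise levels is the kind of "effect of target noise on minimum theoretical error" assessment the report describes.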

  7. Identification of CpG islands in DNA sequences using statistically optimal null filters

    PubMed Central

    2012-01-01

    CpG dinucleotide clusters, also referred to as CpG islands (CGIs), are usually located in the promoter regions of genes in a deoxyribonucleic acid (DNA) sequence. CGIs play a crucial role in gene expression and cell differentiation; as such, they are normally used as gene markers. The earlier CGI identification methods used the rich CpG dinucleotide content in CGIs as a characteristic measure to identify the locations of CGIs. The fact that the probability of nucleotide G following nucleotide C is greater in a CGI than in a non-CGI is employed by some of the recent methods. These methods use the difference in transition probabilities between subsequent nucleotides to distinguish a CGI from a non-CGI. These transition probabilities vary with the data being analyzed, and several of them have been reported in the literature, sometimes leading to contradictory results. In this article, we propose a new and efficient scheme for identification of CGIs using statistically optimal null filters. We formulate a new CGI identification characteristic to reliably and efficiently identify CGIs in a given DNA sequence which is devoid of any ambiguities. Our proposed scheme combines maximum signal-to-noise ratio and least squares optimization criteria to estimate the CGI identification characteristic in the DNA sequence. The proposed scheme is tested on a number of DNA sequences taken from human chromosomes 21 and 22, and proved to be highly reliable as well as efficient in identifying the CGIs. PMID:22931396
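    The statistically optimal null filter scheme is not reproduced here; for contrast, the classical windowed criteria of Gardiner-Garden and Frommer (GC fraction above 0.5, observed/expected CpG above 0.6) that the newer methods improve upon can be sketched in a few lines:

```python
def cpg_window_stats(seq, start, size=200):
    """GC fraction and observed/expected CpG ratio for one window."""
    w = seq[start:start + size].upper()
    c, g = w.count("C"), w.count("G")
    gc_frac = (c + g) / len(w)
    expected = c * g / len(w)            # CpG count if C, G independent
    obs_exp = w.count("CG") / expected if expected else 0.0
    return gc_frac, obs_exp

def is_cgi_window(seq, start, size=200):
    """Classical Gardiner-Garden & Frommer thresholds (not the
    null-filter scheme of the article)."""
    gc_frac, obs_exp = cpg_window_stats(seq, start, size)
    return gc_frac > 0.5 and obs_exp > 0.6

rich = "CG" * 100    # maximally CpG-dense 200 bp toy sequence
poor = "AT" * 100    # no C or G at all
```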

  8. Selecting the optimal anti-aliasing filter for multichannel biosignal acquisition intended for inter-signal phase shift analysis.

    PubMed

    Keresnyei, Róbert; Megyeri, Péter; Zidarics, Zoltán; Hejjel, László

    2015-01-01

    The availability of microcomputer-based portable devices facilitates the high-volume multichannel biosignal acquisition and the analysis of their instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers or artificially synthesised series was passed through 100-80-60-40-20 Hz 2-4-6-8th order Bessel and Butterworth filters digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical model of the impulse- and step responses of the filters. The investigated filters have as diverse a signal delay as 2-46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. Its magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. As a conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low pass filtering are recommended for biosignal acquisition intended for inter-signal phase shift analysis. PMID:25514627
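    The kind of delay comparison reported above can be reproduced with scipy: design the digital Bessel and Butterworth low-passes by bilinear transformation, then average the group delay over the band of interest. The orders, corner frequencies, and sampling rate here are illustrative, not the study's exact grid:

```python
import numpy as np
from scipy import signal

fs = 1000.0  # Hz, illustrative sampling rate

def passband_delay_ms(design, order, fc):
    """Group delay (ms) averaged over 0..0.5*fc for a digital low-pass
    obtained by bilinear transformation, as in the study."""
    b, a = design(order, fc, btype="low", fs=fs)
    w, gd = signal.group_delay((b, a), w=512, fs=fs)  # gd in samples
    band = w <= 0.5 * fc
    return float(np.mean(gd[band])) / fs * 1000.0

bessel_4_40 = passband_delay_ms(signal.bessel, 4, 40.0)
bessel_4_100 = passband_delay_ms(signal.bessel, 4, 100.0)
butter_8_40 = passband_delay_ms(signal.butter, 8, 40.0)
```

    The ordering (lower order and higher corner frequency give less delay) matches the paper's recommendation.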

  9. Geometric optimization of a step bearing for a hydrodynamically levitated centrifugal blood pump for the reduction of hemolysis.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2013-09-01

    A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis. PMID:23834855
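    The numerical analysis based on lubrication theory can be sketched in 1-D with the classical Rayleigh step bearing: flow continuity across the step fixes the step pressure, and the triangular pressure profile integrates to the load. The geometry and blood-like viscosity below are illustrative, not the pump's actual values:

```python
import numpy as np

def step_bearing_force(b1_frac, h1=60e-6, h2=30e-6, B=5e-3, L=5e-3,
                       mu=3.5e-3, U=2.0):
    """Load capacity (N) of a Rayleigh step bearing from 1-D lubrication
    theory. b1_frac: step (deep) region as a fraction of pad length B."""
    B1, B2 = b1_frac * B, (1.0 - b1_frac) * B
    # Pressure at the step from flow continuity between the two regions
    p_step = 6.0 * mu * U * (h1 - h2) * B1 * B2 / (B2 * h1**3 + B1 * h2**3)
    # Triangular pressure profile -> load = mean pressure * pad area
    return 0.5 * p_step * B * L

fracs = np.linspace(0.05, 0.95, 91)
forces = np.array([step_bearing_force(fr) for fr in fracs])
best_frac = float(fracs[np.argmax(forces)])
```

    With this film-thickness ratio (h1/h2 = 2) the scan peaks near a step fraction of 0.74, the same ballpark as the roughly 70% optimum the authors report.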

  10. Design and evaluation of three-level composite filters obtained by optimizing a compromise average performance measure

    NASA Astrophysics Data System (ADS)

    Hendrix, Charles D.; Vijaya Kumar, B. V. K.

    1994-06-01

    Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.
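    The CAPM optimization itself is not given in this abstract; the generic final step of such designs, mapping a real-valued composite filter onto the three transmittance levels with a region-of-support threshold, can be sketched as:

```python
import numpy as np

def three_level(filter_coeffs, ros_fraction=0.5):
    """Quantize a real-valued correlation filter to {-1, 0, +1}.

    Coefficients whose magnitude falls below a percentile threshold are
    zeroed (outside the region of support); the rest keep their sign.
    This is a generic ternarization sketch, not the CAPM algorithm.
    """
    h = np.asarray(filter_coeffs, dtype=float)
    thresh = np.quantile(np.abs(h), ros_fraction)
    out = np.sign(h)
    out[np.abs(h) < thresh] = 0.0
    return out

rng = np.random.default_rng(1)
h = rng.normal(size=(8, 8))
h3 = three_level(h, ros_fraction=0.5)
```

    The ROS fraction is the knob the abstract alludes to: a larger zero region trades light efficiency for control over noise tolerance and peak sharpness.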

  11. Optimization of a Multi-Step Procedure for Isolation of Chicken Bone Collagen

    PubMed Central

    2015-01-01

    Chicken bone is not adequately utilized despite its high nutritional value and protein content. Although not a common raw material, chicken bone can be used in many different ways besides manufacturing of collagen products. In this study, a multi-step procedure was optimized to isolate chicken bone collagen with higher yield and quality for the manufacture of collagen products. The chemical composition of chicken bone was 2.9% nitrogen, corresponding to about 15.6% protein, 9.5% fat, 14.7% mineral and 57.5% moisture. The aim was to minimize protein loss while separating as much as possible of the visible impurities, non-collagen proteins, minerals and fats. Treatments under optimum conditions removed 57.1% of fats and 87.5% of minerals with respect to their initial concentrations. Meanwhile, 18.6% of protein and 14.9% of hydroxyproline were lost, suggesting that a selective separation of non-collagen components and isolation of collagen were achieved. A significant part of the impurities was selectively removed and over 80% of the original collagen was preserved during the treatments. PMID:26761863

  12. Optimization of leaf margins for lung stereotactic body radiotherapy using a flattening filter-free beam

    SciTech Connect

    Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi

    2015-05-15

    Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). Forty grays per four fractions were prescribed as a PTV D95. For PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased (“V” shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. 
The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared to −0.9 ± 0.6, −1.1 ± 0.8, and −2.2 ± 1.3 mm, respectively, for 6 MV FF. With the heart inside the radiation field, the mean heart dose showed a V-shaped relationship with leaf margins. The optimal leaf margins were −1.0 ± 0.6 mm for both beams. Dmax to the spinal cord showed no clear trend for changes in leaf margin. Conclusions: The differences in doses to OARs between FFF and FF beams were negligible. Conformity index, modified GI, MLD, lung V20 Gy, lung V5 Gy, and mean heart dose showed a V-shaped relationship with leaf margins. There were no significant differences in optimal leaf margins to minimize these parameters between both FFF and FF beams. The authors’ results suggest that a leaf margin of −1 mm achieves high conformity and minimizes doses to OARs for both FFF and FF beams.

  13. Optimization of medium components for increased production of C-phycocyanin from Phormidium ceylanicum and its purification by single step process.

    PubMed

    Singh, Niraj Kumar; Parmar, Asha; Madamwar, Datta

    2009-02-01

    Phycocyanin is a major protein produced by cyanobacteria, but very few phycocyanin-producing strains have been reported. In the present study, response surface methodology (RSM) involving a central composite design for four factors was successfully employed to optimize medium components for increased production of phycocyanin from Phormidium ceylanicum. The production of phycocyanin and the interactions between sodium nitrate, calcium chloride, trace metal mix and citric acid stock were investigated and modeled. Under optimized conditions, P. ceylanicum gave a 2.3-fold increase in phycocyanin production in comparison to the commonly used BG 11 medium in 32 days. We have demonstrated the extraction, purification and characterization of C-phycocyanin using a novel method based on filtration and single-step chromatography. The protein was extracted by repeated freeze-thaw cycles, and the crude extract was filtered and concentrated in a stirred ultrafiltration cell (UFC). The UFC concentrate was then subjected to a single ion exchange chromatographic step. A purity ratio of 4.15 was achieved from a starting value of 1.05. The recovery efficiency of C-phycocyanin from crude extract was 63.50%. The purity was checked by electrophoresis and UV-Vis spectroscopy. PMID:18954974
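    The purity figures quoted above follow the standard C-phycocyanin purity index A615/A280 (a ratio above 4 is commonly taken as analytical grade); the absorbance values below are back-calculated for illustration only:

```python
def purity_ratio(a615, a280):
    """C-phycocyanin purity index: absorbance at 615 nm over 280 nm."""
    return a615 / a280

def recovery_percent(cpc_pure_mg, cpc_crude_mg):
    """C-phycocyanin recovered relative to the crude extract."""
    return 100.0 * cpc_pure_mg / cpc_crude_mg

# Values chosen to reproduce the abstract's figures, illustration only
crude = purity_ratio(0.84, 0.80)      # ~1.05 before purification
final = purity_ratio(3.32, 0.80)      # ~4.15 after ion exchange
recovery = recovery_percent(63.5, 100.0)
```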

  14. Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters

    SciTech Connect

    Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk

    2011-04-15

    Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for FFF beams applicable to current treatment planning systems are needed. In this work, a multisource model for the FFF beam was designed and the involved model parameters were optimized. Methods: The model is based on a previous three source model proposed by Yang et al. [''A three-source model for the calculation of head scatter factors,'' Med. Phys. 29, 2024-2033 (2002)]. An off axis ratio (OAR) of photon fluence was introduced to the primary source term to generate cone shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40x40 cm{sup 2} field size with the same optimization technique, but a new method to acquire gradient terms for OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3x3 to 40x40 cm{sup 2} field sizes at 6 and 10 MV from a TrueBeam STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements from a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% at 6 and 10 MV beams, except for some low dose regions for larger field sizes. A slight overestimation was seen in the lower penumbra region near the field edge for the large field sizes by 1%-4%. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreements between measured and calculated dose distributions. 
The model is easily applicable to any other linear accelerator using FFF beams, as the required data include only the measured PDD, dose profiles, and output factors for various field sizes, which are easily acquired during a conventional beam commissioning process.
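    The gamma-index evaluation used above is a standard, well-defined comparison; a minimal 1-D global gamma (3%/3 mm) sketch on illustrative profiles:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol_mm=3.0):
    """Global 1-D gamma index between two dose profiles sampled at
    positions x (mm). A reference point passes when gamma <= 1."""
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    norm = dose_tol * ref.max()          # global dose criterion
    gammas = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (ev - di) / norm            # dose-difference term
        dx = (x - xi) / dist_tol_mm      # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.linspace(-10.0, 10.0, 201)            # mm
ref = np.exp(-x**2 / 30.0)                   # illustrative profile
ev = 1.01 * np.exp(-(x - 0.5)**2 / 30.0)     # 1% scaled, 0.5 mm shifted
g = gamma_1d(ref, ev, x)
pass_rate = 100.0 * float(np.mean(g <= 1.0))
```

    A 2-D version over the array-detector grid is the direct analogue of the paper's >98% passing-rate evaluation.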

  15. Optimal ensemble size of ensemble Kalman filter in sequential soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Yin, Jifu; Zhan, Xiwu; Zheng, Youfei; Hain, Christopher R.; Liu, Jicheng; Fang, Li

    2015-08-01

    The ensemble Kalman filter (EnKF) has been extensively applied in sequential soil moisture data assimilation to improve land surface model performance and in turn weather forecast capability. Usually, the ensemble size of the EnKF is determined with limited sensitivity experiments, so the optimal ensemble size may never have been reached. In this work, based on a series of mathematical derivations, we demonstrate that the maximum efficiency of the EnKF for assimilating observations into the models could be reached when the ensemble size is set to 12. Simulation experiments are designed in this study under ensemble size cases 2, 5, 12, 30, 50, 100, and 300 to support the mathematical derivations. All the simulations are conducted from 1 June to 30 September 2012 over the southeast USA (from 90°W, 30°N to 80°W, 40°N) at 25 km resolution. We found that the simulations are perfectly consistent with the mathematical derivation. This optimal ensemble size may have theoretical implications on the implementation of the EnKF in other sequential data assimilation problems.
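    The analysis step whose ensemble size is being optimized can be sketched as a stochastic EnKF update with perturbed observations; the state dimensions, observation operator, and error levels below are illustrative:

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_ens) forecast ensemble
    obs: (n_obs,) observation vector; H: (n_obs, n_state) operator
    obs_var: scalar observation-error variance
    """
    n_state, n_ens = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_ens - 1)                     # sample covariance
    R = obs_var * np.eye(len(obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var),
                                          (len(obs), n_ens))
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(0)
n_ens = 12                       # the ensemble size found optimal above
truth = np.array([0.30, 0.25])   # e.g. soil moisture in two layers
ens = truth[:, None] + rng.normal(0.0, 0.05, (2, n_ens))
H = np.array([[1.0, 0.0]])       # only the surface layer is observed
obs = np.array([0.32])
analysis = enkf_update(ens, obs, H, 0.01**2, rng)
```

    Repeating this with n_ens swept over the paper's cases (2 to 300) is the shape of the sensitivity experiment the authors replace with a derivation.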

  16. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
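    The SVD-based reduction described above can be sketched directly: keep the k dominant directions of a (hypothetical) health-parameter influence matrix, so the low-dimensional tuning vector reproduces the full set's effect as closely as possible in the least-squares (Eckart-Young) sense:

```python
import numpy as np

# Hypothetical influence matrix: rows = outputs (measured + thrust),
# columns = health parameters; more health parameters than the sensors
# can support estimating individually.
rng = np.random.default_rng(42)
G = rng.normal(size=(6, 10))     # 6 outputs, 10 health parameters

# Truncated SVD: keep the k dominant directions so a k-dimensional
# tuning vector (estimable by a Kalman filter) captures as much of the
# health-parameter effect as possible.
k = 3
U, s, Vt = np.linalg.svd(G, full_matrices=False)
V_k = Vt[:k].T                   # (10, k): tuning params -> health space
G_reduced = U[:, :k] * s[:k]     # effect of each tuning parameter

# The rank-k reconstruction error equals the norm of the discarded
# singular values (Eckart-Young theorem).
approx = G_reduced @ V_k.T
frob_err = np.linalg.norm(G - approx)
tail = np.sqrt(np.sum(s[k:] ** 2))
```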

  17. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  19. Autostereoscopic display with 60 ray directions using LCD with optimized color filter layout

    NASA Astrophysics Data System (ADS)

    Koike, Takafumi; Oikawa, Michio; Utsugi, Kei; Kobayashi, Miho; Yamasaki, Masami

    2007-02-01

    We developed a mobile-size integral videography (IV) display that reproduces 60 ray directions. IV is an autostereoscopic video image technique based on integral photography (IP). The maximal spatial frequency (MSF) and the number of rays appear to be the most important factors in producing realistic autostereoscopic images. Lens pitch usually determines the MSF of IV displays. The lens pitch and pixel density of the 2-D display determine the number of rays it reproduces. There is a trade-off between the lens pitch and the pixel density. The shape of an elemental image determines the shape of the area of view. We developed an IV display based on the above relationships. The IV display consists of a 5-inch 900-dpi liquid crystal display (LCD) and a microlens array. The IV display has 60 ray directions with 4 vertical rays and a maximum of 18 horizontal rays. We optimized the color filter on the LCD to reproduce 60 rays. The resolution of the display is 256x192, and the viewing angle is 30 degrees. These parameters are sufficient for mobile game use. Users can interact with the IV display by using a control pad.

  20. Optimization of synthesis and peptization steps to obtain iron oxide nanoparticles with high energy dissipation rates

    NASA Astrophysics Data System (ADS)

    Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos

    2015-11-01

    Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, which are preferred for their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size-selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP=2.3 nH m2 kg-1) were obtained, which represents one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP=1.6 nH m2 kg-1) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.
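    The intrinsic loss power (ILP) values quoted above follow from the SAR and the field conditions via ILP = SAR / (H² f), which normalizes out the field amplitude and frequency. A minimal check of that conversion, using the abstract's own numbers:

```python
def ilp(sar_w_per_g_fe, field_a_per_m, freq_hz):
    """Intrinsic loss power ILP = SAR / (H^2 * f), returned in nH m^2 / kg.
    SAR is given per gram of Fe, as in the abstract, so convert to W/kg."""
    sar_w_per_kg = sar_w_per_g_fe * 1e3            # W/gFe -> W/kgFe
    return sar_w_per_kg / (field_a_per_m ** 2 * freq_hz) * 1e9  # H -> nH

# Field conditions from the abstract: 36.5 kA/m at 341 kHz.
ilp_peptized = ilp(1048, 36.5e3, 341e3)            # particles in suspension
ilp_solid = ilp(719, 36.5e3, 341e3)                # immobilized in a solid matrix
```

    The computed values round to the 2.3 and 1.6 nH m² kg⁻¹ reported in the abstract.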

  1. Optimization by decomposition: A step from hierarchic to non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

  2. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, using either a single-stage mismatched filter or a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter length, the greater the number of multipliers, and hence the greater the logic resources used in the FPGAs; this often becomes a design challenge for system-on-chip (SoC) requirements. This multiplier requirement can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the logic used in the FPGA for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between iterations and yield different clusterings of the weights; it may even happen that a smaller number of multipliers and a shorter filter provide a better PSR.
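    The tap-weight clustering step can be sketched as a one-dimensional Lloyd (k-means) iteration that replaces each tap with its cluster centroid, so only k distinct multiplier values remain. The 32-tap weight vector below is random illustrative data, not a filter from the paper:

```python
import numpy as np

def cluster_taps(w, k, iters=50, seed=0):
    """Quantize FIR tap weights to k centroids with 1-D Lloyd (k-means).
    Returns the quantized weights; only k distinct multiplier values remain."""
    rng = np.random.default_rng(seed)
    cent = rng.choice(w, size=k, replace=False)    # init centroids from data
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - cent[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):                   # skip empty clusters
                cent[j] = w[idx == j].mean()
    idx = np.argmin(np.abs(w[:, None] - cent[None, :]), axis=1)
    return cent[idx]

# Hypothetical 32-tap mismatched-filter weights (illustrative only).
rng = np.random.default_rng(1)
w = rng.standard_normal(32)
wq = cluster_taps(w, k=6)
```

    In a hardware mapping, each of the six centroids needs one multiplier shared across its taps, instead of one multiplier per tap.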

  3. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
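    The PSO step used to tune the Gabor parameters can be sketched with a minimal global-best swarm. The two-parameter quadratic objective below is a toy stand-in for the recognition score on a dataset; the swarm size, inertia, and acceleration constants are assumed values, not the paper's settings:

```python
import numpy as np

def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))              # particle positions
    v = np.zeros((n, dim))                         # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy objective standing in for an iris-recognition score: recover the
# Gabor parameters (e.g., center frequency, bandwidth) closest to a target.
target = np.array([0.25, 1.5])
g, val = pso(lambda p: np.sum((p - target) ** 2), dim=2)
```

    In the paper's setting, the objective would instead evaluate recognition performance of the Gabor filter bank on the training portion of each dataset, and the BPSO variant would use binary position updates.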

  4. The Touro 12-Step: A Systematic Guide to Optimizing Survey Research with Online Discussion Boards

    PubMed Central

    Ip, Eric J; Tenerowicz, Michael J; Perry, Paul J

    2010-01-01

    The Internet, in particular discussion boards, can provide a unique opportunity for recruiting participants in online research surveys. Despite its outreach potential, there are significant barriers which can limit its success. Trust, participation, and visibility issues can all hinder the recruitment process; the Touro 12-Step was developed to address these potential hurdles. By following this step-by-step approach, researchers will be able to minimize these pitfalls and maximize their recruitment potential via online discussion boards. PMID:20507843

  5. Improvement of hemocompatibility for hydrodynamic levitation centrifugal pump by optimizing step bearings.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-01-01

    We have developed a hydrodynamic levitation centrifugal blood pump with a semi-open impeller for mechanical circulatory assist. The impeller levitates on original hydrodynamic bearings without any complicated control or sensors. However, the narrow bearing gap has the potential to cause hemolysis. The purpose of this study is to investigate the geometric configuration of the hydrodynamic step bearing that minimizes hemolysis by expanding the bearing gap. First, we performed a numerical analysis of the step bearing based on the Reynolds equation and measured the actual hydrodynamic force of the step bearing. Second, a bearing gap measurement test and a hemolysis test were performed on blood pumps whose step lengths were 0%, 33%, and 67% of the vane length, respectively. In the numerical analysis, the hydrodynamic force was largest when the step length was around 70%. In the evaluation tests, the blood pump with the 67% step obtained the maximum bearing gap and improved hemolysis compared to those with the 0% and 33% steps. We confirmed that the numerical analysis of the step bearing worked effectively, and that the 67% step was the most suitable configuration to minimize hemolysis, because it realized the largest bearing gap. PMID:22254562
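    The Reynolds-equation analysis behind the ~70% optimum can be illustrated with the classical one-dimensional Rayleigh step bearing, whose peak (step) pressure has a closed form from flow continuity between the two film regions. The film-height ratio below is an assumed value for illustration, not a parameter of the pump:

```python
import numpy as np

def step_pressure(a, h_ratio=2.0):
    """Dimensionless peak pressure of a 1-D Rayleigh step bearing
    (incompressible Reynolds equation), in units of 6*mu*U*L/h2^2.
    `a` is the step length as a fraction of the total bearing length;
    the film height over the step is h_ratio times the land height h2."""
    h1 = h_ratio
    return a * (1 - a) * (h1 - 1.0) / ((1 - a) * h1 ** 3 + a)

# Sweep the step-length fraction and locate the maximum load capacity.
a = np.linspace(0.01, 0.99, 981)
best = a[np.argmax(step_pressure(a))]
```

    For a height ratio of 2, the load-maximizing step fraction comes out near 0.7, consistent with the ~70% optimum found in the abstract's numerical analysis.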

  6. Globally Optimal Multisensor Distributed Random Parameter Matrices Kalman Filtering Fusion with Applications

    PubMed Central

    Luo, Yingting; Zhu, Yunmin; Luo, Dandan; Zhou, Jie; Song, Enbin; Wang, Donghua

    2008-01-01

    This paper proposes a new distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. It is proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations, including measurements with a false alarm probability as a special case, as well as to randomly variant dynamic systems with multiple models. Numerical examples are given which support our analysis and show significant performance loss of ignoring the randomness of the parameter matrices.
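    The "fused equals centralized" property can be illustrated for a single measurement-update step in information form, where each sensor contributes only its local information matrix and information vector. The three linear sensors below are random illustrative data, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                         # true state

# Three local sensors with their own measurement matrices and noise.
Hs = [rng.standard_normal((2, 3)) for _ in range(3)]
Rs = [np.diag(rng.uniform(0.5, 2.0, 2)) for _ in range(3)]
ys = [H @ x + rng.multivariate_normal(np.zeros(2), R) for H, R in zip(Hs, Rs)]

# Distributed fusion: each sensor sends only H^T R^-1 H and H^T R^-1 y,
# and the fusion center sums the contributions.
Lam = sum(H.T @ np.linalg.inv(R) @ H for H, R in zip(Hs, Rs))
eta = sum(H.T @ np.linalg.inv(R) @ y for H, R, y in zip(Hs, Rs, ys))
x_fused = np.linalg.solve(Lam, eta)

# Centralized estimate from the stacked measurements, for comparison.
Hc = np.vstack(Hs)
yc = np.concatenate(ys)
Rc = np.zeros((6, 6))
for i, R in enumerate(Rs):
    Rc[2 * i:2 * i + 2, 2 * i:2 * i + 2] = R
W = np.linalg.inv(Rc)
x_central = np.linalg.solve(Hc.T @ W @ Hc, Hc.T @ W @ yc)
```

    The two estimates agree to machine precision; the paper's contribution is extending this kind of equivalence to the dynamic case with random state transition and measurement matrices.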

  7. Optimization by decomposition: A step from hierarchic to non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

  8. Toward an Optimal Position for IVC Filters: Computational Modeling of the Impact of Renal Vein Inflow

    SciTech Connect

    Wang, S L; Singer, M A

    2009-07-13

    The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

  9. Optimization of SERS scattering by Ag-NPs-coated filter paper for quantification of nicotinamide in a cosmetic formulation.

    PubMed

    Sallum, Loriz Francisco; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Carneiro, Renato Lajarim

    2014-01-01

    Supported silver nanoparticles on filter paper were synthesized using Tollens' reagent. Experimental designs were performed to obtain the highest SERS enhancement factor by studying the influence of the following parameters: filter paper pretreatment, type of filter paper, reactant concentrations, reaction time, and temperature. To this end, fractional factorial and central composite designs were used in order to optimize the synthesis for quantification of nicotinamide in the presence of excipients in a commercial cosmetic sample. The values achieved for the optimal condition were 150 mM of ammonium hydroxide, 50 mM of silver nitrate, 500 mM of glucose, a reaction time of 8 min, a temperature of 45 °C, pretreatment with ammonium hydroxide, and quantitative filter paper (1-2 μm). Despite the variation of SERS intensity, it was possible to use an adapted internal standard method to obtain a calibration curve with good precision. The coefficient of determination of the linear fit was 0.97. The method proposed in this work was capable of quantifying nicotinamide in a commercial cosmetic gel, at low concentration levels, with a relative error of 1.06% compared to HPLC. SERS spectroscopy provides faster analyses than HPLC and requires neither complex sample preparation nor large amounts of reactants. PMID:24274308

  10. Characterization and optimization of acoustic filter performance by experimental design methodology.

    PubMed

    Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M

    2005-06-20

    Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for up to 200 L/d perfusion cultures. However, the operation of cell retention systems depends on several settings that should be adjusted depending on the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency performance of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time, and recirculation ratio were studied using a fractional factorial 2(5-1) design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency was the primary response and it was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operation ranges, and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At a power input of 5 W, the broad range of robust, high separation efficiency performance (95% or higher) was extended to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model developed to long-term perfusion cultures. PMID:15858795

  11. Signal-to-Noise Enhancement Techniques for Quantum Cascade Absorption Spectrometers Employing Optimal Filtering and Other Approaches

    SciTech Connect

    Disselkamp, Robert S.; Kelly, James F.; Sams, Robert L.; Anderson, Gordon A.

    2002-09-01

    Optical feedback to the laser source in tunable diode laser spectroscopy (TDLS) is known to create intensity modulation noise due to etaloning and optical feedback (i.e., multiplicative technical noise) that usually limits spectral signal-to-noise (S/N). The large technical noise often limits absorption spectroscopy to noise floors 100-fold greater than the Poisson shot noise limit due to fluctuations in the laser intensity. The high output powers generated from quantum cascade (QC) lasers, along with their high gain, make these injection laser systems especially susceptible to technical noise. In this article we discuss a method of using optimal filtering to reduce technical noise. We have observed S/N enhancements ranging from ~20% to a factor of ~50. The degree to which optimal filtering will enhance S/N depends on the similarity between the Fourier components of the technical noise and those of the signal, with lower S/N enhancements observed for more similar Fourier decompositions of the signal and technical noise. We also examine the linearity of optimal filtered spectra in both time and intensity. This was accomplished by creating a synthetic spectrum for the species being studied (CH4, N2O, CO2, H2O in ambient air) utilizing line positions and line widths with an assumed Voigt profile from a previous database (HITRAN). Agreement better than 0.036% in wavenumber, and 1.64% in intensity (up to a 260-fold intensity ratio employed), was observed. Our results suggest that rapid ex post facto digital optimal filtering can be used to enhance S/N for routine trace gas detection.

  12. Signal-to-noise enhancement techniques for quantum cascade absorption spectrometers employing optimal filtering and other approaches

    NASA Astrophysics Data System (ADS)

    Disselkamp, R. S.; Kelly, J. F.; Sams, R. L.; Anderson, G. A.

    Optical feedback to the laser source in tunable diode laser spectroscopy (TDLS) is known to create intensity modulation noise due to etaloning and optical feedback (i.e. multiplicative technical noise) that usually limits spectral signal-to-noise (S/N). The large technical noise often limits absorption spectroscopy to noise floors 100-fold greater than the Poisson shot noise limit due to fluctuations in the laser intensity. The high output powers generated from quantum cascade (QC) lasers, along with their high gain, make these injection laser systems especially susceptible to technical noise. In this article we discuss a method of using optimal filtering to reduce technical noise. We have observed S/N enhancements ranging from 20% to a factor of 50. The degree to which optimal filtering enhances S/N depends on the similarity between the Fourier components of the technical noise and those of the signal, with lower S/N enhancements observed for more similar Fourier decompositions of the signal and technical noise. We also examine the linearity of optimal filtered spectra in both time and intensity. This was accomplished by creating a synthetic spectrum for the species being studied (CH4, N2O, CO2 and H2O in ambient air) utilizing line positions and linewidths with an assumed Voigt profile from a commercial database (HITRAN). Agreement better than 0.036% in wavenumber and 1.64% in intensity (up to a 260-fold intensity ratio employed) was observed. Our results suggest that rapid ex post facto digital optimal filtering can be used to enhance S/N for routine trace gas detection.
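    The ex post facto optimal filtering described in these two records can be sketched as a Wiener-style filter built in the Fourier domain from a signal template and a noise-power estimate. Everything below is an illustrative assumption, not the authors' setup: a Gaussian stand-in for an absorption line, white technical noise, and a noise spectrum assumed known.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = np.linspace(-1, 1, n)
signal = np.exp(-0.5 * (x / 0.02) ** 2)            # synthetic absorption line
noise = 0.3 * rng.standard_normal(n)               # technical noise (white here)
data = signal + noise

# Wiener-style optimal filter: pass Fourier bins where the template
# dominates the noise power, attenuate the rest.
S = np.fft.rfft(signal)
P_noise = 0.3 ** 2 * n                             # flat noise power per bin
Hf = np.abs(S) ** 2 / (np.abs(S) ** 2 + P_noise)
filtered = np.fft.irfft(Hf * np.fft.rfft(data), n)

rms_before = np.sqrt(np.mean((data - signal) ** 2))
rms_after = np.sqrt(np.mean((filtered - signal) ** 2))
```

    Because the narrow line concentrates its energy in a small set of Fourier bins while white noise spreads evenly, the residual error drops substantially; as the abstract notes, the gain shrinks when the noise and signal share similar Fourier decompositions.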

  13. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity--in this case, mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  14. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter, and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition. PMID:26681183
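    The variance-minimizing site selection can be sketched as a greedy search in which each candidate well is scored by the trace of the error covariance after a scalar Kalman measurement update. The exponential prior covariance, measurement noise, and budget below are illustrative assumptions, not the Irapuato-Valle model:

```python
import numpy as np

def greedy_network(P0, H, r, n_pick):
    """Greedily pick measurement locations that most reduce the total
    estimate-error variance (trace of P), using scalar Kalman updates."""
    P = P0.copy()
    chosen = []
    for _ in range(n_pick):
        best, best_tr, bestP = None, np.inf, None
        for i in range(H.shape[0]):
            if i in chosen:
                continue
            h = H[i]
            # Covariance after observing h @ x with noise variance r.
            Pn = P - np.outer(P @ h, h @ P) / (h @ P @ h + r)
            if np.trace(Pn) < best_tr:
                best, best_tr, bestP = i, np.trace(Pn), Pn
        chosen.append(best)
        P = bestP
    return chosen, P

# Hypothetical prior covariance from a geostatistical (exponential) model
# over 30 candidate locations on a line; each well observes one location.
n = 30
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P0 = np.exp(-d / 5.0)
H = np.eye(n)
wells, P = greedy_network(P0, H, r=0.1, n_pick=8)
```

    The heuristic naturally spreads the chosen wells apart, since a new well close to an already-monitored one reduces little residual variance; priority zones could be handled, as in the paper, by weighting the trace.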

  15. Optimization of a Prism-Mirror Imaging Energy Filter for High-Resolution Microanalysis in Electron Microscopy.

    NASA Astrophysics Data System (ADS)

    Jiang, Xun-Gao

    1995-01-01

    The energy resolution of a prism-mirror-prism (PMP) imaging energy filter, used for electron energy loss microanalysis, is limited by the aperture aberrations of its magnetic prism. The aberrations can be minimized by appropriately curving the pole-faces of the prism. In this thesis a computer-aided design procedure is described for optimizing the curvatures. The procedure accurately takes into account the influence of fringing fields on the optical properties of the prism and allows a realistic performance evaluation. An optimized PMP filter with an improved resolution has been developed in this way. For example, at an incident electron energy of 80 keV and an acceptance half-angle of 10 mradian, the filter has a resolution of 1.3 eV, a factor of 18 better than that of an equivalent system with a straight-face prism. The validity of the filter design depends on the correct determination of fringing magnetic fields. To verify the theoretical field calculations, an oscillating-loop magnetometer has been built. The device has a linear spatial resolution of 0.1 mm, and is well suited for measuring rapidly decreasing fringing fields. The measured fringing field distribution is in good agreement with the theoretical calculations within a maximum discrepancy of +/- 1% B_0, with B_0 being the uniform flux density inside the prism. The new PMP filter has been constructed and installed on a Siemens EM-102 microscope in our laboratory. Under the experimental conditions of an operating voltage of 60 kV and an acceptance half-angle of 8.5 mradian, the resolution of the filter is 0.5 eV, defined as the measured full-width-at-half-maximum of the intensity distribution of the aberration figure on the energy selecting plane. The much improved energy resolution of the optimized PMP imaging filter has made it possible to explore an exciting area of electron energy loss microanalysis, the detection and localization of molecular compounds by their characteristic excitations. A preliminary study, using embedded hematin (a chromophore) crystals as test specimens, has clearly demonstrated the feasibility of this technique in the presence of beam-induced radiation damage.

  16. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography.

    PubMed

    Comani, S; Mantini, D; Alleva, G; Di Luzio, S; Romani, G L

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filters' performances on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebychev II-type filters with zero-phase and stable impulse response; the most commonly used band-pass filters were implemented combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals was evaluated. The study regarded 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter setting might be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz. PMID:16306648
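    The filter architecture in this record (zero-phase Chebyshev II high-pass and low-pass sections combined into a band-pass) can be sketched with standard signal-processing tools. The sampling rate, filter order, stopband attenuation, and synthetic fetal/noise components below are illustrative assumptions, not the study's acquisition parameters:

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt

fs = 1000.0                                        # sampling rate in Hz (assumed)
# Chebyshev II sections with 40 dB stopband attenuation, applied forward
# and backward (sosfiltfilt) for zero phase; the 1.0-100 Hz band-pass is
# built by cascading a high-pass and a low-pass, as in the abstract.
sos_hp = cheby2(6, 40, 1.0, btype='highpass', fs=fs, output='sos')
sos_lp = cheby2(6, 40, 100.0, btype='lowpass', fs=fs, output='sos')

t = np.arange(0, 5, 1 / fs)
fetal = np.sin(2 * np.pi * 25 * t)                 # stand-in for fetal QRS band
drift = 2.0 * np.sin(2 * np.pi * 0.2 * t)          # slow ambient-field drift
hum = np.sin(2 * np.pi * 200 * t)                  # high-frequency disturbance
x = fetal + drift + hum

y = sosfiltfilt(sos_lp, sosfiltfilt(sos_hp, x))
# Error versus the clean component, away from the record edges.
err = np.sqrt(np.mean((y - fetal)[1000:-1000] ** 2))
```

    Second-order sections are used because a direct transfer-function realization of a sixth-order filter with a cutoff this far below the sampling rate can be numerically ill-conditioned.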

  17. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography

    NASA Astrophysics Data System (ADS)

    Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filters' performances on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebychev II-type filters with zero-phase and stable impulse response; the most commonly used band-pass filters were implemented combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals was evaluated. The study regarded 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter setting might be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz.

  18. Near-Diffraction-Limited Operation of Step-Index Large-Mode-Area Fiber Lasers Via Gain Filtering

    SciTech Connect

    Marciante, J.R.; Roides, R.G.; Shkunov, V.V.; Rockwell, D.A.

    2010-06-04

    We present, for the first time to our knowledge, an explicit experimental comparison of beam quality in conventional and confined-gain multimode fiber lasers. In the conventional fiber laser, beam quality degrades with increasing output power. In the confined-gain fiber laser, the beam quality is good and does not degrade with output power. Gain filtering of higher-order modes in 28 μm diameter core fiber lasers is demonstrated with a beam quality of M^2 = 1.3 at all pumping levels. Theoretical modeling is shown to agree well with experimentally observed trends.

  19. Identifying the Preferred Subset of Enzymatic Profiles in Nonlinear Kinetic Metabolic Models via Multiobjective Global Optimization and Pareto Filters

    PubMed Central

    Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano

    2012-01-01

    Optimization models in metabolic engineering and systems biology typically focus on optimizing a single criterion, usually the synthesis rate of a metabolite of interest or the growth rate. Connectivity and nonlinear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimizing for maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed nonlinear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to nonlinear representations. In this work we address the multiobjective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon-constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles.
We demonstrate the usefulness of our approach by means of a case study that optimizes the ethanol production in the fermentation of Saccharomyces cerevisiae. PMID:23028457
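    As an aside, the core of any Pareto filter is a domination test over the candidate set. The sketch below is a generic non-dominated filter for minimization objectives, not the authors' specific preference-based filters:

```python
import numpy as np

def pareto_filter(points):
    """Return the indices of non-dominated points (all objectives minimized).
    A point is dominated if some other point is <= in every objective and
    strictly < in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Two objectives to minimize: (3, 3) is dominated by (2, 2), the rest are not.
pts = [(1, 4), (2, 2), (3, 3), (4, 1)]
print(pareto_filter(pts))  # [0, 1, 3]
```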

  20. Optimal features selection based on circular Gabor filters and RSE in texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Qiong; Liu, Jian; Tian, Jinwen

    2007-12-01

    This paper designs circular Gabor filters that incorporate characteristics of the human visual system, and introduces the concept of mutual-information entropy from rough set theory to evaluate the effect of the features extracted from different filters on clustering, so that redundant features can be discarded. Experimental results indicate that the proposed algorithm outperforms conventional approaches in terms of both objective measurements and visual evaluation in texture segmentation.
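    A circular (rotation-invariant) Gabor kernel can be sketched as a Gaussian envelope modulating a sinusoid of the radial distance; the size, scale, and frequency below are illustrative, not the paper's settings:

```python
import numpy as np

def circular_gabor(size, sigma, freq):
    """Circular Gabor kernel: a Gaussian envelope modulated by a sinusoid
    of the radial distance r, so the texture response is independent of
    orientation. Returned zero-mean so flat regions give zero response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y)
    g = np.exp(-r ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * r)
    return g - g.mean()

k = circular_gabor(size=31, sigma=5.0, freq=0.2)
```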

  1. Optimality of the Holm procedure among general step-down multiple testing procedures

    PubMed Central

    Salzman, Peter

    2008-01-01

    We study the class of general step-down multiple testing procedures, which contains the usually considered procedures determined by a nondecreasing sequence of thresholds (we call them threshold step-down, or TSD, procedures) as a parametric subclass. We show that all procedures in this class satisfying the natural condition of monotonicity and controlling the family-wise error rate (FWER) at a prescribed level are dominated by one of them – the classical Holm procedure. This generalizes an earlier result pertaining to the subclass of TSD procedures (Lehmann and Romano, Testing Statistical Hypotheses, 3rd ed., 2005). We also derive a relation between the levels at which a monotone step-down procedure controls the FWER and the generalized FWER (the probability of k or more false rejections). PMID:19759804
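    The classical Holm procedure referenced above is a threshold step-down procedure with thresholds alpha/m, alpha/(m-1), ..., alpha applied to the ordered p-values. A minimal sketch:

```python
def holm_stepdown(pvals, alpha=0.05):
    """Classical Holm step-down procedure: sort p-values ascending,
    compare the (step)-th smallest to alpha/(m - step), and reject
    until the first failure. Controls the FWER at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

print(holm_stepdown([0.001, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```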

  2. Optimization and field application of a filter pack system for the simultaneous sampling of atmospheric HNO3, NH3 and SO2

    NASA Astrophysics Data System (ADS)

    Karakaş, Duran; Tuncel, Semra G.

    Optimization and field application of a filter pack system for the simultaneous collection of atmospheric gas-phase HNO3, NH3 and SO2 have been studied. A Teflon prefilter was used to remove particulate matter. A nylon filter, an oxalic-acid-treated Whatman 41 filter and a sodium-carbonate-treated Whatman 41 filter were used for the collection of HNO3, NH3 and SO2, respectively. For the collection of gas-phase HNO3, nylon filters had better efficiency and capacity than NaCl-impregnated Whatman 41 filters for long sampling periods of more than 30 h. All treated filters and nylon filters worked with collection efficiencies greater than 95%. About 2% of the gas-phase ammonia was retained by the nylon filters during simultaneous collection experiments done in the laboratory, but the ammonia retained on the nylon filter sometimes reached about 25% of the total gaseous ammonia collected on the oxalic-acid-impregnated filter in the field experiments. Other than ammonia, no significant retention on, or volatilization from, the filter pack system was observed during the simultaneous experiments carried out in an urban atmosphere.

  3. Single-channel noise reduction using unified joint diagonalization and optimal filtering

    NASA Astrophysics Data System (ADS)

    Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-12-01

    In this paper, the important problem of single-channel noise reduction is treated from a new perspective. The problem is posed as a filtering problem based on joint diagonalization of the covariance matrices of the desired and noise signals. More specifically, the eigenvectors from the joint diagonalization corresponding to the least significant eigenvalues are used to form a filter, which effectively estimates the noise when applied to the observed signal. This estimate is then subtracted from the observed signal to form an estimate of the desired signal, i.e., the speech signal. In doing this, we consider two cases, where, respectively, no distortion and distortion are incurred on the desired signal. The former can be achieved when the covariance matrix of the desired signal is rank deficient, which is the case, for example, for voiced speech. In the latter case, the covariance matrix of the desired signal is full rank, as is the case, for example, in unvoiced speech. Here, the amount of distortion incurred is controlled via a simple, integer parameter, and the more distortion allowed, the higher the output signal-to-noise ratio (SNR). Simulations demonstrate the properties of the two solutions. In the distortionless case, the proposed filter achieves only a slightly worse output SNR, compared to the Wiener filter, along with no signal distortion. Moreover, when distortion is allowed, it is possible to achieve higher output SNRs compared to the Wiener filter. Alternatively, when a lower output SNR is accepted, a filter with less signal distortion than the Wiener filter can be constructed.
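    The core construction can be sketched with a generalized eigendecomposition. This is a toy multichannel illustration under assumed synthetic covariances, not the paper's single-channel algorithm: `scipy.linalg.eigh(Rd, Rn)` jointly diagonalizes the desired-signal and noise covariance matrices, the eigenvectors paired with the least significant eigenvalues span noise-dominated directions, and projecting the observation onto them yields a noise estimate to subtract.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
M, N = 8, 20000

# Synthetic frames: a rank-2 (rank-deficient) desired signal plus white noise.
mix = rng.standard_normal((M, 2))
desired = mix @ rng.standard_normal((2, N))
noise = 0.5 * rng.standard_normal((M, N))
obs = desired + noise

Rd = desired @ desired.T / N   # desired-signal covariance (rank 2 here)
Rn = noise @ noise.T / N       # noise covariance (full rank)

# Joint diagonalization: B.T @ Rd @ B is diagonal and B.T @ Rn @ B = I.
evals, B = linalg.eigh(Rd, Rn)      # eigenvalues in ascending order
Bn = B[:, : M - 2]                  # eigenvectors of the least significant eigenvalues
noise_est = Rn @ Bn @ (Bn.T @ obs)  # noise estimated from the observed signal
enhanced = obs - noise_est

err_before = np.mean((obs - desired) ** 2)
err_after = np.mean((enhanced - desired) ** 2)
```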

  4. Numerical experiment optimization to obtain the characteristics of the centrifugal pump steps package

    NASA Astrophysics Data System (ADS)

    Boldyrev, S. V.; Boldyrev, A. V.

    2014-12-01

    A numerical simulation method for the turbulent flow in the running space of the working stage of a centrifugal pump, using periodicity conditions, has been formulated. The proposed method allows the characteristic indices of one pump stage to be calculated at a lower computational cost. The calculated pump characteristics have been compared with experimental data.

  5. Optimizing planar lipid bilayer single-channel recordings for high resolution with rapid voltage steps.

    PubMed

    Wonderlin, W F; Finkel, A; French, R J

    1990-08-01

    We describe two enhancements of the planar bilayer recording method which enable low-noise recordings of single-channel currents activated by voltage steps in planar bilayers formed on apertures in partitions separating two open chambers. First, we have refined a simple and effective procedure for making small bilayer apertures (25-80 μm diam) in plastic cups. These apertures combine the favorable properties of very thin edges, good mechanical strength, and low stray capacitance. In addition to enabling formation of small, low-capacitance bilayers, this aperture design also minimizes the access resistance to the bilayer, thereby improving the low-noise performance. Second, we have used a patch-clamp headstage modified to provide logic-controlled switching between a high-gain (50 GΩ) feedback resistor for high-resolution recording and a low-gain (50 MΩ) feedback resistor for rapid charging of the bilayer capacitance. The gain is switched from high to low before a voltage step and then back to high gain 25 μs after the step. With digital subtraction of the residual currents produced by the gain switching and electrostrictive changes in bilayer capacitance, we can achieve a steady current baseline within 1 ms after the voltage step. These enhancements broaden the range of experimental applications for the planar bilayer method by combining the high resolution previously attained only with small bilayers formed on pipette tips with the flexibility of experimental design possible with planar bilayers in open chambers. We illustrate application of these methods with recordings of the voltage-step activation of a voltage-gated potassium channel. PMID:1698470

  6. Generalized optimal spatial filtering using a kernel approach with application to EEG classification

    PubMed Central

    Rutkowski, Tomasz M.; Zhang, Liqing; Cichocki, Andrzej

    2010-01-01

    Common spatial patterns (CSP) has been widely used for finding the linear spatial filters which are able to extract the discriminative brain activities between two different mental tasks. However, the CSP is difficult to capture the nonlinearly clustered structure from the non-stationary EEG signals. To relax the presumption of strictly linear patterns in the CSP, in this paper, a generalized CSP (GCSP) based on generalized singular value decomposition (GSVD) and kernel method is proposed. Our method is able to find the nonlinear spatial filters which are formulated in the feature space defined by a nonlinear mapping through kernel functions. Furthermore, in order to overcome the overfitting problem, the regularized GCSP is developed by adding the regularized parameters. The experimental results demonstrate that our method is an effective nonlinear spatial filtering method. PMID:22132044
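    For reference, the linear CSP that the paper generalizes reduces to a generalized eigenvalue problem on the two class covariance matrices. A minimal synthetic sketch (not the paper's kernel GCSP):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(1)
C = 4  # EEG channels

def trials(scales, n=200, T=256):
    """Synthetic trials: white noise with per-channel standard deviations."""
    return [np.diag(scales) @ rng.standard_normal((C, T)) for _ in range(n)]

def spatial_cov(trial):
    R = trial @ trial.T
    return R / np.trace(R)  # trace-normalized spatial covariance

X1 = trials([2.0, 1.0, 1.0, 1.0])  # class 1: strong variance on channel 0
X2 = trials([1.0, 1.0, 1.0, 2.0])  # class 2: strong variance on channel 3
R1 = np.mean([spatial_cov(x) for x in X1], axis=0)
R2 = np.mean([spatial_cov(x) for x in X2], axis=0)

# CSP filters solve the generalized eigenproblem R1 w = lambda (R1 + R2) w;
# the extreme eigenvectors are the most discriminative spatial filters.
evals, W = linalg.eigh(R1, R1 + R2)        # eigenvalues ascending
w_class2, w_class1 = W[:, 0], W[:, -1]     # smallest / largest eigenvalue
```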

  7. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    SciTech Connect

    Gill, K; Aldoohan, S; Collier, J

    2014-06-01

    Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom and the radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects on image quality and radiation dose of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel and copper filters) to the existing filter. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVps and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filters shape the energy spectrum, which reduces the dose by 27%, with no noticeable change in image low-contrast detectability. Conclusion: The effective dose is strongly dependent on the CTDIvol, which in turn is strongly dependent on the beam-hardening filters. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that a significant radiation dose reduction could be achieved in CT pediatric shunt scanning protocols without compromising the diagnostic value of the image quality.

  8. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, among the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the Dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. It consists of finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All of the optimal policies consist of applying insecticides just at the beginning of the season and then keeping the mosquitoes at an acceptable level by spreading a small number of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in achieving the same effect as variations in the cost parameters. Using the proposed methodology, it is thus possible to find, in a single run, given a decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the trade-off between using more insecticide with fewer transmission mosquitoes or more sterile males with more transmission mosquitoes. PMID:26362231

  9. Optimization of 3D laser scanning speed by use of combined variable step

    NASA Astrophysics Data System (ADS)

    Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodríguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.

    2014-03-01

    The presented research addresses the slow operation of a 3D technical vision system (TVS) caused by a constant small scanning step; the solution is the application of a combined scanning step for the fast search of n obstacles in unknown surroundings. Such a problem is of keynote importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. The related technical task of scanning with variable speed, with precise digital mapping only for selected spatial sectors, is therefore considered. A wide range of simulations in MATLAB 7.12.0 of several variants of hypothetical scenes, with a variable number n of obstacles in each scene (including variation of shapes and sizes) and scanning with incremented angle values (0.6° up to 15°), is provided. The aim of these simulations was to detect which angular values still permit getting the maximal information about obstacles without undesired time losses. Three such local maximums were obtained in the simulations and then refined by application of a neural network formalism (the Levenberg-Marquardt algorithm). The obtained results were in turn applied to a MET (Micro-Electro-mechanical Transmission) design for the practical realization of variable combined step scanning on an experimental prototype of our previously reported laser scanner.

  10. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. 
Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem to statically schedule the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
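    The polyphase identity described above (downsample first, then M sub-filters of length N/M) can be checked numerically. The sketch below verifies that the polyphase form reproduces the naive full-rate FIR + downsample output while performing only about 1/M of the multiplies:

```python
import numpy as np

def naive_decimate(x, h, M):
    """Full-rate FIR followed by downsampling: M-1 of every M outputs are wasted."""
    return np.convolve(x, h)[::M]

def polyphase_decimate(x, h, M):
    """Polyphase form: M sub-filters h[p::M] run at the low rate on the
    downsampled input phases, computing only the outputs that are kept."""
    Ly = -(-(len(x) + len(h) - 1) // M)  # ceil(full-convolution length / M)
    y = np.zeros(Ly)
    for p in range(M):
        hp = h[p::M]                     # branch p taps: h[p], h[p+M], ...
        if p == 0:
            xp, shift = x[::M], 0        # branch input x[0], x[M], ...
        else:
            xp, shift = x[M - p::M], 1   # x[M-p], x[2M-p], ... (one-output delay)
        c = np.convolve(hp, xp)
        n = min(len(c), Ly - shift)
        y[shift:shift + n] += c[:n]
    return y

rng = np.random.default_rng(0)
x, h, M = rng.standard_normal(1000), rng.standard_normal(24), 4
y_naive = naive_decimate(x, h, M)
y_poly = polyphase_decimate(x, h, M)  # identical output, ~1/M the multiplies
```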

  11. Optimization of a femtosecond Ti:sapphire amplifier using an acousto-optic programmable dispersive filter and a genetic algorithm.

    SciTech Connect

    Korovyanko, O. J.; Rey-de-Castro, R.; Elles, C. G.; Crowell, R. A.; Li, Y.

    2006-01-01

    The temporal output of a Ti:sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude, and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that a severe mismatch between the compressor and stretcher can be compensated for in a short period of time.

  12. Optimal optical filters of fluorescence excitation and emission for poultry fecal detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Purpose: An analytic method to design excitation and emission filters of a multispectral fluorescence imaging system is proposed and was demonstrated in an application to poultry fecal inspection. Methods: A mathematical model of a multispectral imaging system is proposed and its system parameters, ...

  13. Optimization of plasma parameters with magnetic filter field and pressure to maximize H- ion density in a negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young; Chung, Kyoung-Jae; Hwang, Y. S.

    2016-02-01

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H- populations for various filter field strengths and pressures. Enhanced H- population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H- sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.

  14. Layout Optimization Method for Magnetic Circuit using Multi-step Utilization of Genetic Algorithm Combined with Design Space Reduction

    NASA Astrophysics Data System (ADS)

    Okamoto, Yoshifumi; Tominaga, Yusuke; Sato, Shuji

    Layout optimization using the ON-OFF information of magnetic material in finite elements is one of the most attractive tools in the initial conceptual and practical design of electrical machinery for engineers. Heuristic algorithms based on random search allow engineers to define general-purpose objectives; however, they require many iterations of finite element analysis, and it is difficult to reach a practical solution free of island and void distributions by using a direct search method such as simulated annealing (SA) or a genetic algorithm (GA). This paper presents a layout optimization method based on GA. The proposed method can arrive at a practical solution by means of multi-step utilization of GA, and the convergence speed is considerably improved by combining it with a reduction process for the design space.

  15. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    PubMed

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. But future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear in nature; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  16. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. But future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear in nature; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  17. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high-performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
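    For reference, a plain CPU sketch of the 3D bilateral filter itself (the report's GPU kernel and tuning options are not reproduced): each output voxel is a spatial-Gaussian-weighted average whose weights are further attenuated by intensity difference, which is what preserves edges.

```python
import numpy as np

def bilateral3d(vol, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Brute-force 3D bilateral filter: weight = spatial Gaussian on the
    offset times range Gaussian on the intensity difference."""
    pad = np.pad(vol, radius, mode='edge')
    num = np.zeros(vol.shape)
    den = np.zeros(vol.shape)
    sz = vol.shape
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                w_s = np.exp(-(dz * dz + dy * dy + dx * dx) / (2 * sigma_s ** 2))
                nb = pad[radius + dz:radius + dz + sz[0],
                         radius + dy:radius + dy + sz[1],
                         radius + dx:radius + dx + sz[2]]
                w = w_s * np.exp(-(nb - vol) ** 2 / (2 * sigma_r ** 2))
                num += w * nb
                den += w
    return num / den

# A sharp intensity step survives the smoothing (edge preservation).
vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 1.0
out = bilateral3d(vol)
```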

  18. Determining the optimal system-specific cut-off frequencies for filtering in-vitro upper extremity impact force and acceleration data by residual analysis.

    PubMed

    Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M

    2011-10-13

    The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. PMID:21903214
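    Residual analysis of the kind used above (commonly attributed to Winter) can be sketched as follows; the signal, noise level, and tail region are illustrative assumptions, not the paper's impact data:

```python
import numpy as np
from scipy import signal

fs = 10000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)  # signal + noise

# RMS residual between raw and zero-phase low-pass filtered signal,
# over a range of candidate cut-off frequencies.
cutoffs = np.arange(5.0, 2001.0, 5.0)
resid = np.array([
    np.sqrt(np.mean((x - signal.sosfiltfilt(
        signal.butter(4, fc, fs=fs, output='sos'), x)) ** 2))
    for fc in cutoffs
])

# Fit a line to the noise-dominated tail and extrapolate to 0 Hz: the
# intercept estimates the noise RMS; choose the lowest cut-off whose
# residual has dropped to that level.
tail = cutoffs > 1000
slope, intercept = np.polyfit(cutoffs[tail], resid[tail], 1)
fc_opt = cutoffs[np.argmax(resid <= intercept)]
```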

  19. Optimal discrimination and classification of neuronal action potential waveforms from multiunit, multichannel recordings using software-based linear filters.

    PubMed

    Gozani, S N; Miller, J P

    1994-04-01

    We describe advanced protocols for the discrimination and classification of neuronal spike waveforms within multichannel electrophysiological recordings. The programs are capable of detecting and classifying the spikes from multiple, simultaneously active neurons, even in situations where there is a high degree of spike waveform superposition on the recording channels. The protocols are based on the derivation of an optimal linear filter for each individual neuron. Each filter is tuned to selectively respond to the spike waveform generated by the corresponding neuron, and to attenuate noise and the spike waveforms from all other neurons. The protocol is essentially an extension of earlier work [1], [13], [18]. However, the protocols extend the power and utility of the original implementations in two significant respects. First, a general single-pass automatic template estimation algorithm was derived and implemented. Second, the filters were implemented within a software environment providing a greatly enhanced functional organization and user interface. The utility of the analysis approach was demonstrated on samples of multiunit electrophysiological recordings from the cricket abdominal nerve cord. PMID:8063302
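    A much-simplified sketch of template-based linear filtering for spike classification (the paper's optimal filters additionally attenuate noise and the other units' waveforms): each unit gets a filter proportional to its template, and an event is assigned to the unit with the largest filter response. The templates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 32
t = np.arange(L)

# Two hypothetical spike templates (units A and B).
templates = np.stack([
    np.exp(-(t - 10) ** 2 / 8.0) - 0.5 * np.exp(-(t - 16) ** 2 / 20.0),
    -np.exp(-(t - 12) ** 2 / 12.0) + 0.3 * np.exp(-(t - 20) ** 2 / 10.0),
])
# One linear filter per unit, scaled so a unit's own spike scores ~1.
filters = templates / np.sum(templates ** 2, axis=1, keepdims=True)

def classify(event):
    """Assign the event to the unit whose filter responds most strongly."""
    return int(np.argmax(filters @ event))

event = templates[1] + 0.05 * rng.standard_normal(L)  # noisy unit-B spike
label = classify(event)
```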

  20. Design of FIR digital filters for pulse shaping and channel equalization using time-domain optimization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Vaughn, G. L.

    1974-01-01

    Three algorithms are developed for designing finite impulse response digital filters to be used for pulse shaping and channel equalization. The first is the Minimax algorithm which uses linear programming to design a frequency-sampling filter with a pulse shape that approximates the specification in a minimax sense. Design examples are included which accurately approximate a specified impulse response with a maximum error of 0.03 using only six resonators. The second algorithm is an extension of the Minimax algorithm to design preset equalizers for channels with known impulse responses. Both transversal and frequency-sampling equalizer structures are designed to produce a minimax approximation of a specified channel output waveform. Examples of these designs are compared as to the accuracy of the approximation, the resultant intersymbol interference (ISI), and the required transmitted energy. While the transversal designs are slightly more accurate, the frequency-sampling designs using six resonators have smaller ISI and energy values.
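    The article's designs use linear programming on frequency samples; a widely available minimax alternative is the Parks-McClellan (Remez exchange) algorithm. This sketch designs a short linear-phase low-pass shaping filter with illustrative band edges, not the article's specifications:

```python
import numpy as np
from scipy import signal

fs = 8000.0
numtaps = 31
# Equiripple (minimax) low-pass: passband 0-1000 Hz, stopband 1500-4000 Hz.
h = signal.remez(numtaps, [0, 1000, 1500, fs / 2], [1, 0], fs=fs)

# Verify the minimax errors in each band.
w, H = signal.freqz(h, worN=2048, fs=fs)
passband_err = np.max(np.abs(np.abs(H[w <= 1000]) - 1))
stopband_err = np.max(np.abs(H[w >= 1500]))
```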

  1. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
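    The steady-state error variances such an analysis rests on come from the discrete Riccati recursion; a minimal sketch with an invented two-state, one-sensor system (not the engine model) is:

```python
import numpy as np

A = np.array([[0.95, 0.1], [0.0, 0.9]])   # state transition
C = np.array([[1.0, 0.0]])                 # one sensor, two states
Q = 0.01 * np.eye(2)                       # process noise covariance
R = np.array([[0.05]])                     # measurement noise covariance

# Iterate the discrete Riccati recursion to steady state
P = np.eye(2)
for _ in range(500):
    P_pred = A @ P @ A.T + Q                                  # time update
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)    # Kalman gain
    P = (np.eye(2) - K @ C) @ P_pred                          # measurement update

print(np.diag(P))   # steady-state estimation error variances
```

    A tuner-selection search of the kind described would evaluate such steady-state variances for each candidate tuning parameter vector and keep the minimizer.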

  2. Pareto optimality between width of central lobe and peak sidelobe intensity in the far-field pattern of lossless phase-only filters for enhancement of transverse resolution.

    PubMed

    Mukhopadhyay, Somparna; Hazra, Lakshminarayan

    2015-11-01

    Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique. PMID:26560575
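    The notion of Pareto-optimal (non-dominated) trade-off solutions can be shown in a few lines; the candidate scores below are invented:

```python
# Each candidate filter is scored on two objectives to minimize:
# (central-lobe width, peak sidelobe intensity). Values are illustrative.
designs = {
    "A": (1.00, 0.010),
    "B": (0.80, 0.030),
    "C": (0.90, 0.040),   # dominated by B: wider lobe AND stronger sidelobe
    "D": (0.70, 0.060),
    "E": (0.95, 0.012),
}

def dominates(p, q):
    """p dominates q: no worse in both objectives, strictly better in one."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

front = sorted(name for name, score in designs.items()
               if not any(dominates(other, score)
                          for other in designs.values()))
print(front)
```

    An optimizer such as PSO populates this candidate set; the surviving non-dominated designs trace the Pareto front between superresolution and sidelobe growth.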

  3. Optimizing multi-step B-side charge separation in photosynthetic reaction centers from Rhodobacter capsulatus.

    PubMed

    Faries, Kaitlyn M; Kressel, Lucas L; Dylla, Nicholas P; Wander, Marc J; Hanson, Deborah K; Holten, Dewey; Laible, Philip D; Kirmaier, Christine

    2016-02-01

    Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P(+)QB(-) yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P(+)QB(-) are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P(+)QB(-) from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P(+)HB(-)→P(+)QB(-) electron transfer or initial P*→P(+)HB(-) conversion highlight unmet challenges of optimizing both processes simultaneously. PMID:26658355

  4. Development of a Transcatheter Tricuspid Valve Prosthesis Through Steps of Iterative Optimization and Finite Element Analysis.

    PubMed

    Pott, Desiree; Kütting, Maximilian; Zhong, Zhaoyang; Amerini, Andrea; Spillner, Jan; Autschbach, Rüdiger; Steinseifer, Ulrich

    2015-10-01

    The development of a transcatheter tricuspid valve prosthesis for the treatment of tricuspid regurgitation (TR) is presented. The design process involves an iterative development method based on computed tomography data and different steps of finite element analysis (FEA). The enhanced design consists of two self-expandable stents, one is placed inside the superior vena cava (SVC) for primary device anchoring, the second lies inside the tricuspid valve annulus (TVA). Both stents are connected by flexible connecting struts (CS) to anchor the TVA-stent in the orthotopic position. The iterative development method includes the expansion and crimping of the stents and CS with FEA. Leaflet performance and leaflet-stent interaction were studied by applying the physiologic pressure cycle of the right heart onto the leaflet surfaces. A previously implemented nitinol material model and a new porcine pericardium material model derived from uniaxial tensile tests were used. Maximum strains/stresses were approx. 6.8% for the nitinol parts and 2.9 MPa for the leaflets. Stent displacement because of leaflet movement was ≤1.8 mm at the commissures and the coaptation height was 1.6-3 mm. This led to an overall good performance of the prosthesis. An anatomic study showed a good anatomic fit of the device inside the human right heart. PMID:26378868

  5. Optimization of conditions for the single step IMAC purification of miraculin from Synsepalum dulcificum.

    PubMed

    He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B

    2015-08-15

    In this study, the methods for extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate buffer saline, Tris-HCl and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH and imidazole concentration in elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5 M NaCl as compared to Tris-HCl and phosphate buffer saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in the IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7 and the use of 300 mM imidazole as elution buffer, which gave an overall yield of 80.3% and a purity of 97.5%. IMAC with nickel-NTA was successfully used as a single-step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715

  6. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  7. Rod-filter-field optimization of the J-PARC RF-driven H{sup −} ion source

    SciTech Connect

    Ueno, A. Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-08

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H{sup −} ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H{sup −} ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H{sup −} ion source with an internal antenna, no procedure to optimize it had been established. In order to optimize the RFF and establish such a procedure, the beam performance of the J-PARC source was measured with various types of rod-filter magnets (RFMs). By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H{sup −} ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the temperature (T{sub PE}) of the plasma electrode (PE) lower than 70 °C, which minimizes the transverse emittances. A 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation of the PE from several tens of minutes to several minutes for T{sub PE} around 60 °C.

  8. Control system optimization studies. Volume 2: High frequency cutoff filter analysis

    NASA Technical Reports Server (NTRS)

    Fong, M. H.

    1972-01-01

    The problem of digital implementation of a cutoff filter is approached with consideration to word length, sampling rate, accuracy requirements, computing time and hardware restrictions. Computing time and hardware requirements for four possible programming forms for the linear portions of the filter are determined. Upper bounds for the steady state system output error due to quantization for digital control systems containing a digital network programmed both in the direct form and in the canonical form are derived. This is accomplished by defining a set of error equations in the z domain and then applying the final value theorem to the solution. Quantization error was found to depend upon the digital word length, sampling rate, and system time constants. The error bound developed may be used to estimate the digital word length and sampling rate required to achieve a given system specification. From the quantization error accumulation, computing time and hardware point of view, and the fact that complex poles and zeros must be realized, the canonical form of programming seems preferable.
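    The word-length dependence of quantization error can be checked against the classic q²/12 model; a quick numerical sketch (not the report's derivation):

```python
import numpy as np

# Rounding to b fractional bits gives step q = 2**-b; for a signal that
# exercises many quantization levels the error variance is close to q**2/12.
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 100_000)

measured = {}
for b in (8, 12, 16):
    q = 2.0 ** -b
    xq = np.round(x / q) * q          # round-to-nearest quantizer
    measured[b] = np.var(x - xq)
    print(b, measured[b], q * q / 12)
```

    Estimates of this kind, propagated through the filter's transfer function, are what connect word length and sampling rate to a steady-state output error bound.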

  9. Influence of simulation time-step (temporal-scale) on optimal parameter estimation and runoff prediction performance in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2015-04-01

    Nowadays, most hydrological catchment models are designed to allow their use for streamflow simulation at different time-scales. While this permits models to be applied for broader purposes, it can also be a source of error in the simulation of hydrological processes at catchment scale. Those errors seem not to affect simple conceptual models significantly, but this flexibility may lead to large behavior errors in physically based models. Equations used in processes such as those related to the time-variation of soil moisture are usually representative at certain time-scales, but they may not characterize water transfer in soil layers properly at larger scales. This effect is especially relevant as we move from a detailed hourly scale to a daily time-step, which are common time scales used in catchment streamflow simulation for research and management purposes. This study aims to provide an objective methodology to identify the degree of similarity of optimal parameter values when hydrological catchment model calibration is performed at different time-scales, thus providing information for an informed discussion of the physical significance of parameters in hydrological models. In this research, we analyze the influence of simulation time scale on: 1) the optimal values of six highly sensitive parameters of the TOPLATS model and 2) the streamflow simulation efficiency, while optimization is carried out at different time scales. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) has been applied in its lumped version on three catchments of varying size located in northern Spain. The model is based on shallow groundwater gradients (related to local topography) that set up spatial patterns of soil moisture and are assumed to control infiltration and runoff during storm events and evaporation and drainage in between storm events. The model calculates the saturated portion of the catchment at each time step based on Topographical Index (TI) intervals. Surface runoff is then calculated at rainfall events proportionally to the saturation degree of the catchment. Separately, baseflow is calculated based on the distance between the catchment average water table depth and the specific depth at each TI interval. This study focuses on the comparison of hourly and daily simulations for the 2000-2007 time period. An optimization algorithm has been applied to identify the optimal values of the following four soil properties: 1) Brooks-Corey pore size distribution index (λ), 2) bubbling pressure (ψc), 3) saturated soil moisture (θs), 4) surface saturated hydraulic conductivity (Ks), and two subsurface flow controlling parameters: 1) subsurface flow at complete saturation (Q0), and 2) exponential coefficient for the TOPMODEL baseflow equation (f). The algorithm was set up to maximize the Nash-Sutcliffe Efficiency (NSE) at the catchment outlet. Results presented include the optimal values of each parameter at both hourly and daily time scale. These values provided valuable information to discuss the relative importance of each soil-related model parameter for enhanced streamflow simulation and adequate model response to both surface runoff and baseflow simulation. Catchment baseflow magnitude (Q0) and decay behavior (f) are also shown to require detailed analysis depending on the selected hydrological modeling purpose and the corresponding time-step. The results showed that different time-scale simulations may require different parameter values for soil properties and catchment behavior characterization in order to properly simulate streamflow at catchment scale. Although the calibrated parameters were soil properties and water flow quantities with physical meaning and defined units, optimum values differed with time-scale and were not always similar to field observations.
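    The calibration objective named above, the Nash-Sutcliffe Efficiency, is compact enough to state directly; synthetic flows stand in for the study's data:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 matches the mean predictor."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 2.5, 1.5])
print(nse(obs, obs))                             # perfect simulation
print(nse(obs, np.full(obs.size, obs.mean())))   # mean predictor
```

    Any simulation scoring below zero performs worse than simply predicting the observed mean, which is why NSE is a natural objective for the optimization algorithm described.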

  10. Reaction null-space filter: extracting reactionless synergies for optimal postural balance from motion capture data.

    PubMed

    Nenchev, D N; Miyamoto, Y; Iribe, H; Takeuchi, K; Sato, D

    2016-06-01

    This paper introduces the notion of a reactionless synergy: a postural variation for a specific motion pattern/strategy, whereby the movements of the segments do not alter the force/moment balance at the feet. Given an optimal initial posture in terms of stability, a reactionless synergy can ensure optimality throughout the entire movement. Reactionless synergies are derived via a dynamical model wherein the feet are regarded to be unfixed. Though in contrast with the conventional fixed-feet models, this approach has the advantage of exhibiting the reactions at the feet explicitly. The dynamical model also facilitates a joint-space decomposition scheme yielding two motion components: the reactionless synergy and an orthogonal complement responsible for the dynamical coupling between the feet and the support. Since the reactionless synergy provides the basis (a feedforward control component) for optimal balance control, it may play an important role when evaluating balance abnormalities or when assessing optimality in balance control. We show how to apply the proposed method for analysis of motion capture data obtained from three voluntary movement patterns in the sagittal plane: squat, sway, and forward bend. PMID:26273732

  11. Medical image processing using novel wavelet filters based on atomic functions: optimal medical image compression.

    PubMed

    Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres

    2011-01-01

    An analysis of different Wavelets, including novel Wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. This way we are able to determine which type of Wavelet filter works better in the compression of such images. Key properties: frequency response, approximation order, projection cosine, and Riesz bounds were determined and compared for the classic Wavelet W9/7 used in the JPEG2000 standard, Daubechies8, Symlet8, as well as for the complex Kravchenko-Rvachev Wavelets ψ(t) based on the atomic functions up(t), fup2(t), and eup(t). The comparison results show significantly better performance of the novel Wavelets, which is justified by experiments and by the study of key properties. PMID:21431590

  12. Permeability optimization and performance evaluation of hot aerosol filters made using foam incorporated alumina suspension.

    PubMed

    Innocentini, Murilo D M; Rodrigues, Vanessa P; Romano, Roberto C O; Pileggi, Rafael G; Silva, Gracinda M C; Coury, José R

    2009-02-15

    Porous ceramic samples were prepared from an aqueous foam-incorporated alumina suspension for application as hot aerosol filtering membranes. The procedure for establishing the membrane features required to maintain a desired flow condition was theoretically described, and experimental work was designed to prepare ceramic membranes meeting the predicted criteria. The two best membranes thus prepared were selected for permeability tests up to 700 °C, and their total and fractional collection efficiencies were experimentally evaluated. Reasonably good performance was achieved at room temperature, while at 700 °C increased permeability was obtained with a significant reduction in collection efficiency, which was explained by a combination of thermal expansion of the structure and changes in the gas properties. PMID:18565647

  13. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Beal, R. C.; Tilley, D. G.

    1981-01-01

    The impulse response of the SAR system is not a delta function and the spectra represent the product of the underlying image spectrum with the transform of the impulse response which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction of the effects of speckle is effected by the subtraction of a bias from the spectrum.

  14. Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter

    NASA Astrophysics Data System (ADS)

    Ouerhani, Y.; Jridi, M.; Alfalou, A.; Brosseau, C.

    2013-02-01

    The key outcome of this work is to propose and validate a fast and robust correlation scheme for face recognition applications. The robustness of this fast correlator is ensured by an adapted pre-processing step for the target image allowing us to minimize the impact of its (possibly noisy and varying) amplitude spectrum information. A segmented composite filter is optimized, at the very outset of its fabrication, by weighting each reference with a specific coefficient which is proportional to its occurrence probability. A hierarchical classification procedure (called a two-level decision tree learning approach) is also used in order to speed up the recognition procedure. Experimental results validating our approach are obtained with a prototype based on a GPU implementation of the all-numerical correlator using the NVIDIA GeForce 8400GS processor and test samples from the Pointing Head Pose Image Database (PHPID); e.g., true recognition rates larger than 85% with a run time lower than 120 ms have been obtained using fixed images from the PHPID, and true recognition rates larger than 77% using a real video sequence at 2 frames per second when the database contains 100 persons. Moreover, it has been shown experimentally that a more recent GPU processor such as the NVIDIA Quadro FX 770M can perform recognition at 4 frames per second with a database of the same size.

  15. Long-range high spatial resolution optical frequency-domain reflectometry based on optimized deskew filter method

    NASA Astrophysics Data System (ADS)

    Ding, Zhenyang; Du, Yang; Liu, Tiegen; Yao, X. Steve; Feng, Bowen; Liu, Kun; Jiang, Junfeng

    2014-11-01

    We present a long-range, high-spatial-resolution optical frequency-domain reflectometry (OFDR) based on an optimized deskew filter method. In the proposed method, the frequency-tuning nonlinear phase obtained from an auxiliary interferometer is used to compensate the nonlinear phase of the beat signals generated by the main OFDR interferometer using a deskew filter. The method can be applied to the entire spatial domain of the OFDR signals at once with high computational efficiency. In addition, we apply higher-order Taylor expansion and cepstrum analysis to improve the estimation accuracy of the nonlinear phase. We experimentally achieve a measurement range of 80 km and spatial resolutions of 20 cm and 80 cm at distances of 10 km and 80 km, respectively, about a 187-fold enhancement compared with the same OFDR trace without nonlinearity compensation. The improved performance of the OFDR, with high spatial resolution, long measurement range and short processing time, will lead to practical applications in real-time monitoring and measurement of optical fiber communication and sensing systems.

  16. An optimized DSP implementation of adaptive filtering and ICA for motion artifact reduction in ambulatory ECG monitoring.

    PubMed

    Berset, Torfinn; Geng, Di; Romero, Iñaki

    2012-01-01

    Noise from motion artifacts is currently one of the main challenges in the field of ambulatory ECG recording. To address this problem, we propose the use of two different approaches. First, an adaptive filter with the electrode-skin impedance as a reference signal is described. Secondly, a multi-channel ECG algorithm based on Independent Component Analysis is introduced. Both algorithms have been designed and further optimized for real-time operation embedded in a dedicated Digital Signal Processor. We show that both algorithms improve the performance of a beat detection algorithm when applied in high noise conditions. In addition, an efficient way of choosing between these methods is suggested with the aim of reducing the overall system power consumption. PMID:23367417
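    The first approach, an adaptive filter driven by a reference signal, can be sketched as a standard LMS canceller; the signals and coupling path below are synthetic stand-ins for the ECG and the electrode-skin impedance, not the paper's DSP implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
clean = np.sin(2 * np.pi * 7 * np.arange(n) / 500)   # stand-in "ECG"
ref = rng.standard_normal(n)                          # reference channel
coupling = np.array([0.5, 0.3, 0.1])                  # unknown artifact path
artifact = np.convolve(ref, coupling)[:n]
measured = clean + artifact

taps, mu = 5, 0.01
w = np.zeros(taps)
cleaned = np.zeros(n)
for i in range(taps - 1, n):
    x = ref[i - taps + 1:i + 1][::-1]   # most recent reference samples first
    e = measured[i] - w @ x             # error = artifact-free estimate
    w += 2.0 * mu * e * x               # LMS weight update
    cleaned[i] = e

# After convergence the residual should track the clean signal
mse_before = np.mean((measured[-500:] - clean[-500:]) ** 2)
mse_after = np.mean((cleaned[-500:] - clean[-500:]) ** 2)
print(mse_before, mse_after)
```

    Because the reference is correlated with the artifact but not with the cardiac signal, the filter converges toward the coupling path and subtracts only the artifact.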

  17. Optimized, one-step, recovery-enrichment broth for enhanced detection of Listeria monocytogenes in pasteurized milk and hot dogs.

    PubMed

    Knabel, Stephen J

    2002-01-01

    A one-step, recovery-enrichment broth, optimized Penn State University (oPSU) broth, was developed to consistently detect low levels of injured and uninjured Listeria monocytogenes cells in ready-to-eat foods. The oPSU broth contains special selective agents that inhibit growth of background flora without inhibiting recovery of injured Listeria cells. After recovery in the anaerobic section of oPSU broth, Listeria cells migrated to the surface, forming a black zone. This migration separated viable from nonviable cells and the food matrix, thereby reducing inhibitors that prevent detection by molecular methods. The high Listeria-to-background ratio in the black zone resulted in consistent detection of low levels of L. monocytogenes in pasteurized foods by both cultural and molecular methods, and greatly reduced both false-negative and false-positive results. oPSU broth does not require transfer to a secondary enrichment broth, making it less laborious and less subject to external contamination than 2-step enrichment protocols. Addition of 150 mM D-serine prevented germination of Bacillus spores, but not the growth of vegetative cells. Replacement of D-serine with 12 mg/L acriflavin inhibited growth of vegetative cells of Bacillus spp. without inhibiting recovery of injured Listeria cells. oPSU broth may allow consistent detection of low levels of injured and uninjured cells of L. monocytogenes in pasteurized foods containing various background microflora. PMID:11990038

  18. Shuttle filter study. Volume 1: Characterization and optimization of filtration devices

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plan for space shuttle fluid systems.

  19. Designing spectrum-splitting dichroic filters to optimize current-matched photovoltaics.

    PubMed

    Miles, Alexander; Cocilovo, Byron; Wheelwright, Brian; Pan, Wei; Tweet, Doug; Norwood, Robert A

    2016-03-10

    We have developed an approach for designing a dichroic coating to optimize performance of current-matched multijunction photovoltaic cells while diverting unused light. By matching the spectral responses of the photovoltaic cells and current matching them, substantial improvement to system efficiencies is shown to be possible. A design for use in a concentrating hybrid solar collector was produced by this approach, and is presented. Materials selection, design methodology, and tilt behavior on a curved substrate are discussed. PMID:26974772
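    Why current matching matters can be seen from the series-connection constraint; the per-junction currents below are purely illustrative:

```python
# Series-connected junctions conduct only the smallest subcell photocurrent,
# so a dichroic split tailored to balance the currents raises the usable
# output. Numbers are invented for the sketch.
unmatched = [14.2, 11.0, 13.1]   # per-junction currents, mA/cm^2, poor split
matched = [12.7, 12.6, 12.8]     # after re-tailoring the filter passbands

print(min(unmatched), min(matched))   # series current before vs. after
```

    Designing the coating's passband edges is thus an exercise in equalizing the spectrally integrated response of each junction while diverting the remainder.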

  20. Drying process optimization for an API solvate using heat transfer model of an agitated filter dryer.

    PubMed

    Nere, Nandkishor K; Allen, Kimberley C; Marek, James C; Bordawekar, Shailendra V

    2012-10-01

    Drying an early stage active pharmaceutical ingredient candidate required excessively long cycle times in a pilot plant agitated filter dryer. The key to faster drying is to ensure sufficient heat transfer and minimize mass transfer limitations. Designing the right mixing protocol is of utmost importance to achieve efficient heat transfer. To this end, a composite model was developed for the removal of bound solvent that incorporates models for heat transfer and desolvation kinetics. The proposed heat transfer model differs from previously reported models in two respects: it accounts for the effects of a gas gap between the vessel wall and the solids on the overall heat transfer coefficient, and of headspace pressure on the mean free path length of the inert gas and thereby on the heat transfer between the vessel wall and the first layer of solids. A computational methodology was developed incorporating the effects of mixing and headspace pressure to simulate the drying profile using a modified model framework within the Dynochem software. A dryer operational protocol was designed based on the desolvation kinetics, thermal stability studies of the wet and dry cake, and the understanding gained through model simulations, resulting in a multifold reduction in drying time. PMID:22753308
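    The gas-gap effect the model accounts for can be illustrated as thermal resistances in series, 1/U = 1/h_w + d/k_gas; the coefficients below are illustrative, not the paper's parameters:

```python
h_w = 500.0      # wall-side film coefficient, W/(m^2 K), assumed value
k_gas = 0.026    # thermal conductivity of nitrogen, W/(m K)

# Overall wall-to-bed coefficient for a few gap thicknesses (metres)
U = {}
for d_gap in (0.0, 1e-4, 5e-4):
    U[d_gap] = 1.0 / (1.0 / h_w + d_gap / k_gas)
    print(d_gap, round(U[d_gap], 1))
```

    Even a sub-millimetre gas gap dominates the series resistance, which is why agitation protocols that keep the cake in contact with the wall shorten drying substantially.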

  1. Bounds on the performance of particle filters

    NASA Astrophysics Data System (ADS)

    Snyder, C.; Bengtsson, T.

    2014-12-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time tk depend only on the observations at tk and the particles and weights at tk-1), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at tk, but also over draws from the joint proposal for tk-1 and tk. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction will require very large numbers of particles.
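    The weight-degeneracy argument can be reproduced in miniature: particles drawn from N(0, I_d) and weighted by a unit-variance likelihood at the origin, with the effective sample size (ESS) collapsing as the dimension grows (a standard toy setup, not the authors' experiment):

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles = 1000

def ess(d):
    """Effective sample size of importance weights in dimension d."""
    x = rng.standard_normal((n_particles, d))      # particles from the prior
    logw = -0.5 * np.sum(x ** 2, axis=1)           # log importance weights
    w = np.exp(logw - logw.max())                  # stabilized exponentiation
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

results = {d: ess(d) for d in (1, 10, 100)}
print(results)
```

    The collapse of ESS with dimension mirrors the exponential particle-count requirement quoted in the abstract.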

  2. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  3. Metrics For Comparing Plasma Mass Filters

    SciTech Connect

    Abraham J. Fetterman and Nathaniel J. Fisch

    2012-08-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  4. Metrics for comparing plasma mass filters

    SciTech Connect

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  5. Production of algal biodiesel from marine macroalgae Enteromorpha compressa by two step process: optimization and kinetic study.

    PubMed

    Suganya, Tamilarasan; Nagendra Gandhi, Nagarajan; Renganathan, Sahadevan

    2013-01-01

    In this investigation, Enteromorpha compressa algal oil with a high free fatty acid (FFA) content was used as a feedstock for biodiesel production. A two-step process was developed, and a kinetic study was executed to obtain the reaction rate constant for the transesterification reaction. Acid esterification was carried out to reduce the FFA content from 6.3% to 0.34% with optimized parameters of 1.5% H2SO4, a 12:1 methanol-to-oil ratio, 400 rpm at 60 °C, and 90 min of reaction time. The maximum biodiesel yield of 90.6% was achieved by base transesterification under optimum conditions of 1% NaOH, a 9:1 methanol-to-oil ratio, 600 rpm, and 60 °C for 70 min. The algal biodiesel was characterized by GC-MS, HPLC and NIR. The transesterification follows first-order reaction kinetics, and the activation energy was determined to be 73,154.89 J/mol. The biodiesel properties were analyzed and found to be within the limits of the American standards. Hence, E. compressa serves as a valuable renewable raw material for biodiesel production. PMID:23201520
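As a numerical aside, the Arrhenius analysis behind an activation-energy estimate like the one above can be sketched in a few lines. The rate constants and temperatures below are hypothetical illustrations, not values from the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def activation_energy(k1, T1, k2, T2):
    """Two-point Arrhenius estimate: ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)

# Hypothetical first-order rate constants (1/min) at 40 °C and 60 °C:
Ea = activation_energy(k1=0.010, T1=313.15, k2=0.025, T2=333.15)
print(f"Ea = {Ea:.0f} J/mol")
```

Given rate constants measured at two temperatures, the slope of ln k versus 1/T yields the activation energy directly.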

  6. An optimal modeling of multidimensional wave digital filtering network for free vibration analysis of symmetrically laminated composite FSDT plates

    NASA Astrophysics Data System (ADS)

    Tseng, Chien-Hsun

    2015-02-01

    The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling-wave formulation of lumped electrical elements, is successfully applied to the dynamic response of symmetrically laminated composite plates based on first-order shear deformation theory. The approach, applied for the first time to laminate mechanics, integrates principles of modeling and simulation, circuit theory, and MD digital signal processing, and offers a variety of attractive features. In particular, the conservation of passivity gives rise to a nonlinear programming problem (NLP) governing the numerical stability of the MD discrete system. By adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly exploring the solution space of NLP models, numerical stability of the MDWDF network is maintained at all times through satisfaction of the Courant-Friedrichs-Lewy stability criterion with the least restriction. In particular, the optimum of the NLP leads to an optimal network that effectively and accurately predicts the desired fundamental frequency, and the distribution of system energies gives insight into the robustness of the network. To further explore the optimum network, additional numerical examples are presented to develop a qualitative understanding of the behavior of the laminar system, investigating the effects of different stacking sequences, stiffness and span-to-thickness ratios, mode shapes, and boundary conditions. Results are scrupulously validated by cross-referencing with earlier published works, which shows that the present method is in excellent agreement with other numerical and analytical methods.

  7. On the difficulty to optimally implement the Ensemble Kalman filter: An experiment based on many hydrological models and catchments

    NASA Astrophysics Data System (ADS)

    Thiboult, A.; Anctil, F.

    2015-10-01

    Forecast reliability and accuracy are prerequisites for successful hydrological applications. This aim may be attained by using data assimilation techniques such as the popular Ensemble Kalman filter (EnKF). Despite its recognized capacity to enhance forecasting by creating a new set of initial conditions, implementation tests have mostly been carried out with a single model and few catchments, leading to case-specific conclusions. This paper performs extensive testing to assess ensemble bias and reliability on 20 conceptual lumped models and 38 catchments in the Province of Québec with perfect meteorological forecast forcing. The study confirms that the EnKF is a powerful tool for short-range forecasting but also that it requires a more subtle setting than is frequently recommended. The success of the updating procedure depends to a great extent on the specification of the hyper-parameters. In the implementation of the EnKF, the identification of the hyper-parameters is very unintuitive if the model error is not explicitly accounted for, and best estimates of forcing and observation error lead to overconfident forecasts. It is shown that performance is also related to the choice of updated state variables and that not all state variables should systematically be updated. Additionally, the improvement over the open-loop scheme depends on the watershed and hydrological model structure, as some models exhibit poor compatibility with EnKF updating. Thus, it is not possible to identify a single ideal implementation in detail; conclusions drawn from a unique event, catchment, or model are likely to be misleading, since transferring hyper-parameters from one case to another may be hazardous. Finally, achieving reliability and bias jointly is a daunting challenge, as the optimization of one score comes at the cost of the other.

  8. Detection and measurement of rheumatoid bone and joint lesions of fingers by tomosynthesis: a phantom study for reconstruction filter setting optimization.

    PubMed

    Ono, Yohei; Kamishima, Tamotsu; Yasojima, Nobutoshi; Tamura, Kenichi; Tsutsumi, Kaori

    2016-01-01

    Rheumatoid arthritis (RA) is a systemic disease caused by autoimmunity. RA causes synovial proliferation, which may result in bone erosion and joint space narrowing in the affected joint. Tomosynthesis is a promising modality which may detect early bone lesions such as small bone erosions and slight joint space narrowing. Nevertheless, the optimal reconstruction filter for the detection of early bone lesions of fingers on tomosynthesis has not yet been established. Our purpose in this study was to determine an optimal reconstruction filter setting by using a bone phantom. We obtained tomosynthesis images of a cylindrical phantom with holes simulating bone erosions (diameters of 0.6, 0.8, 1.0, 1.2, and 1.4 mm) and of joint spaces formed by aligning two phantoms (space widths from 0.5 to 5.0 mm with 0.5 mm intervals), examining six reconstruction filters. We carried out an accuracy test of the bone erosion size and joint space width, performed by one radiological technologist, and a test to assess the visibility of bone erosion, performed by five radiological technologists. No statistically significant difference was observed in the measured bone erosion size and joint space width among the reconstruction filters. In the visibility assessment test, the Thickness+- and Thickness-- reconstruction filters were among the best statistically in all characteristics except the signal-to-noise ratio. The Thickness+- and Thickness-- reconstruction filters may be optimal for the evaluation of RA bone lesions of small joints in tomosynthesis. PMID:26092218

  9. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.

  10. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening for abnormal glucose metabolism and estimating prevalence in the general Chinese population, a population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. The screening potential of FPG, the cost per case identified by the two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using the 2003 ADA criteria or the 1999 WHO criteria, respectively. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy 261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using the 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; 110 or 258), 5.0 mmol/l for pre-diabetes using the 1999 WHO criteria (0.78; 66.8%; 399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; 502). Using the two-step strategy, the underestimates of prevalence were reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in the Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic conditions. PMID:25785585
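The cut-off selection underlying studies like this one can be sketched as follows. This toy example uses hypothetical FPG distributions (not the Harbin data) and scans candidate cut-offs for the one maximizing Youden's J = sensitivity + specificity - 1, one common optimality criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical FPG values (mmol/L); the OGTT serves as the reference standard.
fpg_neg = rng.normal(5.0, 0.5, 500)   # non-diabetic participants
fpg_pos = rng.normal(6.5, 0.8, 100)   # diabetic participants
values = np.concatenate([fpg_neg, fpg_pos])
labels = np.concatenate([np.zeros(500), np.ones(100)])

best_cut, best_j = None, -1.0
for cut in np.arange(4.5, 7.01, 0.1):
    pred = values >= cut                     # flagged for second-step OGTT
    sens = np.mean(pred[labels == 1])        # true-positive rate
    spec = np.mean(~pred[labels == 0])       # true-negative rate
    j = sens + spec - 1                      # Youden's J statistic
    if j > best_j:
        best_cut, best_j = cut, j
print(best_cut, best_j)
```

A real analysis would also weigh the cost per case identified, as the abstract does, rather than sensitivity and specificity alone.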

  11. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
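A minimal sketch of the linear-programming idea for minimax FIR design, assuming a type-I linear-phase filter and SciPy's `linprog` (the paper's exact formulation, band specifications, and frequency-sampling structure are not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

M = 10                                   # half-order; filter length 2*M + 1
wp, ws = 0.3 * np.pi, 0.5 * np.pi        # assumed passband/stopband edges
grid_p = np.linspace(0, wp, 60)          # passband grid, desired response 1
grid_s = np.linspace(ws, np.pi, 60)      # stopband grid, desired response 0
grid = np.concatenate([grid_p, grid_s])
desired = np.concatenate([np.ones_like(grid_p), np.zeros_like(grid_s)])

# Amplitude of a type-I linear-phase FIR: A(w) = a0 + sum_k a_k cos(k w)
C = np.cos(np.outer(grid, np.arange(M + 1)))

# LP variables [a0..aM, delta]: minimize delta subject to |A(w) - D(w)| <= delta
c = np.r_[np.zeros(M + 1), 1.0]
ones = np.ones((len(grid), 1))
A_ub = np.vstack([np.hstack([C, -ones]),     # A(w) - D(w) <= delta
                  np.hstack([-C, -ones])])   # D(w) - A(w) <= delta
b_ub = np.r_[desired, -desired]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (M + 1) + [(0, None)])
print("max ripple:", res.x[-1])
```

The LP returns an equiripple approximation on the sampled grid; time-domain constraints (the paper's focus) enter the same framework as additional linear inequalities on the coefficients.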

  12. Reducing radiation dose by application of optimized low-energy x-ray filters to K-edge imaging with a photon counting detector.

    PubMed

    Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung

    2016-01-21

    K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in the K-edge imaging and are likely to be completely absorbed by an object. In this study, we applied x-ray filters to the K-edge imaging with a PCXD based on cadmium zinc telluride for reducing radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented the iodine K-edge imaging with an energy bin of 34–48 keV at the tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest value of SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased as a function of the filter thicknesses. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, which was determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in the K-edge imaging with the PCXD. PMID:26733235

  13. Reducing radiation dose by application of optimized low-energy x-ray filters to K-edge imaging with a photon counting detector

    NASA Astrophysics Data System (ADS)

    Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung

    2016-01-01

    K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in the K-edge imaging and are likely to be completely absorbed by an object. In this study, we applied x-ray filters to the K-edge imaging with a PCXD based on cadmium zinc telluride for reducing radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented the iodine K-edge imaging with an energy bin of 34–48 keV at the tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest value of SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased as a function of the filter thicknesses. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, which was determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in the K-edge imaging with the PCXD.
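The filter-selection logic can be sketched as follows. The SDNR and ESAK numbers below are invented for illustration, and the FOM is assumed to take the common dose-efficiency form FOM = SDNR² / ESAK; the abstract states only that the FOM accounts for both quantities:

```python
# Hypothetical SDNR and ESAK (relative units) per Al filter thickness in mm.
# Adding filtration lowers both signal and dose; the FOM trades them off.
sdnr = {2: 10.0, 4: 9.5, 6: 9.0, 8: 8.6, 10: 8.1, 12: 7.5}
esak = {2: 1.00, 4: 0.80, 6: 0.68, 8: 0.60, 10: 0.55, 12: 0.52}

fom = {t: sdnr[t] ** 2 / esak[t] for t in sdnr}   # assumed FOM = SDNR^2 / ESAK
best = max(fom, key=fom.get)                       # thickness maximizing FOM
print(best, fom[best])
```

With these illustrative numbers the FOM peaks at an intermediate thickness: the dose saving from added filtration initially outpaces the SDNR loss, then stops doing so.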

  14. Optimal cut-off points for two-step strategy in screening of undiagnosed diabetes: a population-based study in China.

    PubMed

    Ye, Zhen; Cong, Liming; Ding, Gangqiang; Yu, Min; Zhang, Xinwei; Hu, Ruying; Wu, Jianjun; Fang, Le; Wang, Hao; Zhang, Jie; He, Qingfang; Su, Danting; Zhao, Ming; Wang, Lixin; Gong, Weiwei; Xiao, Yuanyuan; Liang, Mingbin; Pan, Jin

    2014-01-01

    To identify optimal cut-off points of fasting plasma glucose for a two-step strategy in screening of undiagnosed diabetes in Chinese people, data were selected from two cross-sectional studies of Metabolic Syndrome in Zhejiang Province of China, the Zhejiang Statistical Yearbook (2010), and the published literature. The two-step strategy was used among 17437 subjects sampled from the population to screen for undiagnosed diabetes. Effectiveness (proportion of cases identified), costs (including medical and non-medical costs), and efficiency (cost per case identified) of the different two-step screening strategies were evaluated. This study found that the sensitivities of all the two-step screening strategies with a further Oral Glucose Tolerance Test (OGTT) at different Fasting Plasma Glucose (FPG) cut-off points from 5.0 to 7.0 mmol/L ranged from 0.66 to 0.91. For the FPG cut-off point of 5.0 mmol/L, 91 percent of undiagnosed cases were identified. The total cost of detecting one undiagnosed diabetes case ranged from 547.1 to 1294.5 CNY/case, and the strategy with an FPG cut-off point of 6.1 mmol/L resulted in the least cost. Considering both the sensitivity and the cost of screening, an FPG cut-off point of 5.4 mmol/L was optimal for the two-step strategy. In conclusion, different optimal cut-off points of FPG for the two-step strategy in screening of undiagnosed diabetes should be used for different screening purposes. PMID:24609110

  15. Optimal Cut-Off Points for Two-Step Strategy in Screening of Undiagnosed Diabetes: A Population-Based Study in China

    PubMed Central

    Ye, Zhen; Cong, Liming; Ding, Gangqiang; Yu, Min; Zhang, Xinwei; Hu, Ruying; Wu, Jianjun; Fang, Le; Wang, Hao; Zhang, Jie; He, Qingfang; Su, Danting; Zhao, Ming; Wang, Lixin; Gong, Weiwei; Xiao, Yuanyuan; Liang, Mingbin; Pan, Jin

    2014-01-01

    To identify optimal cut-off points of fasting plasma glucose for a two-step strategy in screening of undiagnosed diabetes in Chinese people, data were selected from two cross-sectional studies of Metabolic Syndrome in Zhejiang Province of China, the Zhejiang Statistical Yearbook (2010), and the published literature. The two-step strategy was used among 17437 subjects sampled from the population to screen for undiagnosed diabetes. Effectiveness (proportion of cases identified), costs (including medical and non-medical costs), and efficiency (cost per case identified) of the different two-step screening strategies were evaluated. This study found that the sensitivities of all the two-step screening strategies with a further Oral Glucose Tolerance Test (OGTT) at different Fasting Plasma Glucose (FPG) cut-off points from 5.0 to 7.0 mmol/L ranged from 0.66 to 0.91. For the FPG cut-off point of 5.0 mmol/L, 91 percent of undiagnosed cases were identified. The total cost of detecting one undiagnosed diabetes case ranged from 547.1 to 1294.5 CNY/case, and the strategy with an FPG cut-off point of 6.1 mmol/L resulted in the least cost. Considering both the sensitivity and the cost of screening, an FPG cut-off point of 5.4 mmol/L was optimal for the two-step strategy. In conclusion, different optimal cut-off points of FPG for the two-step strategy in screening of undiagnosed diabetes should be used for different screening purposes. PMID:24609110

  16. Two-step biodiesel production from Calophyllum inophyllum oil: optimization of modified β-zeolite catalyzed pre-treatment.

    PubMed

    SathyaSelvabala, Vasanthakumar; Selvaraj, Dinesh Kirupha; Kalimuthu, Jalagandeeswaran; Periyaraman, Premkumar Manickam; Subramanian, Sivanesan

    2011-01-01

    In this study, a two-step process was developed to produce biodiesel from Calophyllum inophyllum oil. Pre-treatment by acid-catalyzed esterification with phosphoric acid modified β-zeolite was followed by transesterification using the conventional alkali catalyst potassium hydroxide (KOH). The objective of this study is to investigate the relationship between the reaction temperature, reaction time, and methanol-to-oil molar ratio in the pre-treatment step. Central Composite Design (CCD) and Response Surface Methodology (RSM) were utilized to determine the best operating conditions for the pre-treatment step. Biodiesel produced by this process was tested for its fuel properties. PMID:20833536

  17. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since then, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed, with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers, and adaptive approaches are presented. The strengths and weaknesses of the various approaches are discussed.

  18. Optimizing mini-ridge filter thickness to reduce proton treatment times in a spot-scanning synchrotron system

    SciTech Connect

    Courneyea, Lorraine; Beltran, Chris Tseung, Hok Seum Wan Chan; Yu, Juan; Herman, Michael G.

    2014-06-15

    Purpose: To study the contributors to treatment time as a function of Mini-Ridge Filter (MRF) thickness to determine the optimal choice for breath-hold treatment of lung tumors in a synchrotron-based spot-scanning proton machine. Methods: Five different spot-scanning nozzles were simulated in TOPAS: four with MRFs of varying maximal thicknesses (6.15–24.6 mm) and one with no MRF. The MRFs were designed with ridges aligned along orthogonal directions transverse to the beam, with the number of ridges (4–16) increasing with MRF thickness. The material thickness given by these ridges approximately followed a Gaussian distribution. Using these simulations, Monte Carlo data were generated for treatment planning commissioning. For each nozzle, standard and stereotactic (SR) lung phantom treatment plans were created and assessed for delivery time and plan quality. Results: Use of an MRF resulted in a reduction of the number of energy layers needed in treatment plans, decreasing the number of synchrotron spills needed and hence the treatment time. For standard plans, the treatment time per field without an MRF was 67.0 ± 0.1 s, whereas three of the four MRF plans had treatment times of less than 20 s per field, considered sufficiently low for a single breath-hold. For SR plans, the shortest treatment time achieved was 57.7 ± 1.9 s per field, compared to 95.5 ± 0.5 s without an MRF. There were diminishing gains in time reduction as the MRF thickness increased. Dose uniformity of the PTV was comparable across all plans; however, when the plans were normalized to have the same coverage, dose conformality decreased with MRF thickness, as measured by the lung V20%. Conclusions: Single breath-hold treatment times for plans with standard fractionation can be achieved through the use of an MRF, making this a viable option for motion mitigation in lung tumors. For stereotactic plans, while an MRF can reduce treatment times, multiple breath-holds would still be necessary due to the limit imposed by the proton extraction time. To balance treatment time and normal tissue dose, the ideal MRF choice was shown to be the thinnest option that is able to achieve the desired breath-hold timing.

  19. Thickness optimization of drilling fluid filter cakes for cement slurry filtrate control and long-term zonal isolation

    SciTech Connect

    Griffith, J.E.; Osisanya, S.

    1995-12-31

    In this paper, the long-term isolation characteristics of two typical filter-cake systems in a gas or water environment are investigated. The test models were designed to measure the sealing capability of a premium cement and filter-cake system used to prevent hydraulic communication at a permeable-nonpermeable boundary. The test models represented the area of a sandstone/shale layer in an actual well. In a real well, sandstone is a water- or gas-bearing formation, and sealing the annulus at the shale formation would prevent hydraulic communication to an upper productive zone. To simulate these conditions, the test models remained in a gas or water environment at either 80 or 150 °F for periods of 3, 4, 30, and 90 days before the hydraulic isolation measurements were conducted. Models without filter cake, consisting of 100% cement, were tested for zonal isolation with the filter-cake models to provide reference points. These results show how critical filter-cake removal is to the long-term sealing of the cemented annulus. Results indicate that complete removal of the filter cake provides the greatest resistance to fluid communication in most of the cases studied.

  20. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    PubMed

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in the multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential application of the SMART approach across different instrumental platforms. The approach was further validated by application to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. PMID:26803003
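The core of the CE-selection step can be sketched as follows: for each precursor/product ion pair, take the collision energy whose scan yields the largest extracted-ion intensity. The breakdown-curve shape and all numbers below are simulated for illustration, not SMART data:

```python
import numpy as np

rng = np.random.default_rng(2)

ce_steps = np.arange(10, 55, 5)   # collision energies (eV) stepped low to high

def optimal_ce(intensities, ce_steps):
    """Pick the CE whose scan gave the largest extracted-ion intensity."""
    return ce_steps[int(np.argmax(intensities))]

# Simulate one ion pair's breakdown curve peaking near 30 eV, plus noise:
true_opt = 30.0
intens = np.exp(-((ce_steps - true_opt) / 10.0) ** 2) \
         + rng.normal(0, 0.02, len(ce_steps))
best = optimal_ce(intens, ce_steps)
print(best)
```

In practice this selection runs per transition over the serial sMS(All) scans, so hundreds of ion pairs can be optimized from a single acquisition.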

  1. Variational Particle Filter for Imperfect Models

    NASA Astrophysics Data System (ADS)

    Baehr, C.

    2012-12-01

    Whereas classical data processing techniques work with perfect models, geophysical sciences have to deal with imperfect models with spatially structured errors. For the perfect-model case, in terms of mean-field Markovian processes, the optimal filter is known: the Kalman estimator is the answer to the linear Gaussian problem, and in the general case particle approximations are the empirical solutions to the optimal estimator. We will present another way to decompose the Bayes rule, using a one-step-ahead observation. This method is well adapted to strongly nonlinear or chaotic systems. Then, in order to deal with an imperfect model, we suggest in this presentation to learn the (large-scale) model errors using a variational correction before the resampling step of the nonlinear filtering. This procedure replaces the a priori Markovian transition by a kernel conditioned on the observations. This supplementary step may be read as the use of a variational particle approximation. For the numerical applications, we show the impact of our method first on a simple marked Poisson process with Gaussian observation noises (the time-exponential jumps are considered as model errors) and then on a 2D shallow-water experiment in a closed basin, with falling droplets as model errors. Figure captions: (1) Marked Poisson process with Gaussian observation noise filtered by four methods (classical Kalman filter, genetic particle filter, trajectorial particle filter, and Kalman-particle filter), all using only 10 particles. (2) 2D shallow-water simulation with droplet errors: results of a classical 3DVAR and of our VarPF (10 particles).
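As background to the resampling step mentioned above, here is a minimal sketch of systematic resampling, a standard low-variance scheme; the presentation's variational correction is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def systematic_resample(weights):
    """Systematic resampling: one uniform draw placed on a stratified grid,
    so particle i is copied floor(n*w_i) or ceil(n*w_i) times."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize defensively
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

w = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
idx = systematic_resample(w)                     # indices of surviving particles
print(idx)
```

Heavily weighted particles are duplicated and lightly weighted ones dropped, after which all weights are reset to 1/n; the scheme's single uniform draw gives lower Monte Carlo variance than multinomial resampling.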

  2. Filter and method of fabricating

    DOEpatents

    Janney, Mark A.

    2006-02-14

    A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.

  3. Development of an optimal automatic control law and filter algorithm for steep glideslope capture and glideslope tracking

    NASA Technical Reports Server (NTRS)

    Halyo, N.

    1976-01-01

    A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant gain Kalman filter to process guidance information from the microwave landing system, and acceleration from body mounted accelerometer data. The filter outputs navigation data and wind velocity estimates which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.
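A constant-gain Kalman filter of the kind described can be sketched in a few lines. The dynamics model, gain, and measurements below are illustrative values, not those of the report:

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])     # constant-velocity dynamics (position, velocity)
H = np.array([[1.0, 0.0]])          # position-only measurement
K = np.array([[0.5], [1.0]])        # precomputed steady-state gain (assumed value)

def step(x, z):
    """One predict/update cycle with a fixed gain: no covariance propagation online."""
    x_pred = F @ x
    return x_pred + K @ (z - H @ x_pred)

x = np.array([[0.0], [0.0]])
for z in [0.1, 0.25, 0.4, 0.55]:    # hypothetical noisy position measurements
    x = step(x, np.array([[z]]))
print(x.ravel())
```

Because the gain is fixed offline at its steady-state value, the onboard computation reduces to two small matrix-vector products per sample, which is the practical appeal of a constant-gain design.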

  4. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    SciTech Connect

    Omelyan, Igor (E-mail: omelyan@icmp.lviv.ua), Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8, and Institute for Condensed Matter Physics, National Academy of Sciences of Ukraine, 1 Svientsitskii Street, Lviv 79011; Kovalenko, Andriy, Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8

    2013-12-28

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps.
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for flip-flop conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition.
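
    The core idea of multiple time step integration, evaluating the expensive force only at outer steps, can be sketched with a RESPA-style splitting. The toy 1D forces below stand in for the cheap intramolecular and the expensive solvation terms; none of this is the paper's OIN/ASFE machinery:

```python
import numpy as np

# RESPA-style multiple-time-step integrator sketch: the cheap "fast"
# force is evaluated every inner step, the expensive "slow" force
# (standing in for the 3D-RISM-KH solvation force) only at outer steps.
def fast_force(x):
    return -4.0 * x          # stiff harmonic term (cheap)

def slow_force(x):
    return -0.1 * x          # soft term (expensive in the real method)

def respa_step(x, v, dt_outer, n_inner):
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x)   # half kick with slow force
    for _ in range(n_inner):              # inner velocity-Verlet loop
        v += 0.5 * dt * fast_force(x)
        x += dt * v
        v += 0.5 * dt * fast_force(x)
    v += 0.5 * dt_outer * slow_force(x)   # closing half kick
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt_outer=0.1, n_inner=10)
# total energy (fast potential 2*x^2, slow potential 0.05*x^2)
energy = 0.5 * v**2 + 2.0 * x**2 + 0.05 * x**2
```

    The symplectic splitting keeps the energy bounded near its initial value of 2.05 even though the slow force is only touched once per outer step; the paper's extrapolation and isokinetic thermostat push the same idea to far larger outer steps.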

  5. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    NASA Astrophysics Data System (ADS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-12-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps.
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for "flip-flop" conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition.

  6. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: accelerating with advanced extrapolation of effective solvation forces.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2013-12-28

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps.
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for "flip-flop" conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition. PMID:24387356

  7. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    SciTech Connect

    Kao, Jim (E-mail: kao@lanl.gov); Flicker, Dawn; Ide, Kayo; Ghil, Michael

    2006-05-20

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. Model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are subject only to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data with the corresponding values generated from the model, and lies within a small range, of less than 2%, of the original model's parameter values. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
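
    The augmented-state trick described here is easy to sketch: append the unknown parameter to the state vector, drive it by noise only, and let the EKF update state and parameter jointly. A toy scalar example with a hypothetical model (not the shock-wave hydrodynamic code):

```python
import numpy as np

# Joint state-parameter estimation via an augmented-state EKF on a toy
# model x_k = a*x_{k-1} + w_k, observing y_k = x_k + v_k. The unknown
# parameter `a` is appended to the state and driven by noise only.
rng = np.random.default_rng(1)
a_true, q, r, T = 0.8, 0.05, 0.1, 500
x = 0.0
ys = []
for _ in range(T):
    x = a_true * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

z = np.array([0.0, 0.5])              # augmented state [x, a]; poor a guess
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-5])                # parameter gets only small noise
H = np.array([[1.0, 0.0]])            # we observe x only
for y in ys:
    F = np.array([[z[1], z[0]],       # Jacobian of f(x, a) = (a*x, a)
                  [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]]) # predict augmented state
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r               # innovation variance
    K = (P @ H.T) / S                 # Kalman gain (2x1)
    z = z + (K * (y - z[0])).ravel()  # joint update of x and a
    P = (np.eye(2) - K @ H) @ P

assert abs(z[1] - a_true) < 0.2       # parameter recovered from data
```

    As in the paper, the parameter estimate is produced by the same unified assimilation operation that updates the state.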

  8. Optimization, physicochemical characterization and in vivo assessment of spray dried emulsion: A step toward bioavailability augmentation and gastric toxicity minimization.

    PubMed

    Mehanna, Mohammed M; Alwattar, Jana K; Elmaradny, Hoda A

    2015-12-30

    The limited solubility of BCS class II drugs diminishes their dissolution and thus reduces their bioavailability. Our aim in this study was to develop and optimize a spray dried emulsion containing indomethacin as a model for class II drugs, a Labrasol®/Transcutol® mixture as the oily phase, and maltodextrin as a solid carrier. The optimization was carried out using a 2³ full factorial design based on two independent variables, the percentage of carrier and the concentration of Poloxamer® 188. The effect of the studied parameters on the spray dried yield, loading efficiency and in vitro release was thoroughly investigated. Furthermore, physicochemical characterization of the optimized formulation was performed, and in vivo bioavailability, ulcerogenic capability and histopathological features were assessed. The results obtained pointed out that the Poloxamer 188 concentration in the formulation was the predominant factor affecting the dissolution release, whereas the drug loading was driven by the carrier concentration added. Moreover, the yield decreased upon increasing both independent variables studied. The optimized formulation presented a complete release within two minutes, suggesting an immediate release pattern; the formulation was also revealed to consist of uniform spherical particles with an average size of 7.5 μm, entrapping the drug in its molecular state as demonstrated by the DSC and FTIR studies. The in vivo evaluation demonstrated a 10-fold enhancement in bioavailability of the optimized formulation, with absence of the ulcerogenic side effect compared to the marketed product. The results provide evidence for the significance of spray dried emulsion as a leading strategy for improving the solubility and enhancing the bioavailability of class II drugs. PMID:26561726
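
    A two-level full factorial design simply enumerates every combination of factor levels and evaluates each run. A generic sketch with hypothetical factor names, levels, and a made-up response function (the study's actual responses were yield, loading efficiency and in vitro release):

```python
from itertools import product

# Two-level full factorial enumeration sketch; factor names, levels and
# the response function are hypothetical, for illustration only.
factors = {
    "carrier_pct":   (10.0, 30.0),   # low/high levels
    "poloxamer_pct": (0.5, 2.0),
    "feed_rate":     (1.0, 3.0),
}

def response(carrier_pct, poloxamer_pct, feed_rate):
    # stand-in objective: favors high carrier, low poloxamer, low feed
    return carrier_pct - 2.0 * poloxamer_pct - 0.5 * feed_rate

# one run per combination of levels: 2^3 = 8 runs for three factors
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
best = max(runs, key=lambda run: response(**run))
assert len(runs) == 2 ** 3
```

    Fitting a linear model to the eight responses then yields the main and interaction effects of each factor, which is how a factorial design identifies the predominant variable.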

  9. Modeling filters for formation of mono-energetic neutron beams in the research reactor IRT MEPhI and optimization of radiation shielding for liquid-xenon detector

    SciTech Connect

    Ivakhin, S. V.; Tikhomirov, G. V.; Bolozdynya, A. I.; Efremenko, Y. V.; Akimov, D. Y.; Stekhanov, V. N.

    2012-07-01

    The paper considers the formation of mono-energetic neutron beams at the entrance of experimental channels in research reactors for various applications. The problem includes the following steps: 1. A full-scale mathematical model of the IRT MEPhI research reactor was developed for numerical evaluations of neutron spectra and neutron spatial distribution in the area of the experimental channels. 2. Modeling of filters in the channel to shift the neutron spectrum towards the required mono-energetic line was performed. 3. Some characteristics of the neutron beams at the entrance of the detector were evaluated, and the filter materials were selected. The calculations were carried out with a computer code based on the high-precision Monte Carlo code MCNP. As a result, a mathematical model was created for a filter able to form a mono-energetic (24 keV) neutron beam. The study was carried out within the frame of a research project on the development of a Russian emission detector with liquid noble gas to observe rare processes of scattering of neutrinos and hypothetical dark matter particles on atomic nuclei. (authors)

  10. SU-E-T-23: A Novel Two-Step Optimization Scheme for Tandem and Ovoid (T and O) HDR Brachytherapy Treatment for Locally Advanced Cervical Cancer

    SciTech Connect

    Sharma, M; Todor, D; Fields, E

    2014-06-01

    Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4-5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan, to the existent physician-delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with excellent CTV coverage, average D90 of 96% (range 91-102), sigmoid D2cc was reduced on average by 37% (range 16-73), bladder by 28% (range 20-47) and rectum by 27% (range 15-45). Similar reductions were obtained on plans with good coverage, with an average D90 of 93% (range 90-99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4-5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
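
    The two plan-quality metrics quoted throughout this abstract, D90 and D2cc, can be computed directly from a voxelized dose distribution. A sketch with synthetic dose samples and an assumed voxel volume (not clinical data):

```python
import numpy as np

# DVH metric sketch on synthetic voxel doses (percent of prescription).
def d90(doses_pct):
    """Minimum dose received by the hottest 90% of the target volume,
    i.e. the 10th percentile of the voxel doses."""
    return np.percentile(doses_pct, 10)

def d2cc(doses_pct, voxel_cc):
    """Minimum dose within the hottest 2 cm^3 of an organ at risk."""
    n = max(1, int(round(2.0 / voxel_cc)))   # voxels making up 2 cm^3
    return np.sort(doses_pct)[-n]            # n-th hottest voxel

rng = np.random.default_rng(2)
target = rng.normal(100.0, 5.0, 10_000)      # synthetic target doses
oar = rng.normal(60.0, 10.0, 10_000)         # synthetic OAR doses
assert d90(target) < 100.0                   # coverage metric
assert d2cc(oar, voxel_cc=0.1) > 60.0        # hot-spot metric
```

    The two-step optimization described above drives d2cc of each OAR down while constraining d90 of the target to stay high.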

  11. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet scientific space mission requirements. Their payloads are composed of various instruments collecting an increasing amount of data while respecting the growing constraints on volume and mass, so small-sized integrated cameras have taken a favored place among these instruments. To ensure scene-specific color information sensing, pixelated filters seem to be more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component, based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, the matrix filters are composed of 2x2 macro pixels (i.e., 4 filters). These 4 filters have a bandwidth of about 40 nm and are respectively centered at 550, 700, 770 and 840 nm with a specific rejection rate defined over the visible spectral range [500 - 900 nm]. After an intense design step, 4 thin-film structures were elaborated with a maximum thickness of 5 μm. A run of tests allowed us to choose the optimal micro-structuration parameters. The 100x100 matrix filter prototypes have been successfully manufactured with lift-off and ion-assisted deposition processes. High spatial and spectral characterization with a dedicated metrology bench showed that the initial specifications and simulations were globally met. These excellent performances knock down the technological barriers for high-end integrated specific multispectral imaging.

  12. [Reduction of livestock-associated methicillin-resistant staphylococcus aureus (LA-MRSA) in the exhaust air of two piggeries by a bio-trickling filter and a biological three-step air cleaning system].

    PubMed

    Clauss, Marcus; Schulz, Jochen; Stratmann-Selke, Janin; Decius, Maja; Hartung, Jörg

    2013-01-01

    "Livestock-associated" methicillin-resistant Staphylococcus aureus (LA-MRSA) are frequently found in the air of piggeries, are emitted into the ambient air of the piggeries and may also drift into residential areas or surrounding animal husbandries. In order to reduce emissions from animal houses, such as odour, gases and dust, different biological air cleaning systems are commercially available. In this study the retention efficiencies for culturable LA-MRSA of a bio-trickling filter and a combined three-step system, both installed at two different piggeries, were investigated. Raw gas concentrations for LA-MRSA of 2.1 x 10(2) cfu/m3 (bio-trickling filter) and 3.9 x 10(2) cfu/m3 (three-step system) were found. The clean gas concentrations were in each case approximately one power of ten lower. Both systems were able to reduce the number of the investigated bacteria in the air of piggeries on average by about 90%. The investigated systems can thus contribute to protecting nearby residents; however, considerable fluctuations of the emissions can occur. PMID:23540196
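
    The reported ~90% reduction follows directly from the raw- and clean-gas concentrations. A one-function sketch using the abstract's figures, assuming an exact factor-of-ten reduction for the clean gas:

```python
# Retention efficiency of an air-cleaning stage from raw- and clean-gas
# bioaerosol concentrations (cfu/m^3); raw value from the abstract, and
# "approximately one power of ten lower" taken as an exact factor of 10.
def retention_efficiency(raw, clean):
    """Percentage of the raw-gas load removed by the cleaning stage."""
    return 100.0 * (1.0 - clean / raw)

raw_trickling = 2.1e2                 # bio-trickling filter, raw gas
clean_trickling = raw_trickling / 10  # assumed exact factor-of-ten drop
eff = retention_efficiency(raw_trickling, clean_trickling)
assert abs(eff - 90.0) < 1e-6         # matches the reported ~90%
```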

  13. Unconditionally energy stable time stepping scheme for Cahn-Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    NASA Astrophysics Data System (ADS)

    Tavakoli, Rouhollah

    2016-01-01

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
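
    Eyre-type convex splitting, the building block of the scheme described here, can be illustrated on the simpler 1D Allen-Cahn equation: treating the diffusion plus a stabilizing linear term implicitly and the remaining nonlinearity explicitly lets the free energy decrease even for very large time steps. This is a sketch of the general idea, not the paper's Cahn-Morral/Schur-complement method:

```python
import numpy as np

# Linearly stabilized convex-splitting (Eyre-type) step for the 1D
# Allen-Cahn equation u_t = eps^2*u_xx - (u^3 - u), periodic domain.
N, L, eps, S = 64, 2 * np.pi, 0.3, 2.0   # S >= max f'' gives stability
dx = L / N
# periodic second-difference (Laplacian) matrix
D = (np.roll(np.eye(N), 1, 0) - 2 * np.eye(N)
     + np.roll(np.eye(N), -1, 0)) / dx**2

def energy(u):
    # discrete Ginzburg-Landau free energy
    ux = (np.roll(u, -1) - u) / dx
    return np.sum(0.5 * eps**2 * ux**2 + 0.25 * (u**2 - 1) ** 2) * dx

def eyre_step(u, dt):
    # implicit: diffusion and the stabilizing S*u term; explicit: f(u)
    A = np.eye(N) / dt - eps**2 * D + S * np.eye(N)
    b = u / dt + S * u - (u**3 - u)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(3)
u = 0.1 * rng.standard_normal(N)          # small random initial data
e0 = energy(u)
for _ in range(50):
    u = eyre_step(u, dt=10.0)             # deliberately huge time step
assert energy(u) < e0                     # free energy still decreases
```

    Each step costs one linear solve regardless of the time step size, which is what makes the approach attractive for long spinodal-decomposition runs.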

  14. Definitive Screening Design Optimization of Mass Spectrometry Parameters for Sensitive Comparison of Filter and Solid Phase Extraction Purified, INLIGHT Plasma N-Glycans.

    PubMed

    Hecht, Elizabeth S; McCord, James P; Muddiman, David C

    2015-07-21

    High-throughput, quantitative processing of N-linked glycans would facilitate large-scale studies correlating the glycome with disease and open the field to basic and applied researchers. We sought to meet these goals by coupling filter-aided N-glycan separation (FANGS) to the individuality normalization when labeling with glycan hydrazide tags (INLIGHT) for analysis of plasma. A quantitative comparison of this method was conducted against solid phase extraction (SPE), a ubiquitous and trusted method for glycan purification. We demonstrate that FANGS-INLIGHT purification was not significantly different from SPE in terms of glycan abundances, variability, functional classes, or molecular weight distributions. Furthermore, to increase the depth of glycome coverage, we executed a definitive screening design of experiments (DOE) to optimize the MS parameters for glycan analyses. We optimized MS parameters across five N-glycan responses using a standard glycan mixture, translated these to plasma, and achieved up to a 3-fold increase in ion abundances. PMID:26086806

  15. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Three-Step Growth Optimization of AlN Epilayers by MOCVD

    NASA Astrophysics Data System (ADS)

    Peng, Ming-Zeng; Guo, Li-Wei; Zhang, Jie; Yu, Nai-Sen; Zhu, Xue-Liang; Yan, Jian-Feng; Ge, Bin-Hui; Jia, Hai-Qiang; Chen, Hong; Zhou, Jun-Ming

    2008-06-01

    A three-step growth process is developed for depositing high-quality aluminium-nitride (AlN) epilayers on (001) sapphire by low pressure metalorganic chemical vapour deposition (LP-MOCVD). We adopt a low temperature (LT) AlN nucleation layer (NL) and two high temperature (HT) AlN layers with different V/III ratios. Our results reveal that the optimal NL temperature is 840-880 °C, and that there exists a proper growth switching from low to high V/III ratio for further reducing threading dislocations (TDs). The screw-type TD density of the optimized AlN film is just 7.86 × 10^6 cm^-2, about three orders of magnitude lower than its edge-type one of 2 × 10^9 cm^-2, as estimated by high-resolution x-ray diffraction (HRXRD) and cross-sectional transmission electron microscopy (TEM).

  16. Biological/Biomedical Accelerator Mass Spectrometry Targets. 1. Optimizing the CO2 Reduction Step Using Zinc Dust

    PubMed Central

    2008-01-01

    Biological and biomedical applications of accelerator mass spectrometry (AMS) use isotope ratio mass spectrometry to quantify minute amounts of long-lived radioisotopes such as 14C. AMS target preparation involves first the oxidation of carbon (in the sample of interest) to CO2 and second the reduction of CO2 to filamentous, fluffy, fuzzy, or firm graphite-like substances that coat a −400-mesh spherical iron powder (−400MSIP) catalyst. Until now, the quality of AMS targets has been variable; consequently, they often failed to produce the robust ion currents that are required for reliable, accurate, precise, and high-throughput AMS for biological/biomedical applications. Therefore, we describe our optimized method for reduction of CO2 to high-quality uniform AMS targets, whose morphology we visualized using scanning electron microscope pictures. Key features of our optimized method were to reduce CO2 (from a sample of interest that provided 1 mg of C) using 100 ± 1.3 mg of Zn dust, 5 ± 0.4 mg of −400MSIP, and a reduction temperature of 500 °C for 3 h. The thermodynamics of our optimized method were more favorable for production of graphite-coated iron powders (GCIP) than those of previous methods. All AMS targets from our optimized method were of 100% GCIP, the graphitization yield exceeded 90%, and δ13C was −17.9 ± 0.3‰. The GCIP reliably produced strong 12C− currents and accurate and precise Fm values. The observed Fm value for the oxalic acid II NIST SRM deviated from its accepted Fm value of 1.3407 by only 0.0003 ± 0.0027 (mean ± SE, n = 32); the limit of detection of 14C was 0.04 amol, the limit of quantification was 0.07 amol, and a skilled analyst can prepare as many as 270 AMS targets per day. More information on the physical (hardness/color), morphological (SEMs), and structural (FT-IR, Raman, XRD spectra) characteristics of our AMS targets that determine accurate, precise, and high-throughput AMS measurement is in the companion paper. PMID:18785761

  17. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence with at most a linear convergence rate because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
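
    A textbook PRP conjugate-gradient iteration with Armijo backtracking and periodic restarts (the generic method, without the paper's function-value modification) can be sketched as:

```python
import numpy as np

# Generic PRP+ conjugate-gradient method with Armijo backtracking and a
# periodic restart to steepest descent; illustrative, not the modified
# PRP method of the paper.
def prp_minimize(f, grad, x0, iters=200, restart=10):
    x = x0.copy()
    g = grad(x)
    d = -g
    for k in range(iters):
        # Armijo backtracking line search
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        if k % restart == restart - 1:
            beta = 0.0                               # restart step
        else:
            beta = g_new @ (g_new - g) / (g @ g)     # PRP formula
        d = -g_new + max(beta, 0.0) * d              # PRP+ safeguard
        if g_new @ d >= 0:
            d = -g_new                               # ensure descent
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x

# strictly convex quadratic test problem with a known minimizer
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)
x = prp_minimize(f, grad, np.zeros(2))
assert np.allclose(x, x_star, atol=1e-6)
```

    The PRP beta automatically degrades to a steepest-descent step when successive gradients are similar, which is the self-restarting behavior that makes PRP popular in practice.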

  18. Optimization of pressurized liquid extraction and purification conditions for gas chromatography-mass spectrometry determination of UV filters in sludge.

    PubMed

    Negreira, N; Rodríguez, I; Rubí, E; Cela, R

    2011-01-14

    This work presents an effective sample preparation method for the determination of eight UV filter compounds, belonging to different chemical classes, in freeze-dried sludge samples. Pressurized liquid extraction (PLE) and gas chromatography-mass spectrometry (GC-MS) were selected as extraction and determination techniques, respectively. Normal-phase, reversed-phase and anionic exchange materials were tested as clean-up sorbents to reduce the complexity of raw PLE extracts. Under final working conditions, graphitized carbon (0.5 g) was used as in-cell purification sorbent for the retention of co-extracted pigments. Thereafter, a solid-phase extraction cartridge, containing 0.5 g of primary secondary amine (PSA) bonded silica, was employed for off-line removal of other interferences, mainly fatty acids, overlapping the chromatographic peaks of some UV filters. Extractions were performed with a n-hexane:dichloromethane (80:20, v:v) solution at 75°C, using a single extraction cycle of 5 min at 1500 psi. Flush volume and purge time were set at 100% and 2 min, respectively. Considering 0.5 g of sample and 1 mL as the final volume of the purified extract, the developed method provided recoveries between 73% and 112%, with limits of quantification (LOQs) from 17 to 61 ng g(-1) and a linear response range up to 10 μg g(-1). Total solvent consumption remained around 30 mL per sample. The analysis of non-spiked samples confirmed the sorption of significant amounts of several UV filters in sludge with average concentrations above 0.6 μg g(-1) for 3-(4-methylbenzylidene) camphor (4-MBC), 2-ethylhexyl-p-methoxycinnamate (EHMC) and octocrylene (OC). PMID:21144528

  19. Pretreatment based on two-step steam explosion combined with an intermediate separation of fiber cells--optimization of fermentation of corn straw hydrolysates.

    PubMed

    Zhang, Yuzhen; Fu, Xiaoguo; Chen, Hongzhang

    2012-10-01

    Pretreatment is necessary for lignocellulose to achieve highly efficient enzymatic hydrolysis and fermentation. However, coincident with pretreatment, compounds inhibiting microorganism growth are formed. Some tissues or cells, such as thin-walled cells that easily hydrolyze, will be excessively degraded because of the structural heterogeneity of lignocellulose, and some inhibitors will be generated under the same pretreatment conditions. Results showed that, compared with one-step steam explosion (1.2 MPa/8 min), two-step steam explosion with an intermediate separation of fiber cells (ISFC) (1.1 MPa/4 min-ISFC-1.2 MPa/4 min) can increase enzymatic hydrolysis by 12.82%, reduce inhibitor conversion by 33%, and increase fermentation product (2,3-butanediol) conversion by 209%. Thus, the two-step steam explosion with ISFC process is proposed to optimize the hydrolysis of lignocellulose by modifying the raw material from the origin. This novel process reduces the inhibitor content, promotes the biotransformation of lignocellulose, and simplifies the process by excluding the detoxification unit operation. PMID:22858472

  20. Characterization and optimization of 2-step MOVPE growth for single-mode DFB or DBR laser diodes

    NASA Astrophysics Data System (ADS)

    Bugge, F.; Mogilatenko, A.; Zeimer, U.; Brox, O.; Neumann, W.; Erbert, G.; Weyers, M.

    2011-01-01

    We have studied the MOVPE regrowth of AlGaAs over a grating for GaAs-based laser diodes with an internal wavelength stabilisation. Growth temperature and aluminium concentration in the regrown layers considerably affect the oxygen incorporation. Structural characterisation by transmission electron microscopy of the grating after regrowth shows the formation of quaternary InGaAsP regions due to the diffusion of indium atoms from the top InGaP layer and As-P exchange processes during the heating-up procedure. Additionally, the growth over such gratings with different facets leads to self-organisation of the aluminium content in the regrown AlGaAs layer, resulting in an additional AlGaAs grating, which has to be taken into account for the estimation of the coupling coefficient. With optimized growth conditions complete distributed feedback laser structures have been grown for different emission wavelengths. At 1062 nm a very high single-frequency output power of nearly 400 mW with a slope efficiency of 0.95 W/A for a 4 μm ridge-waveguide was obtained.

  1. Security: Step by Step

    ERIC Educational Resources Information Center

    Svetcov, Eric

    2005-01-01

    This article provides a list of the essential steps to keeping a school's or district's network safe and sound. It describes how to establish a security architecture and approach that will continually evolve as the threat environment changes over time. The article discusses the methodology for implementing this approach and then discusses the…

  3. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

    Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum-99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had 'experimentally demonstrated to be among the safest of all various type of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code Fluidity; the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  4. Multi-dimensional tensor-based adaptive filter (TBAF) for low dose x-ray CT

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Lebedev, Sergej; Sawall, Stefan; Kachelrieß, Marc

    2015-03-01

    Edge-preserving adaptive filtering within CT image reconstruction is a powerful method to reduce image noise and hence patient dose. However, highly sophisticated adaptive filters typically comprise many parameters which must be adjusted carefully in order to obtain optimal filter performance and to avoid artifacts caused by the filter. In this work we applied an anisotropic tensor-based adaptive image filter (TBAF) to CT image reconstruction, both as an image-based post-processing step and as a regularization step within an iterative reconstruction. The TBAF is a generalization of the filter of reference [1]. Provided that the image noise (i.e. the variance) of the original image is known for each voxel, we adjust all filter parameters automatically. Hence, the TBAF can be applied to any individual CT dataset without user interaction. This is a crucial feature for a possible application in clinical routine. The TBAF is compared to a well-established adaptive bilateral filter using the same noise adjustment. Although the differences between the two filters are subtle, edges and local structures emerge more clearly in the TBAF-filtered images, while anatomical details are less affected than by the bilateral filter.
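    The comparison baseline above, a noise-adjusted bilateral filter, can be illustrated with a minimal sketch. This generic Python/NumPy implementation and its settings (`sigma_s`, `sigma_r`, the synthetic step-edge image) are illustrative assumptions, not the paper's voxel-wise noise-adaptive parameters:

```python
import numpy as np

def bilateral_filter(img, sigma_s=1.5, sigma_r=20.0, radius=3):
    """Edge-preserving bilateral smoothing: each output pixel is a weighted
    mean of its neighbourhood, with weights decaying with spatial distance
    (sigma_s) and with intensity difference (sigma_r)."""
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img.astype(float), radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # fixed spatial kernel
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: large intensity jumps (edges) get near-zero weight
            w = spatial * np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# noisy step edge: the filter should lower the noise but keep the edge sharp
gen = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 100.0
noisy = clean + gen.normal(0.0, 5.0, clean.shape)
denoised = bilateral_filter(noisy)
```

    Because the edge contrast (100) is far larger than `sigma_r`, pixels across the edge receive negligible weight, so the edge survives while flat-region noise is averaged away.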

  5. Off-line determination of the optimal number of iterations of the robust anisotropic diffusion filter applied to denoising of brain MR images.

    PubMed

    Ferrari, Ricardo J

    2013-02-01

    Although anisotropic diffusion filters have been used extensively and with great success in medical image denoising, one limitation of this iterative approach, when used in fully automatic medical image processing schemes, is that the quality of the resulting denoised image is highly dependent on the number of iterations of the algorithm. Using many iterations may excessively blur the edges of the anatomical structures, while a few may not be enough to remove the undesirable noise. In this work, a mathematical model is proposed to automatically determine the number of iterations of the robust anisotropic diffusion filter applied to the problem of denoising three common human brain magnetic resonance (MR) images (T1-weighted, T2-weighted and proton density). The model is determined off-line by means of the maximization of the mean structural similarity index, which is used in this work as a metric for quantitative assessment of the processed images obtained after each iteration of the algorithm. After determining the model parameters, the optimal number of iterations of the algorithm is easily determined without requiring any extra computation time. The proposed method was tested on 3D synthetic and clinical human brain MR images and the results of qualitative and quantitative evaluation have shown its effectiveness. PMID:23124813
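    The off-line selection idea can be sketched minimally as follows, assuming a classic Perona-Malik diffusion step (with periodic boundaries for brevity) and a coarse whole-image SSIM; the paper uses the robust diffusion variant, the windowed mean SSIM, and clinical MR data:

```python
import numpy as np

def diffuse_step(u, kappa=30.0, dt=0.15):
    """One Perona-Malik anisotropic-diffusion iteration with the
    exponential edge-stopping conductance."""
    g = lambda d: np.exp(-(d / kappa) ** 2)  # small weight across strong edges
    total = np.zeros_like(u)
    for axis in (0, 1):
        for shift in (-1, 1):
            d = np.roll(u, shift, axis) - u  # neighbour differences
            total += g(d) * d
    return u + dt * total

def global_ssim(x, y, L=255.0):
    """Structural-similarity index computed over the whole image
    (a coarse stand-in for the windowed mean SSIM used in the paper)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# sweep the iteration count off-line and keep the SSIM-maximizing one
gen = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 200.0
noisy = clean + gen.normal(0.0, 10.0, clean.shape)
u, scores, mses = noisy.copy(), [], []
for _ in range(30):
    u = diffuse_step(u)
    scores.append(global_ssim(clean, u))
    mses.append(float(np.mean((u - clean) ** 2)))
best_iter = int(np.argmax(scores)) + 1
```

    In the paper the SSIM-vs-iteration curve is modeled off-line per image type, so at run time the optimal iteration count is read from the model rather than recomputed.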

  6. Optimization and kinetic modeling of esterification of the oil obtained from waste plum stones as a pretreatment step in biodiesel production.

    PubMed

    Kostić, Milan D; Veličković, Ana V; Joković, Nataša M; Stamenković, Olivera S; Veljković, Vlada B

    2016-02-01

    This study reports on the use of oil obtained from waste plum stones as a low-cost feedstock for biodiesel production. Because of its high free fatty acid (FFA) level (15.8%), the oil was processed through a two-step process including esterification of FFA and methanolysis of the esterified oil catalyzed by H2SO4 and CaO, respectively. Esterification was optimized by response surface methodology combined with a central composite design. The second-order polynomial equation predicted the lowest acid value of 0.53 mg KOH/g under the following optimal reaction conditions: methanol:oil molar ratio of 8.5:1, catalyst amount of 2% and reaction temperature of 45 °C. The predicted acid value agreed with the experimental acid value (0.47 mg KOH/g). The kinetics of FFA esterification was described by the irreversible pseudo first-order reaction rate law. The apparent kinetic constant was correlated with the initial methanol and catalyst concentrations and the reaction temperature. The activation energy of the esterification reaction slightly decreased from 13.23 to 11.55 kJ/mol with increasing catalyst concentration from 0.049 to 0.172 mol/dm(3). In the second step, the esterified oil reacted with methanol (methanol:oil molar ratio of 9:1) in the presence of CaO (5% to the oil mass) at 60 °C. The properties of the obtained biodiesel were within the EN 14214 standard limits. Hence, waste plum stones might be a valuable raw material for obtaining fatty oil for use as an alternative feedstock in biodiesel production. PMID:26706748
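    Activation energies like those quoted above follow from an Arrhenius fit of the apparent rate constants. A sketch with synthetic rate constants generated from an assumed Ea (not the paper's measurements):

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def activation_energy(T_kelvin, k_obs):
    """Arrhenius fit: ln k = ln A - Ea/(R*T), so the slope of ln k
    against 1/T is -Ea/R and the intercept is ln A."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(k_obs), 1)
    return -slope * R_GAS, float(np.exp(intercept))  # Ea (J/mol), A

# synthetic rate constants from an assumed Ea near the reported range
Ea_true, A_true = 12000.0, 3.5
T = np.array([308.15, 318.15, 328.15])   # 35, 45 and 55 degC
k = A_true * np.exp(-Ea_true / (R_GAS * T))
Ea_fit, A_fit = activation_energy(T, k)
```

    Repeating the fit at each catalyst concentration gives the trend in apparent activation energy that the study reports.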

  7. An IIR median hybrid filter

    NASA Technical Reports Server (NTRS)

    Bauer, Peter H.; Sartori, Michael A.; Bryden, Timothy M.

    1992-01-01

    A new class of nonlinear filters, the so-called class of multidirectional infinite impulse response median hybrid filters, is presented and analyzed. The input signal is processed twice using a linear shift-invariant infinite impulse response filtering module: once with normal causality and a second time with inverted causality. The final output of the MIMH filter is the median of the two-directional outputs and the original input signal. Thus, the MIMH filter is a concatenation of linear filtering and nonlinear filtering (a median filtering module). Because of this unique scheme, the MIMH filter possesses many desirable properties which are both proven and analyzed (including impulse removal, step preservation, and noise suppression). A comparison to other existing median type filters is also provided.
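    The MIMH structure described above can be sketched as follows, assuming first-order IIR low-pass predictor modules for concreteness (the paper allows general linear shift-invariant IIR modules):

```python
import numpy as np

def mimh_filter(x, a=0.75):
    """Median hybrid of two first-order IIR predictors, run once with
    normal and once with inverted causality, and the input itself."""
    def iir_predict(sig):
        y = np.empty(len(sig))
        acc = float(sig[0])
        for n, v in enumerate(sig):
            y[n] = acc                    # prediction from earlier samples only
            acc = a * acc + (1 - a) * v   # first-order IIR low-pass update
        return y
    fwd = iir_predict(x)               # normal causality
    bwd = iir_predict(x[::-1])[::-1]   # inverted causality
    return np.median(np.vstack([fwd, bwd, x]), axis=0)

# the two properties the paper proves: impulse removal and step preservation
step = np.concatenate([np.zeros(50), np.ones(50)])
corrupted = step.copy(); corrupted[25] = 10.0  # isolated outlier
out = mimh_filter(corrupted)
```

    At the outlier, both directional predictions stay near the local baseline, so the median rejects the impulse; at the step, one predictor tracks each plateau and the input casts the deciding vote, so the edge is kept sharp.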

  8. Relevance of a full-length genomic RNA standard and a thermal-shock step for optimal hepatitis delta virus quantification.

    PubMed

    Homs, Maria; Giersch, Katja; Blasi, Maria; Lütgehetmann, Marc; Buti, Maria; Esteban, Rafael; Dandri, Maura; Rodriguez-Frias, Francisco

    2014-09-01

    Hepatitis D virus (HDV) is a defective RNA virus that requires the surface antigens of hepatitis B virus (HBV) (HBsAg) for viral assembly and replication. Several commercial and in-house techniques have been described for HDV RNA quantification, but the methodologies differ widely, making a comparison of the results between studies difficult. In this study, a full-length genomic RNA standard was developed and used for HDV quantification by two different real-time PCR approaches (fluorescence resonance energy transfer [FRET] and TaqMan probes). Three experiments were performed. First, the stability of the standard was determined by analyzing the effect of thawing and freezing. Second, because of the strong internal base pairing of the HDV genome, which leads to a rod-like structure, the effect of intense thermal shock (95 °C for 10 min and immediate cooling to -80 °C) was tested to confirm the importance of this treatment in the reverse transcription step. Lastly, to investigate the differences between the DNA and RNA standards, the two types were quantified in parallel with the following results: the full-length genomic RNA standard was stable and reliably mimicked the behavior of HDV-RNA-positive samples, thermal shock enhanced the sensitivity of HDV RNA quantification, and the DNA standard underquantified the HDV RNA standard. These findings indicate the importance of using complete full-length genomic RNA and a strong thermal-shock step for optimal HDV RNA quantification. PMID:24989607

  9. Optimization of the performance of a thermophilic biotrickling filter for alpha-pinene removal from polluted air.

    PubMed

    Montes, M; Veiga, M C; Kennes, C

    2014-01-01

    Biodegradation of alpha-pinene was investigated in a thermophilic biotrickling filter (BTF), using a lava rock and polymer beads mixture as packing material. The partition coefficient (PC) between alpha-pinene and the polymeric material (Hytrel G3548 L) was measured at 50 degrees C. PCs of 57 and 846 were obtained between the polymer and either the water or the gas phase, respectively. BTF experiments were conducted under continuous load feeding. The effect of yeast extract (YE) addition in the recirculating nutrient medium was evaluated. There was a positive relationship between alpha-pinene biodegradation, CO2 production and YE addition. A maximum elimination capacity (ECmax) of 98.9 g m(-3) h(-1) was obtained for an alpha-pinene loading rate of about 121 g m(-3) h(-1) in the presence of 1 g L(-1) YE. The ECmax was reduced by half in the absence of YE. It was also found that a decrease in the liquid flow rate enhances alpha-pinene biodegradation, increasing the ECmax up to 103 g m(-3) h(-1) with a removal efficiency close to 90%. The impact of short-term shock loads (6 h) was tested under different process conditions. Increasing the pollutant load either 10- or 20-fold resulted in a sudden drop in the BTF's removal capacity, although this effect was attenuated in the presence of YE. PMID:25145201
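    The elimination capacity and removal efficiency quoted above follow from the standard biofilter definitions; a small sketch with illustrative numbers chosen to reproduce the ~121 g m^-3 h^-1 loading rate (not the paper's raw data):

```python
def biofilter_performance(c_in, c_out, flow, volume):
    """Standard biotrickling-filter metrics:
    elimination capacity EC = (C_in - C_out) * Q / V   (g m^-3 h^-1)
    removal efficiency  RE = 100 * (C_in - C_out) / C_in   (%)"""
    ec = (c_in - c_out) * flow / volume
    re = 100.0 * (c_in - c_out) / c_in
    return ec, re

# illustrative: inlet 2.42 g/m^3 at Q = 1 m^3/h through V = 0.02 m^3
# gives a loading rate LR = c_in * Q / V of 121 g m^-3 h^-1
ec, re = biofilter_performance(c_in=2.42, c_out=0.242, flow=1.0, volume=0.02)
```

    With 90% removal at that loading rate, the elimination capacity works out to about 109 g m^-3 h^-1, the same order as the ECmax values reported.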

  10. Systematic Biological Filter Design with a Desired I/O Filtering Response Based on Promoter-RBS Libraries.

    PubMed

    Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen

    2015-01-01

    In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on the well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method of robust biological filters is summarized into three steps. Firstly, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via nonlinear parameter estimation method. Then, the topology of synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of a biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response. PMID:26357282
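    A toy steady-state sketch of such a threshold-detecting cascade, assuming simple Hill kinetics with made-up parameters (the actual design selects promoter-RBS components from characterized libraries, which this sketch does not model):

```python
import numpy as np

def hill_act(u, K, n):  # activating Hill function
    return u ** n / (K ** n + u ** n)

def hill_rep(u, K, n):  # repressing Hill function
    return K ** n / (K ** n + u ** n)

def cascade_output(signal, K1=1.0, K2=0.5, K3=0.5, n=2):
    """Steady-state I/O curve of a three-module cascade (activation
    followed by two repressions): the double inversion restores the sign,
    and stacking Hill nonlinearities sharpens the ON/OFF threshold."""
    m1 = hill_act(signal, K1, n)   # module 1: input senses the inducer
    m2 = hill_rep(m1, K2, n)       # module 2: repressed by module 1
    return hill_rep(m2, K3, n)     # module 3: output reporter

u = np.logspace(-2, 2, 201)   # input molecular signal (arbitrary units)
y = cascade_output(u)
```

    The output stays low below the threshold set by K1 and switches high above it, the high-pass filtering response the paper engineers; tuning the K values (promoter-RBS strengths) moves the threshold.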

  11. Optimization of an analytical methodology for the simultaneous determination of different classes of ultraviolet filters in cosmetics by pressurized liquid extraction-gas chromatography tandem mass spectrometry.

    PubMed

    Vila, Marlene; Lamas, J Pablo; Garcia-Jares, Carmen; Dagnac, Thierry; Llompart, Maria

    2015-07-31

    A methodology based on pressurized liquid extraction (PLE) followed by gas chromatography-tandem mass spectrometry (GC-MS/MS) has been developed for the simultaneous analysis of different classes of UV filters including methoxycinnamates, benzophenones, salicylates, p-aminobenzoic acid derivatives, and others in cosmetic products. The extractions were carried out in 1 mL extraction cells and the amount of sample extracted was only 100 mg. The experimental conditions, including the acetylation of the PLE extracts to improve GC performance, were optimized by means of experimental design tools. The two main factors affecting the PLE procedure, solvent type and extraction temperature, were assessed. The use of a matrix-matched approach consisting of the addition of 10 μL of diluted commercial cosmetic oil avoided matrix effects. Good linearity (R(2)>0.9970), quantitative recoveries (>80% for most compounds, excluding three banned benzophenones) and satisfactory precision (RSD<10% in most cases) were achieved under the optimal conditions. The validated methodology was successfully applied to the analysis of different types of cosmetic formulations including sunscreens, hair products, nail polish, and lipsticks, amongst others. PMID:26091782

  12. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
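    The generalized time step described above can be sketched as the standard Kalman predict/update pair; the banded covariance computation and the shallow-water dynamics are omitted, and the demo below uses a toy two-state random-walk model with assumed parameters:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One generalized Kalman time step: propagate the state and the
    forecast-error covariance exactly, then assimilate observation z."""
    # forecast (model) step
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # analysis (data-assimilation) step
    S = H @ P_f @ H.T + R             # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)  # gain matrix
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# toy demo: estimate a constant (5.0) from noisy scalar observations
F = np.eye(2); H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2); R = np.array([[1.0]])
x, P = np.zeros(2), 10.0 * np.eye(2)
gen = np.random.default_rng(2)
for _ in range(200):
    z = np.array([5.0]) + gen.normal(0.0, 1.0, 1)
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

    The exact propagation of P is what distinguishes the Kalman filter from optimal interpolation, which approximates the forecast-error covariance; the banding mentioned in the abstract simply skips covariance elements that are effectively zero.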

  13. [Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods: selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter].

    PubMed

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-05-01

    In cerebral blood flow tests using N-isopropyl-p-[123I] iodoamphetamine (123I-IMP), quantitative results of greater accuracy than possible using the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and Xenon-enhanced computed tomography (XeCT)/cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a method of successive approximation which has recently come into wide use, and also three-dimensional (3D)-OSEM, a method by which the resolution can be corrected with the addition of collimator broad correction, to examine the effects on the regional cerebral blood flow (rCBF) quantitative value of changing the cutoff frequency, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method. PMID:23964534

  14. Modeling and optimization of ultrasound-assisted extraction of polyphenolic compounds from Aronia melanocarpa by-products from filter-tea factory.

    PubMed

    Ramić, Milica; Vidović, Senka; Zeković, Zoran; Vladić, Jelena; Cvejin, Aleksandra; Pavlić, Branimir

    2015-03-01

    Aronia melanocarpa by-product from a filter-tea factory was used for the preparation of extracts with a high content of bioactive compounds. The extraction process was accelerated using sonication. A three-level, three-variable face-centered cubic experimental design (FCD) with response surface methodology (RSM) was used for optimization of extraction in terms of maximized yields of total phenolics (TP), flavonoids (TF), anthocyanins (MA) and proanthocyanidins (TPA). Ultrasonic power (X₁: 72-216 W), temperature (X₂: 30-70 °C) and extraction time (X₃: 30-90 min) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions for the investigated responses. Three-dimensional surface plots were generated from the mathematical models. The optimal conditions for ultrasound-assisted extraction of TP, TF, MA and TPA were: X₁=206.64 W, X₂=70 °C, X₃=80.1 min; X₁=210.24 W, X₂=70 °C, X₃=75 min; X₁=216 W, X₂=70 °C, X₃=45.6 min; and X₁=199.44 W, X₂=70 °C, X₃=89.7 min, respectively. The generated model predicted values of TP, TF, MA and TPA of 15.41 mg GAE/ml, 9.86 mg CE/ml, 2.26 mg C3G/ml and 20.67 mg CE/ml, respectively. Experimental validation was performed and close agreement between experimental and predicted values was found (within the 95% confidence interval). PMID:25454824
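    The FCD/RSM fitting step can be sketched as follows; the 15-run design is the standard face-centred cube in coded units, and the response surface below is a hypothetical quadratic for illustration, not the Aronia data:

```python
import numpy as np
from itertools import product, combinations

def quad_features(X):
    """Design matrix of the second-order RSM model: intercept, linear,
    squared and two-factor interaction terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# 15-run face-centred cubic design for three factors in coded units (-1, 0, +1)
corners = np.array(list(product([-1.0, 1.0], repeat=3)))           # 8 factorial points
faces = np.array([v * np.eye(3)[i] for i in range(3) for v in (-1.0, 1.0)])  # 6 face centres
runs = np.vstack([corners, faces, np.zeros((1, 3))])               # + centre point

# hypothetical smooth response with an interior maximum (illustrative only)
y = 15.0 - (runs[:, 0] - 0.5) ** 2 - 2.0 * (runs[:, 1] - 0.3) ** 2 - runs[:, 2] ** 2
beta, *_ = np.linalg.lstsq(quad_features(runs), y, rcond=None)
y_hat = quad_features(runs) @ beta
```

    With 15 runs and 10 model coefficients the quadratic surface is identifiable; in practice the fitted model, not the raw runs, is then searched for the optimum operating conditions.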

  15. Estimation and filter stability of stochastic delay systems

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.; Willsky, A. S.

    1978-01-01

    Linear and nonlinear filtering for stochastic delay systems are studied. A representation theorem for conditional moment functionals is obtained, which, in turn, is used to derive stochastic differential equations describing the optimal linear or nonlinear filter. A complete characterization of the optimal filter is given for linear systems with Gaussian noise. Stability of the optimal filter is studied in the case where there are no delays in the observations. Using the duality between linear filtering and control, asymptotic stability of the optimal filter is proved. Finally, the cascade of the optimal filter and the deterministic optimal quadratic control system is shown to be asymptotically stable as well.

  16. Next Step for STEP

    SciTech Connect

    Wood, Claire; Bremner, Brenda

    2013-08-09

    The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.

  17. Scale-up and optimization of an acoustic filter for 200 L/day perfusion of a CHO cell culture.

    PubMed

    Gorenflo, Volker M; Smith, Laura; Dedinsky, Bob; Persson, Bo; Piret, James M

    2002-11-20

    Acoustic cell retention devices have provided a practical alternative for up to 50 L/day perfusion cultures but further scale-up has been limited. A novel temperature-controlled and larger-scale acoustic separator was evaluated at up to 400 L/day for a 10(7) CHO cell/mL perfusion culture using a 100-L bioreactor that produced up to 34 g/day recombinant protein. The increased active volume of this scaled-up separator was divided into four parallel compartments for improved fluid dynamics. Operational settings of the acoustic separator were optimized and the limits of robust operations explored. The performance was not influenced over wide ranges of duty cycle stop and run times. The maximum performance of 96% separation efficiency at 200 L/day was obtained by setting the separator temperature to 35.1 degrees C, the recirculation rate to three times the harvest rate, and the power to 90 W. While there was no detectable effect on culture viability, viable cells were selectively retained, especially at 50 L/day, where there was a 5-fold higher nonviable washout efficiency. Overall, the new temperature-controlled and scaled-up separator design performed reliably in a way similar to smaller-scale acoustic separators. These results provide strong support for the feasibility of much greater scale-up of acoustic separations. PMID:12325152

  18. Synthesis and optimization of wide pore superficially porous particles by a one-step coating process for separation of proteins and monoclonal antibodies.

    PubMed

    Chen, Wu; Jiang, Kunqiang; Mack, Anne; Sachok, Bo; Zhu, Xin; Barber, William E; Wang, Xiaoli

    2015-10-01

    Superficially porous particles (SPPs) with pore sizes ranging from 90 Å to 120 Å have been a great success for the fast separation of small molecules over totally porous particles in recent years. However, for the separation of large biomolecules such as proteins, particles with large pore size (e.g. ≥300 Å) are needed to allow unrestricted diffusion inside the pores. One early example is the commercial wide-pore (300 Å) SPP in 5 μm size introduced in 2001. More recently, wide-pore SPPs (200 Å and 400 Å) in smaller particle sizes (3.5-3.6 μm) have been developed to meet the increasing interest of biopharmaceutical companies in faster analysis of larger therapeutic molecules. The SPPs on the market are mostly synthesized by the laborious layer-by-layer (LBL) method. A one-step coating approach would be highly advantageous, offering potential benefits in process time, quality control, materials cost, and process simplicity for facile scale-up. A unique one-step coating process for the synthesis of SPPs, called the "coacervation method", was developed by Chen and Wei as an improved and optimized process, and has been successfully applied to the synthesis of a commercial product, Poroshell 120 particles, for small-molecule separation. In this report, we describe the most recent development of the one-step coating coacervation method for the synthesis of a series of wide-pore SPPs of different particle size, pore size, and shell thickness. The one-step coating coacervation method was proven to be a universal method to synthesize SPPs of any particle size and pore size. The effects of pore size (300 Å vs. 450 Å), shell thickness (0.25 μm vs. 0.50 μm), and particle size (2.7 μm and 3.5 μm) on the separation of large proteins and intact and fragmented monoclonal antibodies (mAbs) were studied. Van Deemter studies using proteins were also conducted to compare the mass-transfer properties of these particles. It was found that pore size actually had more impact on mAb performance than particle size and shell thickness. The SPPs with the larger 3.5 μm particle size and larger 450 Å pore size showed the best resolution of mAbs and the lowest back pressure. To the best of our knowledge, this is the largest pore size made on SPPs. These results led to an optimal particle design with a particle size of 3.5 μm, a thin shell of 0.25 μm and a large pore size of 450 Å. PMID:26342871
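    The van Deemter comparison mentioned above fits the plate-height equation H = A + B/u + C*u to data measured at several linear velocities; the C term is the mass-transfer contribution being compared. A sketch with assumed coefficients (not the paper's measurements):

```python
import numpy as np

def fit_van_deemter(u, H):
    """Least-squares fit of the van Deemter equation H = A + B/u + C*u;
    the fitted C term quantifies mass-transfer resistance."""
    M = np.column_stack([np.ones_like(u), 1.0 / u, u])
    (A, B, C), *_ = np.linalg.lstsq(M, H, rcond=None)
    return A, B, C

# synthetic plate-height data generated from assumed coefficients
u = np.linspace(0.5, 5.0, 10)     # linear velocity (arbitrary units)
H = 2.0 + 1.5 / u + 0.8 * u       # plate height (arbitrary units)
A, B, C = fit_van_deemter(u, H)
u_opt = float(np.sqrt(B / C))     # velocity minimizing H (dH/du = 0)
```

    Comparing fitted C values between particle designs shows which geometry (pore size, shell thickness, particle size) suffers least from slow protein mass transfer at high flow rates.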

  19. Utilization of optimized BCR three-step sequential and dilute HCl single extraction procedures for soil-plant metal transfer predictions in contaminated lands.

    PubMed

    Kubová, Jana; Matúš, Peter; Bujdoš, Marek; Hagarová, Ingrid; Medveď, Ján

    2008-05-30

    The prediction of soil metal phytoavailability using chemical extractions is a conventional approach routinely used in soil testing. The adequacy of such soil tests for this purpose is commonly assessed through a comparison of extraction results with metal contents in relevant plants. In this work, the fractions of selected risk metals (Al, As, Cd, Cu, Fe, Mn, Ni, Pb, Zn) that can be taken up by various plants were obtained by the optimized BCR (Community Bureau of Reference) three-step sequential extraction procedure (SEP) and by single 0.5 mol L(-1) HCl extraction. These procedures were validated using five soil and sediment reference materials (SRM 2710, SRM 2711, CRM 483, CRM 701, SRM RTH 912) and applied to significantly different acidified soils for the fractionation of the studied metals. New indicative values of Al, Cd, Cu, Fe, Mn, P, Pb and Zn fractional concentrations for these reference materials were obtained by the dilute HCl single extraction. The influence of soil genesis, content of essential elements (Ca, Mg, K, P) and different anthropogenic sources of acidification on the extraction yields of individual risk metal fractions was investigated. The concentrations of the studied elements were determined by atomic spectrometry methods (flame, graphite furnace and hydride generation atomic absorption spectrometry and inductively coupled plasma optical emission spectrometry). It can be concluded that the extraction yields from the first (acid-extractable) BCR SEP step and soil-plant transfer coefficients can be applied to the prediction of the qualitative mobility of selected risk metals in different soil systems. PMID:18585191

  20. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  1. Optimization of three operating parameters for a two-step fed sequencing batch reactor (SBR) system to remove nutrients from swine wastewater.

    PubMed

    Wu, Xiao; Zhu, Jun; Cheng, Jiehong; Zhu, Nanwen

    2015-03-01

    In this study, the effect of three operating parameters, i.e., the first/second volumetric feeding ratio (milliliters/milliliters), the first anaerobic/aerobic (an/oxic) time ratio (minute/minute), and the second an/oxic time ratio (minute/minute), on the performance of a two-step fed sequencing batch reactor (SBR) system to treat swine wastewater for nutrient removal was examined. Central Composite Design, coupled with Response Surface Methodology, was employed to test these parameters at five levels in order to optimize the SBR to achieve the best removal efficiencies for six response variables including total nitrogen (TN), ammonium nitrogen (NH4-N), total phosphorus (TP), dissolved phosphorus (DP), chemical oxygen demand (COD), and biochemical oxygen demand (BOD). The results showed that the three parameters investigated had a significant impact on all the response variables (TN, NH4-N, TP, DP, COD, and BOD), although the highest removal efficiency for each individual response was associated with a different combination of the three parameters. The maximum TN, NH4-N, TP, DP, COD, and BOD removal efficiencies of 96.38%, 95.38%, 93.62%, 94.3%, 95.26%, and 92.84% were obtained at the optimal first/second volumetric feeding ratio, first an/oxic time ratio, and second an/oxic time ratio of 3.23, 0.4, and 0.8 for TN; 2.64, 0.72, and 0.76 for NH4-N; 3.08, 1.16, and 1.07 for TP; 1.32, 0.81, and 1.0 for DP; 2.57, 0.96, and 1.12 for COD; and 1.62, 0.64, and 1.61 for BOD, respectively. Good linear relationships between the predicted and observed results for all the response variables were observed. PMID:25564205

  2. Optimal State Estimation for Cavity Optomechanical Systems.

    PubMed

    Wieczorek, Witlef; Hofer, Sebastian G; Hoelscher-Obermaier, Jason; Riedinger, Ralf; Hammerer, Klemens; Aspelmeyer, Markus

    2015-06-01

    We demonstrate optimal state estimation for a cavity optomechanical system through Kalman filtering. By taking into account nontrivial experimental noise sources, such as colored laser noise and spurious mechanical modes, we implement a realistic state-space model. This allows us to obtain the conditional system state, i.e., conditioned on previous measurements, with a minimal least-squares estimation error. We apply this method to estimate the mechanical state, as well as optomechanical correlations both in the weak and strong coupling regime. The application of the Kalman filter is an important next step for achieving real-time optimal (classical and quantum) control of cavity optomechanical systems. PMID:26196621
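
The Kalman filter underlying this estimator can be illustrated with a generic linear example. The constant-velocity model and noise levels below are assumptions chosen for the sketch, not the optomechanical state-space model of the paper:

```python
import numpy as np

# Minimal linear Kalman filter: constant-velocity target, position-only
# measurements. Predict, then condition on each new measurement.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1)
H = np.array([[1.0, 0.0]])               # measure position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.zeros(2)   # state estimate [position, velocity]
P = np.eye(2)     # estimate covariance

rng = np.random.default_rng(1)
truth = np.array([0.0, 0.5])             # true initial state
for _ in range(50):
    truth = F @ truth
    z = H @ truth + rng.normal(0, 0.5, size=1)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update (condition on the measurement, minimizing least-squares error)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(round(x[1], 2))  # estimated velocity, should be near the true 0.5
```

The colored-noise and spurious-mode effects the paper handles would enter here as additional states in `F` and `H`.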

  3. Optimal State Estimation for Cavity Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Wieczorek, Witlef; Hofer, Sebastian G.; Hoelscher-Obermaier, Jason; Riedinger, Ralf; Hammerer, Klemens; Aspelmeyer, Markus

    2015-06-01

    We demonstrate optimal state estimation for a cavity optomechanical system through Kalman filtering. By taking into account nontrivial experimental noise sources, such as colored laser noise and spurious mechanical modes, we implement a realistic state-space model. This allows us to obtain the conditional system state, i.e., conditioned on previous measurements, with a minimal least-squares estimation error. We apply this method to estimate the mechanical state, as well as optomechanical correlations both in the weak and strong coupling regime. The application of the Kalman filter is an important next step for achieving real-time optimal (classical and quantum) control of cavity optomechanical systems.

  4. Analysis of plasticizers in poly(vinyl chloride) medical devices for infusion and artificial nutrition: comparison and optimization of the extraction procedures, a pre-migration test step.

    PubMed

    Bernard, Lise; Cueff, Régis; Bourdeaux, Daniel; Breysse, Colette; Sautou, Valérie

    2015-02-01

    Medical devices (MDs) for infusion and enteral and parenteral nutrition are essentially made of plasticized polyvinyl chloride (PVC). The first step in assessing patient exposure to these plasticizers, as well as ensuring that the MDs are free from di(2-ethylhexyl) phthalate (DEHP), consists of identifying and quantifying the plasticizers present and, consequently, determining which ones are likely to migrate into the patient's body. We compared three different extraction methods using 0.1 g of plasticized PVC: Soxhlet extraction in diethyl ether and ethyl acetate, polymer dissolution, and room temperature extraction in different solvents. It was found that simple room temperature chloroform extraction under optimized conditions (30 min, 50 mL) gave the best separation of plasticizers from the PVC matrix, with extraction yields ranging from 92 to 100% for all plasticizers. This result was confirmed by supplemental Fourier transform infrared spectroscopy-attenuated total reflection (FTIR-ATR) and gravimetric analyses. The technique was used on eight marketed medical devices and showed that they contained different amounts of plasticizers, ranging from 25 to 36% of the PVC weight. These yields, associated with the individual physicochemical properties of each plasticizer, highlight the need for further migration studies. PMID:25577357

  5. Disk filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric disk filter provides high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner, electrically isolated high voltage electrode. Air flows through the outer filter surfaces, through the electrified refractory filter media, and between the high voltage electrodes, and is removed from a space in the high voltage electrode.

  6. Disk filter

    DOEpatents

    Bergman, Werner (Pleasanton, CA)

    1986-01-01

    An electric disk filter provides high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner, electrically isolated high voltage electrode. Air flows through the outer filter surfaces, through the electrified refractory filter media, and between the high voltage electrodes, and is removed from a space in the high voltage electrode.

  7. NIRCam filter wheels

    NASA Astrophysics Data System (ADS)

    McCully, Sean; Schermerhorn, Michael; Thatcher, John

    2005-08-01

    The NIRCam instrument will provide near-infrared imaging capabilities for the James Webb Space Telescope. In addition, this instrument contains the wavefront-sensing elements necessary for optimizing the performance of the primary mirror. Several of these wavefront-sensing elements will reside in the NIRCam Filter Wheel Assembly. The instrument and its complement of mechanisms and optics will operate at a cryogenic temperature of 35K. This paper describes the design of the NIRCam Filter Wheel Assembly.

  8. High accuracy motor controller for positioning optical filters in the CLAES Spectrometer

    NASA Technical Reports Server (NTRS)

    Thatcher, John B.

    1989-01-01

    The Etalon Drive Motor (EDM), a precision etalon control system designed for accurate positioning of etalon filters in the IR spectrometer of the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment is described. The EDM includes a brushless dc torque motor, which has an infinite resolution for setting an etalon filter to any desired angle, a four-filter etalon wheel, and an electromechanical resolver for angle information. An 18-bit control loop provides high accuracy, resolution, and stability. Dynamic computer interaction allows the user to optimize the step response. A block diagram of the motor controller is presented along with a schematic of the digital/analog converter circuit.

  9. Water Filters

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Aquaspace H2OME Guardian Water Filter, available through Western Water International, Inc., reduces lead in water supplies. The filter is mounted on the faucet and the filter cartridge is placed in the "dead space" between sink and wall. This filter is one of several new filtration devices using the Aquaspace compound filter media, which combines company developed and NASA technology. Aquaspace filters are used in industrial, commercial, residential, and recreational environments as well as by developing nations where water is highly contaminated.

  10. Biological Filters.

    ERIC Educational Resources Information Center

    Klemetson, S. L.

    1978-01-01

    Presents the 1978 literature review of wastewater treatment. The review is concerned with biological filters, and it covers: (1) trickling filters; (2) rotating biological contactors; and (3) miscellaneous reactors. A list of 14 references is also presented. (HM)

  11. Metallic Filters

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Filtration technology originated in a mid 1960's NASA study. The results were distributed to the filter industry, and HR Textron responded, using the study as a point of departure for the development of 421 Filter Media. The HR system is composed of ultrafine steel fibers metallurgically bonded and compressed so that the pore structure is locked in place. The filters are used to filter polyesters and plastics, to clean hydrocarbon streams, etc. Several major companies use the product in chemical applications, pollution control, etc.

  12. Game-theoretic Kalman Filter

    NASA Astrophysics Data System (ADS)

    Colburn, Christopher; Bewley, Thomas

    2010-11-01

    The Kalman Filter (KF) is celebrated as the optimal estimator for systems with linear dynamics and Gaussian uncertainty. Although most systems of interest do not have linear dynamics and are not forced by Gaussian noise, the KF is used ubiquitously within industry. Thus, we present a novel estimation algorithm, the Game-theoretic Kalman Filter (GKF), which intelligently hedges between competing sequential filters and does not require the assumption of Gaussian statistics to provide a "best" estimate.
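
The GKF itself is not published as code, so the sketch below only illustrates the general "hedging between competing sequential filters" idea using a standard multiplicative-weights update over two exponential smoothers; every model, constant, and signal here is an assumed toy value, not the algorithm from the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0, 0.1, 200))   # slowly drifting signal
obs = truth + rng.normal(0, 1.0, 200)        # noisy measurements of it

alpha = np.array([0.05, 0.8])   # competing filters: heavy vs light smoothing
est = np.zeros(2)               # each filter's current estimate
w = np.full(2, 0.5)             # hedge weights, kept normalized
eta = 0.5                       # learning rate of the multiplicative update

hedged = []
for z in obs:
    hedged.append(float(w @ est))        # weighted combination of the filters
    w *= np.exp(-eta * (est - z) ** 2)   # penalize each filter's recent error
    w /= w.sum()
    est = (1 - alpha) * est + alpha * z  # run both exponential smoothers
```

On this slowly drifting signal the heavy smoother accumulates less error, so the hedge shifts nearly all weight onto it.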

  13. High-resolution wave-theory-based ultrasound reflection imaging using the split-step Fourier and globally optimized Fourier finite-difference methods

    DOEpatents

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data inputs indicate that the method provides significant improvements in both image quality and resolution.
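
The two-step extrapolation described above — a reference-medium phase shift in the frequency-wavenumber domain, then a heterogeneity phase screen in the frequency-space domain — can be sketched for a single interval. The 1-D lateral grid, frequency, sound speeds, and inclusion below are illustrative assumptions, not patient data:

```python
import numpy as np

nx, dx, dz = 256, 0.5e-3, 1e-3        # grid points, spacing [m], depth step
f, c0 = 1.0e6, 1500.0                 # 1 MHz ultrasound, reference speed [m/s]
w = 2 * np.pi * f
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
kz = np.sqrt((w / c0) ** 2 - kx**2 + 0j)  # vertical wavenumber (complex for
                                          # evanescent components)
c = np.full(nx, c0)
c[100:150] = 1550.0                   # a faster inclusion in the medium
ds = 1.0 / c - 1.0 / c0               # slowness perturbation

x = (np.arange(nx) - nx // 2) * dx
field = np.exp(-x**2 / (2 * (2e-3) ** 2)) + 0j   # a Gaussian beam
p0 = np.sum(np.abs(field) ** 2)                  # initial power, for checking

# Step 1: reference-medium phase shift in the frequency-wavenumber domain
field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))
# Step 2: phase screen in the frequency-space domain compensating heterogeneity
field = field * np.exp(1j * w * ds * dz)
```

Recursive imaging repeats this pair of steps interval by interval; both operators are unit-modulus for propagating components, so the continuation does not create energy.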

  14. Filtering apparatus

    DOEpatents

    Haldipur, G.B.; Dilmore, W.J.

    1992-09-01

    A vertical vessel is described having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas. 18 figs.

  15. Filtering apparatus

    DOEpatents

    Haldipur, Gaurang B. (Monroeville, PA); Dilmore, William J. (Murrysville, PA)

    1992-01-01

    A vertical vessel having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas.

  16. A step-by-step guide to systematically identify all relevant animal studies

    PubMed Central

    Leenaars, Marlies; Hooijmans, Carlijn R; van Veggel, Nieky; ter Riet, Gerben; Leeflang, Mariska; Hooft, Lotty; van der Wilt, Gert Jan; Tillema, Alice; Ritskes-Hoitinga, Merel

    2012-01-01

    Before starting a new animal experiment, thorough analysis of previously performed experiments is essential from a scientific as well as from an ethical point of view. The method that is most suitable to carry out such a thorough analysis of the literature is a systematic review (SR). An essential first step in an SR is to search and find all potentially relevant studies. It is important to include all available evidence in an SR to minimize bias and reduce hampered interpretation of experimental outcomes. Despite the recent development of search filters to find animal studies in PubMed and EMBASE, searching for all available animal studies remains a challenge. Available guidelines from the clinical field cannot be copied directly to the situation within animal research, and although there are plenty of books and courses on searching the literature, there is no compact guide available to search and find relevant animal studies. Therefore, in order to facilitate a structured, thorough and transparent search for animal studies (in both preclinical and fundamental science), an easy-to-use, step-by-step guide was prepared and optimized using feedback from scientists in the field of animal experimentation. The step-by-step guide will assist scientists in performing a comprehensive literature search and, consequently, improve the scientific quality of the resulting review and prevent unnecessary animal use in the future. PMID:22037056

  17. Kaon Filtering For CLAS Data

    SciTech Connect

    McNabb, J.

    2001-01-30

    The analysis of data from CLAS is a multi-step process. After the detectors for a given running period have been calibrated, the data is processed in the so-called pass-1 cooking. During the pass-1 cooking each event is reconstructed by the program a1c, which finds particle tracks and computes momenta from the raw data. The results are then passed on to several data monitoring and filtering utilities. In CLAS software, a filter is a parameterless function which returns an integer indicating whether an event should be kept by that filter or not. There is a main filter program called g1-filter which controls several specific filters and outputs several files, one for each filter. These files may then be analyzed separately, allowing individuals interested in one reaction channel to work from smaller files than the whole data set would require. There are several constraints on what the filter functions should do. Obviously, the filtered files should be as small as possible; however, the filter should also not reject any events that might be used in the later analysis for which the filter was intended.
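
The contract described here — one keep/reject predicate per filter, with a driver writing one output stream per filter — might be sketched as below. The event layout and filter names are hypothetical, and the event is passed explicitly for clarity (the actual CLAS filters are parameterless and read the current event from shared state):

```python
# Each filter is a predicate on an event; the driver routes every event
# into one output stream per filter, so each stream can be analyzed alone.
def kaon_filter(event):
    """Keep events with at least one candidate kaon track."""
    return any(t["pid"] in ("K+", "K-") for t in event["tracks"])

def two_track_filter(event):
    """Keep events with at least two reconstructed tracks."""
    return len(event["tracks"]) >= 2

FILTERS = {"kaon": kaon_filter, "two_track": two_track_filter}

def route(events):
    streams = {name: [] for name in FILTERS}
    for ev in events:
        for name, keep in FILTERS.items():
            if keep(ev):
                streams[name].append(ev)
    return streams

events = [
    {"tracks": [{"pid": "K+"}, {"pid": "p"}]},   # kept by both filters
    {"tracks": [{"pid": "pi+"}]},                # kept by neither
]
streams = route(events)
```

The tension noted in the abstract shows up directly: a looser predicate grows the stream, a tighter one risks discarding events a later analysis needs.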

  18. Comparison of spatial domain optimal trade-off maximum average correlation height (OT-MACH) filter with scale invariant feature transform (SIFT) using images with poor contrast and large illumination gradient

    NASA Astrophysics Data System (ADS)

    Gardezi, A.; Qureshi, T.; Alkandri, A.; Young, R. C. D.; Birch, P. M.; Chatwin, C. R.

    2015-03-01

    A spatial domain optimal trade-off Maximum Average Correlation Height (OT-MACH) filter has been previously developed and shown to have advantages over frequency domain implementations in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. In this paper we compare the performance of the spatial domain (SPOT-MACH) filter to the widely applied data-driven technique known as the Scale Invariant Feature Transform (SIFT). The SPOT-MACH filter is shown to provide more robust recognition performance than the SIFT technique for demanding images such as scenes in which there are large illumination gradients. The SIFT method depends on reliable local edge-based feature detection over large regions of the image plane, which is compromised in some of the demanding images we examined for this work. The disadvantage of the SPOT-MACH filter is its numerically intensive nature, since it is template based and is implemented in the spatial domain.
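
Local normalization is what makes a spatial-domain correlation filter robust to intensity changes. A minimal 1-D stand-in (not the OT-MACH filter itself) shows why: normalizing each window by its own mean and energy makes the score invariant to local gain and offset:

```python
import numpy as np

def normalized_correlation(signal, template):
    """Slide the template over the signal, scoring each window by
    mean-removed, energy-normalized correlation (values in [-1, 1])."""
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-12)
    n = len(template)
    scores = np.zeros(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n] - signal[i:i + n].mean()
        scores[i] = (w / (np.linalg.norm(w) + 1e-12)) @ t
    return scores

template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # a small "peak" template
signal = np.concatenate([np.zeros(20),
                         5 + 3 * template,       # offset and rescaled copy
                         np.zeros(20)])
scores = normalized_correlation(signal, template)
best = int(np.argmax(scores))
print(best)  # the match is found at index 20 despite the intensity change
```

The 2-D SPOT-MACH case adds the trade-off-optimized template design, but the per-window normalization step is the same idea.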

  19. Hot-gas filter manufacturing assessments: Volume 5. Final report, April 15, 1997

    SciTech Connect

    Boss, D.E.

    1997-12-31

    The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle-filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs), intermetallic alloys, and alternate filter geometries. The goal of this effort was to perform a cursory review of the manufacturing processes used by 5 companies developing advanced filters, from the perspective of process repeatability and the ability to scale their processes up to production volumes. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs. While each organization had specific needs, needs common to all of the filter manufacturers were access to performance testing of the filters to aid process/product development, a better understanding of the stresses the filters will see in service for use in structural design of the components, and a strong process sensitivity study to allow optimization of processing.

  20. Step Pultrusion

    NASA Astrophysics Data System (ADS)

    Langella, A.; Carbone, R.; Durante, M.

    2012-12-01

    The pultrusion process is an efficient technology for the production of composite material profiles. Thanks to this positive feature, several studies have been carried out, either to expand the range of products made using the pultrusion technology, or improve its already high production rate. This study presents a process derived from the traditional pultrusion technology named "Step Pultrusion Process Technology" (SPPT). Using the step pultrusion process, the final section of the composite profiles is obtainable by means of a progressive cross section increasing through several resin cure stations. This progressive increasing of the composite cross section means that a higher degree of cure level can be attained at the die exit point of the last die. Mechanical test results of the manufactured pultruded samples have been used to compare both the traditional and the step pultrusion processes. Finally, there is a discussion on ways to improve the new step pultrusion process even further.

  1. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
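
The weighted median that stack filters generalize can be written in a few lines. This is a generic sliding-window sketch of the operator itself, not the classifier construction proposed in the paper:

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median: the value where cumulative weight first reaches
    half of the total weight (samples sorted by value)."""
    order = np.argsort(x)
    cw = np.cumsum(np.asarray(w, float)[order])
    return np.asarray(x)[order][np.searchsorted(cw, cw[-1] / 2.0)]

def weighted_median_filter(signal, weights):
    """Slide a weighted-median window over a 1-D signal (edges clamped)."""
    k = len(weights) // 2
    padded = np.pad(signal, k, mode="edge")
    return np.array([weighted_median(padded[i:i + len(weights)], weights)
                     for i in range(len(signal))])

noisy = np.array([1.0, 1.0, 9.0, 1.0, 1.0])      # impulse at the center
out = weighted_median_filter(noisy, [1, 2, 1])
print(out)  # impulse removed: all samples back to 1
```

A stack filter replaces the single weight vector with a positive Boolean function applied to thresholded (stacked) versions of the signal; the weighted median is the simplest member of that family.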

  2. PWM control techniques for rectifier filter minimization

    SciTech Connect

    Ziogas, P.D.; Kang, Y-G; Stefanovic, V.R.

    1985-09-01

    Minimization of input/output filters is an essential step towards manufacturing compact low-cost static power supplies. Three PWM control techniques that yield substantial filter size reduction for three-phase (self-commutated) rectifiers are presented and analyzed. Filters required by typical line-commutated rectifiers are used as the basis for comparison. Moreover, it is shown that, in addition to filter minimization, two of the three proposed control techniques substantially improve the rectifier's total input power factor.

  3. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
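
A minimal 1-D bilateral filter makes the parameter roles discussed above concrete: sigma_s sets the spatial support of the averaging, and sigma_r decides how large an intensity jump counts as an edge that averaging must not cross. The parameter values and test signal below are assumed for illustration:

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=4):
    """Bilateral filter on a 1-D signal: a Gaussian spatial kernel
    multiplied by a Gaussian on intensity difference, so samples on
    the far side of a sharp edge get negligible weight."""
    x = np.arange(-radius, radius + 1)
    spatial = np.exp(-x**2 / (2 * sigma_s**2))
    padded = np.pad(signal, radius, mode="edge")
    out = np.empty_like(signal)
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        range_w = np.exp(-(window - signal[i])**2 / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Noisy step edge: the filter flattens the noise but keeps the edge sharp.
rng = np.random.default_rng(3)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
noisy = step + rng.normal(0, 0.05, 100)
smooth = bilateral_filter_1d(noisy)
```

Shrinking sigma_r toward the noise level makes the filter behave like the identity; growing it past the edge height degrades it into a plain Gaussian blur — which is why the paper's empirical parameter study matters.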

  4. Microwave assisted biodiesel production from Jatropha curcas L. seed by two-step in situ process: optimization using response surface methodology.

    PubMed

    Jaliliannosrati, Hamidreza; Amin, Nor Aishah Saidina; Talebian-Kiakalaieh, Amin; Noshadi, Iman

    2013-05-01

    The synthesis of fatty acid ethyl esters (FAEEs) by a two-step in situ (reactive) esterification/transesterification from Jatropha curcas L. (JCL) seeds using a microwave system has been investigated. Free fatty acid was reduced from 14% to less than 1% in the first step using H2SO4 as acid catalyst after 35 min of microwave irradiation heating. The organic phase in the first step was subjected to a second reaction by adding 5 N KOH in ethanol as the basic catalyst. Response surface methodology (RSM) based on central composite design (CCD) was utilized to design the experiments and analyze the influence of process variables (seed particle size, time of irradiation, agitation speed and catalyst loading) on conversion of triglycerides (TGs) in the second step. The highest triglycerides conversion to fatty acid ethyl esters (FAEEs) was 97.29% at the optimum conditions: <0.5 mm seed size, 12.21 min irradiation time, 8.15 mL KOH catalyst loading and 331.52 rpm agitation speed in the 110 W microwave power system. PMID:23567732

  5. Optimization of Xylanase Production from Penicillium sp.WX-Z1 by a Two-Step Statistical Strategy: Plackett-Burman and Box-Behnken Experimental Design

    PubMed Central

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp.WX-Z1 using Plackett-Burman design and Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp.WX-Z1, and response surface methodology (RSM) based on a Box-Behnken design was subsequently used to further optimize xylanase production. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, the xylanase production under optimal conditions reached 46.50 U/mL and an increase in xylanase activity of 1.34-fold was obtained compared with the original medium for fermentation carried out in a 30-L bioreactor. PMID:22949884
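
The screening step can be sketched with an 8-run two-level design built from a Sylvester Hadamard matrix (for run counts that are powers of two this coincides with a fractional factorial; true Plackett-Burman designs cover run counts like 12 or 20). The response model and its coefficients below are invented for illustration:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: n must be a power of 2."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# 8-run screening design: drop the all-ones column, leaving 7 two-level
# factor columns, all mutually orthogonal.
H = hadamard(8)
X = H[:, 1:]                       # 8 runs x 7 factors, entries +/-1

# Synthetic response: only factors 0 and 3 matter (assumed coefficients)
rng = np.random.default_rng(4)
y = 10 + 4.0 * X[:, 0] - 2.5 * X[:, 3] + rng.normal(0, 0.1, 8)

# Main effect of each factor: mean(y at +1) - mean(y at -1)
effects = X.T @ y / 4.0
important = np.argsort(-np.abs(effects))[:2]
print(sorted(important.tolist()))  # → [0, 3]
```

The orthogonal columns are what let 7 factors be screened in 8 runs; the surviving factors would then go into the Box-Behnken/RSM stage.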

  6. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; D'Souza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
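
The conventional (non-UDU) Schmidt-Kalman measurement update can be sketched directly: the considered parameter's covariance inflates the innovation covariance and gain, but the parameter itself is never corrected and its variance never shrinks. The toy 2-state, 1-bias model below is an assumption for illustration, not the paper's navigation filter:

```python
import numpy as np

Hx = np.array([[1.0, 0.0]])   # measurement maps to position
Hp = np.array([[1.0]])        # measurement also sees the considered bias
R = np.array([[0.04]])        # measurement noise variance

x = np.array([0.0, 0.0])      # estimated state
Pxx = np.eye(2)               # state covariance
Pxp = np.zeros((2, 1))        # state/parameter cross covariance
Ppp = np.array([[0.25]])      # bias variance: considered, never reduced

z = np.array([1.2])           # one measurement
innov = z - Hx @ x            # bias treated as zero-mean, so no Hp term

# Innovation covariance includes the considered parameter's uncertainty
S = (Hx @ Pxx @ Hx.T + Hx @ Pxp @ Hp.T
     + Hp @ Pxp.T @ Hx.T + Hp @ Ppp @ Hp.T + R)
K = (Pxx @ Hx.T + Pxp @ Hp.T) @ np.linalg.inv(S)

x = x + K @ innov
Pxx = Pxx - K @ (Hx @ Pxx + Hp @ Pxp.T)
Pxp = Pxp - K @ (Hx @ Pxp + Hp @ Ppp)
# Ppp intentionally left unchanged: the parameter is considered, not estimated
```

Because `S` is larger than the bias-blind value, the gain is smaller and `Pxx` stays honestly inflated; the cross covariance `Pxp` turns negative, recording how the state estimate has absorbed part of the unestimated bias.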

  7. Influence of multi-step heat treatments in creep age forming of 7075 aluminum alloy: Optimization for springback, strength and exfoliation corrosion

    SciTech Connect

    Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.

    2012-11-15

    Multi-step heat treatments comprising high temperature forming (150 °C/24 h plus 190 °C for several minutes) and subsequent low temperature forming (120 °C for 24 h) are developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without reduction in tensile properties. The results show that the multi-step heat treatment gives the lowest springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and more stress relaxation at higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement in the size and the inter-particle distance of the grain boundary precipitates. Furthermore, the achievement of the high strength is related to the uniform distribution of ultrafine η′ precipitates within grains. - Highlights: ► Creep age forming developed for manufacturing of aircraft wing panels by aluminum alloy. ► A good combination of properties with minimal springback is required in this component. ► This requirement can be improved through appropriate heat treatments. ► Multi-step cycles developed in creep age forming of AA7075 for improving springback and properties. ► Results indicate simultaneous enhancement of properties and shape accuracy (lower springback).

  8. New filter efficiency test for future nuclear grade HEPA filters

    SciTech Connect

    Bergman, W.; Foiles, L.; Mariner, C.; Kincy, M.

    1988-08-17

    We have developed a new test procedure for evaluating filter penetrations as low as 10^-9 at 0.1-μm particle diameter. In comparison, the present US nuclear filter certification test has a lower penetration limit of 10^-5. Our new test procedure is unique not only in its much higher sensitivity, but also in avoiding the undesirable effect of clogging the filter. Our new test procedure consists of a two-step process: (1) We challenge the test filter with a very high concentration of heterodisperse aerosol for a short time while passing all or a significant portion of the filtered exhaust into an inflatable bag; (2) We then measure the aerosol concentration in the bag using a new laser particle counter sensitive to 0.07-μm diameter. The ratio of particle concentration in the bag to the concentration challenging the filter gives the filter penetration as a function of particle diameter. The bag functions as a particle accumulator for subsequent analysis to minimize the filter exposure time. We have studied the particle losses in the bag over time and find that they are negligible when the measurements are taken within one hour. We also compared filter penetration measurements taken in the conventional direct-sampling method with the indirect bag-sampling method and found excellent agreement. 6 refs., 18 figs., 1 tab.

  9. Aquatic Plants Aid Sewage Filter

    NASA Technical Reports Server (NTRS)

    Wolverton, B. C.

    1985-01-01

    Method of wastewater treatment combines micro-organisms and aquatic plant roots in a filter bed. Treatment occurs as liquid flows up through the system. Micro-organisms, attached to the rocky base material of the filter, act in several steps to decompose organic matter in the wastewater. Vascular aquatic plants (typically reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from the water through finely divided roots.

  10. Modelling of diffraction grating based optical filters for fluorescence detection of biomolecules

    PubMed Central

    Kovačič, M.; Krč, J.; Lipovšek, B.; Topič, M.

    2014-01-01

    The detection of biomolecules based on fluorescence measurements is a powerful diagnostic tool for the acquisition of genetic, proteomic and cellular information. One key performance limiting factor remains the integrated optical filter, which is designed to reject strong excitation light while transmitting weak emission (fluorescent) light to the photodetector. Conventional filters have several disadvantages. For instance absorbing filters, like those made from amorphous silicon carbide, exhibit low rejection ratios, especially in the case of small Stokes’ shift fluorophores (e.g. green fluorescent protein GFP with λexc = 480 nm and λem = 510 nm), whereas interference filters comprising many layers require complex fabrication. This paper describes an alternative solution based on dielectric diffraction gratings. These filters are not only highly efficient but require a smaller number of manufacturing steps. Using FEM-based optical modelling as a design optimization tool, three filtering concepts are explored: (i) a diffraction grating fabricated on the surface of an absorbing filter, (ii) a diffraction grating embedded in a host material with a low refractive index, and (iii) a combination of an embedded grating and an absorbing filter. Both concepts involving an embedded grating show high rejection ratios (over 100,000) for the case of GFP, but also high sensitivity to manufacturing errors and variations in the incident angle of the excitation light. Despite this, simulations show that a 60 times improvement in the rejection ratio relative to a conventional flat absorbing filter can be obtained using an optimized embedded diffraction grating fabricated on top of an absorbing filter. PMID:25071964

  11. Optimization of pressurized liquid extraction using a multivariate chemometric approach and comparison of solid-phase extraction cleanup steps for the determination of polycyclic aromatic hydrocarbons in mosses.

    PubMed

    Foan, L; Simon, V

    2012-09-21

    A factorial design was used to optimize the extraction of polycyclic aromatic hydrocarbons (PAHs) from mosses, plants used as biomonitors of air pollution. The analytical procedure consists of pressurized liquid extraction (PLE) followed by solid-phase extraction (SPE) cleanup, in association with analysis by high performance liquid chromatography coupled with fluorescence detection (HPLC-FLD). For method development, homogeneous samples were prepared with large quantities of the mosses Isothecium myosuroides Brid. and Hypnum cupressiforme Hedw., collected from a Spanish Nature Reserve. A factorial design was used to identify the optimal PLE operational conditions: 2 static cycles of 5 min at 80 °C. The analytical procedure performed with PLE showed similar recoveries (∼70%) and total PAH concentrations (∼200 ng g(-1)) as found using Soxtec extraction, with the advantage of reducing solvent consumption by a factor of 3 (30 mL against 100 mL per sample) and taking a fifth of the time (24 samples extracted automatically in 8 h against 2 samples in 3.5 h). The performance of the SPE normal phases (NH(2), Florisil, silica and activated alumina) generally used for organic matrix cleanup was also compared. Florisil appeared to be the most selective phase and ensured the highest PAH recoveries. The optimal analytical procedure was validated with a reference material and applied to moss samples from a remote Spanish site in order to determine spatial and inter-species variability. PMID:22885040

  12. Particle Kalman Filtering: A Nonlinear Framework for Ensemble Kalman Filters

    NASA Astrophysics Data System (ADS)

    Hoteit, Ibrahim; Luo, Xiaodong; Pham, Dinh-Tuan; Moroz, Irene M.

    2010-09-01

    Optimal nonlinear filtering consists of sequentially determining the conditional probability distribution functions (pdf) of the system state, given the information of the dynamical and measurement processes and the previous measurements. Once the pdfs are obtained, one can determine different estimates of the system state, for instance the minimum variance estimate or the maximum a posteriori estimate. It can be shown that many filters, including the Kalman filter (KF) and the particle filter (PF), can be derived from this sequential Bayesian estimation framework. In this contribution, we present a Gaussian mixture-based framework, called the particle Kalman filter (PKF), and discuss how the different ensemble Kalman filter (EnKF) methods can be derived as simplified variants of the PKF. We also discuss approaches to reducing the computational burden of the PKF in order to make it suitable for complex geosciences applications. We use the strongly nonlinear Lorenz-96 model to illustrate the performance of the PKF.
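
    The sequential Bayesian recursion underlying the PF can be illustrated with a minimal bootstrap particle filter on a toy one-dimensional model. This is a sketch only: the model, noise levels and particle count are assumptions for illustration and are unrelated to the authors' PKF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500):
    """Minimal bootstrap particle filter for the toy model
    x_t = 0.9*x_{t-1} + process noise,  y_t = x_t + obs noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate each particle through the dynamical model.
        particles = 0.9 * particles + rng.normal(0.0, 0.5, n_particles)
        # Weight by the Gaussian measurement likelihood (sigma = 0.5).
        weights = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
        weights /= weights.sum()
        # Minimum-variance estimate = posterior mean.
        estimates.append(np.sum(weights * particles))
        # Resample to avoid weight degeneracy.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Simulate a short trajectory and filter its noisy observations.
true_x, ys = [], []
x = 0.0
for _ in range(50):
    x = 0.9 * x + rng.normal(0.0, 0.5)
    true_x.append(x)
    ys.append(x + rng.normal(0.0, 0.5))

est = bootstrap_pf(ys)
rmse = np.sqrt(np.mean((est - np.array(true_x)) ** 2))
print(rmse)
```

    The filtered estimate should track the state more closely than the raw observations (whose error standard deviation is 0.5 here).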

  13. Water Filters

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Seeking to find a more effective method of filtering potable water that was highly contaminated, Mike Pedersen, founder of Western Water International, learned that NASA had conducted extensive research in methods of purifying water on board manned spacecraft. The key is Aquaspace Compound, a proprietary WWI formula that scientifically blends various types of granular activated charcoal with other active and inert ingredients. Aquaspace systems remove some substances, such as chlorine, by atomic adsorption; other types of organic chemicals by mechanical filtration; and still others by catalytic reaction. Aquaspace filters are finding wide acceptance in industrial, commercial, residential and recreational applications in the U.S. and abroad.

  14. Sigma Filter

    NASA Technical Reports Server (NTRS)

    Balgovind, R. C.

    1985-01-01

    In the GLA fourth-order model, the topography must be smoothed to remove the Gibbs phenomenon, which occurs whenever a Fourier series is truncated. Sigma factors were introduced to reduce the Gibbs phenomenon: the smoothed Fourier series is simply the original Fourier series with each coefficient multiplied by the corresponding sigma factor. The operator can be applied repeatedly to obtain a higher-order sigma-filtered field, and it is easily implemented using the FFT. This filter is found to be beneficial in deriving the topography.
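
    A minimal sketch of the sigma-factor idea (parameters are illustrative; the GLA model's actual implementation is not reproduced here): truncating the Fourier series of a step produces the Gibbs overshoot, and multiplying the retained coefficients by Lanczos sigma factors, sigma_k = sinc(k/m), damps it.

```python
import numpy as np

def truncated_series(field, m, use_sigma=False):
    """Reconstruct a periodic field from its Fourier modes |k| <= m,
    optionally multiplying the kept coefficients by Lanczos sigma
    factors sinc(k/m) to damp the Gibbs phenomenon."""
    n = field.size
    coeffs = np.fft.fft(field)
    k = np.abs(np.fft.fftfreq(n) * n)        # integer wavenumbers
    coeffs = np.where(k <= m, coeffs, 0.0)   # truncate the series
    if use_sigma:
        # np.sinc(x) = sin(pi x)/(pi x); sigma_0 = 1, sigma_m = 0.
        coeffs = coeffs * np.sinc(k / m)
    return np.fft.ifft(coeffs).real

# A square wave is a step-like "topography": truncation alone
# overshoots near the discontinuity; sigma filtering suppresses it.
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
square = np.where(x < np.pi, 1.0, -1.0)
raw = truncated_series(square, 32)
smooth = truncated_series(square, 32, use_sigma=True)
print(raw.max(), smooth.max())
```

    Repeated application (multiplying by the sigma factors more than once) yields the higher-order filtered field mentioned above.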

  15. Carbon nanotube filters

    NASA Astrophysics Data System (ADS)

    Srivastava, A.; Srivastava, O. N.; Talapatra, S.; Vajtai, R.; Ajayan, P. M.

    2004-09-01

    Over the past decade of nanotube research, a variety of organized nanotube architectures have been fabricated using chemical vapour deposition. The idea of using nanotube structures in separation technology has been proposed, but building macroscopic structures that have controlled geometric shapes, density and dimensions for specific applications still remains a challenge. Here we report the fabrication of freestanding monolithic uniform macroscopic hollow cylinders having radially aligned carbon nanotube walls, with diameters and lengths up to several centimetres. These cylindrical membranes are used as filters to demonstrate their utility in two important settings: the elimination of multiple components of heavy hydrocarbons from petroleum-a crucial step in post-distillation of crude oil-with a single-step filtering process, and the filtration of bacterial contaminants such as Escherichia coli or the nanometre-sized poliovirus (~25 nm) from water. These macro filters can be cleaned for repeated filtration through ultrasonication and autoclaving. The exceptional thermal and mechanical stability of nanotubes, and the high surface area, ease and cost-effective fabrication of the nanotube membranes may allow them to compete with ceramic- and polymer-based separation membranes used commercially.

  16. Carbon nanotube filters.

    PubMed

    Srivastava, A; Srivastava, O N; Talapatra, S; Vajtai, R; Ajayan, P M

    2004-09-01

    Over the past decade of nanotube research, a variety of organized nanotube architectures have been fabricated using chemical vapour deposition. The idea of using nanotube structures in separation technology has been proposed, but building macroscopic structures that have controlled geometric shapes, density and dimensions for specific applications still remains a challenge. Here we report the fabrication of freestanding monolithic uniform macroscopic hollow cylinders having radially aligned carbon nanotube walls, with diameters and lengths up to several centimetres. These cylindrical membranes are used as filters to demonstrate their utility in two important settings: the elimination of multiple components of heavy hydrocarbons from petroleum-a crucial step in post-distillation of crude oil-with a single-step filtering process, and the filtration of bacterial contaminants such as Escherichia coli or the nanometre-sized poliovirus ( approximately 25 nm) from water. These macro filters can be cleaned for repeated filtration through ultrasonication and autoclaving. The exceptional thermal and mechanical stability of nanotubes, and the high surface area, ease and cost-effective fabrication of the nanotube membranes may allow them to compete with ceramic- and polymer-based separation membranes used commercially. PMID:15286755

  17. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag at the first structural frequency, hence phase-stabilizing the first Ares-I flex mode. To minimize rigid-body performance impacts, constraints in the optimization algorithm gave priority to minimizing the bandwidth reduction introduced by the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time-domain simulation.
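
    The phase-stabilization idea can be illustrated with a toy second-order low-pass "bending filter". The corner frequency, damping, flex frequency and control bandwidth below are invented for illustration and are not the Ares-I values: the filter attenuates and adds phase lag at the flex mode while leaving the low-frequency rigid-body band nearly untouched.

```python
import cmath
import math

def lag_filter_response(f_hz, f_c=2.0, zeta=0.7):
    """Frequency response of a second-order low-pass 'bending filter'
    H(s) = wc^2 / (s^2 + 2*zeta*wc*s + wc^2), evaluated at f_hz."""
    wc = 2 * math.pi * f_c
    s = 1j * 2 * math.pi * f_hz
    return wc**2 / (s**2 + 2 * zeta * wc * s + wc**2)

# Hypothetical first flex mode at 5 Hz; rigid-body band around 0.3 Hz.
h_flex = lag_filter_response(5.0)
h_rigid = lag_filter_response(0.3)

gain_db = 20 * math.log10(abs(h_flex))           # attenuation at flex mode
phase_deg = math.degrees(cmath.phase(h_flex))    # phase lag at flex mode
print(gain_db, phase_deg, abs(h_rigid))
```

    A constrained optimizer would tune such filter parameters to maximize gain/phase margins while keeping the rigid-body gain (here `abs(h_rigid)`) near unity.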

  18. Deconvolution filtering: Temporal smoothing revisited

    PubMed Central

    Bush, Keith; Cisler, Josh

    2014-01-01

    Inferences made from analysis of BOLD data regarding neural processes are potentially confounded by multiple competing sources: cardiac and respiratory signals, thermal effects, scanner drift, and motion-induced signal intensity changes. To address this problem, we propose deconvolution filtering, a process of systematically deconvolving and reconvolving the BOLD signal via the hemodynamic response function such that the resultant signal is composed of maximally likely neural and neurovascular signals. To test the validity of this approach, we compared the accuracy of BOLD signal variants (i.e., unfiltered, deconvolution filtered, band-pass filtered, and optimized band-pass filtered BOLD signals) in identifying useful properties of highly confounded, simulated BOLD data: (1) reconstructing the true, unconfounded BOLD signal, (2) correlation with the true, unconfounded BOLD signal, and (3) reconstructing the true functional connectivity of a three-node neural system. We also tested this approach by detecting task activation in BOLD data recorded from healthy adolescent girls (control) during an emotion processing task. Results for the estimation of functional connectivity of simulated BOLD data demonstrated that analysis (via standard estimation methods) using deconvolution filtered BOLD data achieved superior performance to analysis performed using unfiltered BOLD data and was statistically similar to well-tuned band-pass filtered BOLD data. Contrary to band-pass filtering, however, deconvolution filtering is built upon physiological arguments and has the potential, at low TR, to match the performance of an optimal band-pass filter. The results from task estimation on real BOLD data suggest that deconvolution filtering provides superior or equivalent detection of task activations relative to comparable analyses on unfiltered signals and also decreases the variance of the estimate. In turn, these results suggest that standard preprocessing of the BOLD signal ignores significant sources of noise that can be effectively removed without damaging the underlying signal. PMID:24768215
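
    A sketch of the deconvolve-then-reconvolve idea using a regularized (Wiener-like) frequency-domain inverse. The toy HRF, noise level and regularization constant are assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

def hrf(t):
    """Toy gamma-shaped hemodynamic response function (illustrative)."""
    return t**5 * np.exp(-t) / 120.0

n = 256
t = np.arange(n)                       # time in TR units (assumed TR = 1)
h = hrf(t)

# Simulated sparse "neural" events convolved with the HRF, plus noise.
neural = (rng.random(n) < 0.05).astype(float)
clean = np.convolve(neural, h)[:n]
bold = clean + 0.01 * rng.normal(size=n)

# Deconvolution filtering: regularized deconvolution by the HRF in the
# frequency domain, then reconvolution with the same HRF.
H = np.fft.rfft(h, n)
lam = 0.01                             # regularization constant (assumed)
B = np.fft.rfft(bold)
neural_hat = np.fft.irfft(B * np.conj(H) / (np.abs(H)**2 + lam), n)
filtered = np.convolve(neural_hat, h)[:n]

# Relative reconstruction error of the confound-free signal.
err = np.linalg.norm(filtered - clean) / np.linalg.norm(bold)
print(err)
```

    The reconvolution step restricts the result to signals expressible as neural input passed through the HRF, which is what suppresses noise components inconsistent with hemodynamics.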

  19. Phosphorus Filter

    USGS Multimedia Gallery

    Tom Kehler, fishery biologist at the U.S. Fish and Wildlife Service's Northeast Fishery Center in Lamar, Pennsylvania, checks the flow rate of water leaving a phosphorus filter column. The USGS has pioneered a new use for acid mine drainage residuals that are currently a disposal challenge, usi...

  20. Stepped nozzle

    DOEpatents

    Sutton, George P.

    1998-01-01

    An insert which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment.

  1. Stepped nozzle

    DOEpatents

    Sutton, G.P.

    1998-07-14

    An insert is described which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment. 5 figs.

  2. Testing Dual Rotary Filters - 12373

    SciTech Connect

    Herman, D.T.; Fowley, M.D.; Stefanko, D.B.; Shedd, D.A.; Houchens, C.L.

    2012-07-01

    The Savannah River National Laboratory (SRNL) installed and tested two hydraulically connected SpinTek® Rotary Micro-filter units to determine the behavior of a multiple filter system and develop a multi-filter automated control scheme. Developing and testing the control of multiple filters was the next step in the development of the rotary filter for deployment. The test stand was assembled using as much as possible of the hardware planned for use in the field, including instrumentation and valving. The control scheme developed will serve as the basis for the scheme used in deployment. The multi-filter setup was controlled via an Emerson DeltaV control system running version 10.3 software. Emerson model MD controllers were installed to run the control algorithms developed during this test. Savannah River Remediation (SRR) Process Control Engineering personnel developed the software used to operate the process test model. While a variety of control schemes were tested, two primary algorithms provided extremely stable control as well as significant resistance to process upsets that could lead to equipment interlock conditions. The control system was tuned to provide satisfactory response to changing conditions during the operation of the multi-filter system. Stability was maintained through the startup and shutdown of one of the filter units while the second was still in operation. The equipment selected for deployment, including the concentrate discharge control valve, the pressure transmitters, and the flow meters, performed well. Automation of the valve control integrated well with the control scheme and, when used in concert with the other control variables, allowed automated control of the dual rotary filter system. Experience acquired with multi-filter system behavior and with the system layout during this test helped to identify areas where the current deployment rotary filter installation design could be improved. Completion of this testing provides the necessary information on the control and system behavior that will be used in deployment on actual waste. (authors)

  3. The optimization of essential oils supercritical CO2 extraction from Lavandula hybrida through static-dynamic steps procedure and semi-continuous technique using response surface method

    PubMed Central

    Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

    2015-01-01

    Aim: The aim of this study was to examine and evaluate crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15 min), 24 min (dynamic: 8×3 min), in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636
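
    The response-surface step can be sketched as fitting a full quadratic model to yield data and solving for the stationary point of the fitted surface. The data below are synthetic, with an invented optimum; they do not reproduce the paper's results.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic yield measurements on a pressure/temperature grid, with a
# quadratic optimum placed (arbitrarily) near 109 bar and 48.5 C.
P, T = np.meshgrid(np.linspace(90, 130, 9), np.linspace(35, 65, 9))
true_yield = 4.8 - 0.002 * (P - 109)**2 - 0.004 * (T - 48.5)**2
y = (true_yield + 0.01 * rng.normal(size=P.shape)).ravel()

# Fit y = b0 + b1*P + b2*T + b3*P^2 + b4*T^2 + b5*P*T by least squares.
p, t = P.ravel(), T.ravel()
X = np.column_stack([np.ones_like(p), p, t, p**2, t**2, p * t])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: grad = 0  =>  [[2*b3, b5], [b5, 2*b4]] @ [P, T] = -[b1, b2]
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(A, -b[1:3])
print(opt)
```

    In practice the design points would come from a central composite or factorial design rather than a full grid, but the quadratic fit and stationary-point solve are the same.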

  4. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    SciTech Connect

    Monsefi, Farid; Carlsson, Linus; Silvestrov, Sergei; Rančić, Milica; Otterskog, Magnus

    2014-12-10

    To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. Within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen, and the outside of the boundary takes the form of a Perfect Electric Conducting (PEC) surface. A semi-norm is applied in the computer implementation to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be free space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. Numerical instability of the algorithm can be avoided rather easily by respecting the Courant stability condition, which is frequently used in applying the general FDTD algorithm.
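
    The Courant stability condition mentioned above fixes the largest stable FDTD time step from the spatial step sizes; a minimal sketch (the 1 mm cell size is an illustrative choice):

```python
import math

def max_stable_dt(dx, dy, c=299792458.0):
    """2-D FDTD Courant (CFL) limit: dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2))."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2))

dx = dy = 1e-3            # 1 mm cells (illustrative)
dt = max_stable_dt(dx, dy)
print(dt)                 # a few picoseconds for mm-scale cells
```

    Any time step at or below this limit keeps the explicit update scheme stable; halving the spatial step therefore also halves the allowable time step.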

  5. Microfabrication of three-dimensional filters for liposome extrusion

    NASA Astrophysics Data System (ADS)

    Baldacchini, Tommaso; Nuez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben

    2015-03-01

    Liposomes play a relevant role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidic chip for liposome extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete lab-on-a-chip solution for liposome preparation, from vesicle self-assembly to optical characterization.

  6. Holographic Photopolymer Linear Variable Filter with Enhanced Blue Reflection

    PubMed Central

    2015-01-01

    A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443

  7. Holographic photopolymer linear variable filter with enhanced blue reflection.

    PubMed

    Moein, Tania; Ji, Dengxin; Zeng, Xie; Liu, Ke; Gan, Qiaoqiang; Cartwright, Alexander N

    2014-03-12

    A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443

  8. Imaging task-based optimal kV and mA selection for CT radiation dose reduction: from filtered backprojection (FBP) to statistical model based iterative reconstruction (MBIR)

    NASA Astrophysics Data System (ADS)

    Li, Ke; Gomez-Cardona, Daniel; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-03-01

    Optimal selections of tube potential (kV) and tube current (mA) are essential in maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take those kinds of constraints, as well as the specific diagnostic imaging task, into consideration. For conventional quasi-linear CT systems employing a linear filtered back-projection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on kV and mA settings. In these cases, zero-frequency analysis such as contrast-to-noise ratio (CNR) or dose-normalized CNR (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach based on modern signal detection theory and the corresponding frequency-dependent analysis has been proposed to perform the kV and mA optimization for both FBP and MBIR. By performing exhaustive measurements of a task-based detectability index through the technically accessible kV-mA parameter space, iso-detectability contours were generated and overlaid on top of iso-dose contours, from which the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.
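
    For the quasi-linear FBP case, the zero-frequency kV-mA selection reduces to a grid search over CNR subject to a dose budget. The contrast, dose and noise models below are crude illustrative assumptions, not measured CT physics:

```python
import numpy as np

# Toy models (assumed): contrast falls with kV, tube output and dose rise
# roughly with kV^2 and linearly with mA, quantum noise ~ 1/sqrt(dose).
kvs = np.array([80, 100, 120, 140], dtype=float)
mas = np.arange(50, 501, 50, dtype=float)
KV, MA = np.meshgrid(kvs, mas, indexing="ij")

contrast = 120.0 / KV                 # assumed kV dependence of contrast
dose = MA * (KV / 120.0) ** 2         # assumed dose model (arbitrary units)
noise = 1.0 / np.sqrt(dose)
cnr = contrast / noise                # = contrast * sqrt(dose)

# Grid search: maximize CNR over all settings within a dose budget.
feasible = dose <= 300.0
score = np.where(feasible, cnr, -np.inf)
i, j = np.unravel_index(np.argmax(score), score.shape)
print(kvs[i], mas[j])
```

    Under these toy scalings the search favors low kV with the highest mA the dose budget allows; the paper's point is that for MBIR such zero-frequency scores are no longer sufficient, and a frequency-dependent detectability index must replace `cnr` in the search.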

  9. Process optimization of positive novolac resists for electron-beam lithography resist characterization using single or multiple development steps with either a sodium-hydroxide or metal ion-free developer

    NASA Astrophysics Data System (ADS)

    Dean, Robert L.; Flores, Gary E.

    1993-09-01

    OCG895i and Tokyo Ohka OEBR2000 (both commercially available) and two experimental resists were evaluated by experimental design. The design factors investigated included developer normality, softbake temperature, and develop time with a sodium hydroxide-based developer. The design responses included optimum dose, remaining film thickness, and dose latitude (change in critical dimension per unit dose). The best results were given by AZ141C, an experimental resist from Hoechst. At a 90 °C prebake temperature, AZ141C could be imaged at 4.0 μC/cm² with good film thickness retention and dose latitude. A second set of optimization experiments was done evaluating a metal ion-free developer. Finally, multiple-develop processing was evaluated for improving process latitude and film thickness loss and for minimizing the required dose. A two-step process shows promise: it consists of a short, high-normality initial develop to break through the resist surface inhibition layer, followed by a second, low-normality develop. Another sequence of statistically designed experiments was performed to optimize this scheme, and results of the optimizations are presented.

  10. Plasmonic filters.

    SciTech Connect

    Passmore, Brandon Scott; Shaner, Eric Arthur; Barrick, Todd A.

    2009-09-01

    Metal films perforated with subwavelength hole arrays have been shown to demonstrate an effect known as Extraordinary Transmission (EOT). In EOT devices, optical transmission passbands arise that can have up to 90% transmission and a bandwidth that is only a few percent of the designed center wavelength. By placing a tunable dielectric in proximity to the EOT mesh, one can tune the center frequency of the passband. We have demonstrated over 1 micron of passive tuning in structures designed for an 11 micron center wavelength. If a suitable midwave (3-5 micron) tunable dielectric (perhaps BaTiO₃) were integrated with an EOT mesh designed for midwave operation, it is possible that a fast, voltage-tunable, low-temperature filter solution could be demonstrated with a several hundred nanometer passband. Such an element could, for example, replace certain components in a filter wheel solution.

  11. Water Filter

    NASA Astrophysics Data System (ADS)

    1982-01-01

    A compact, lightweight electrolytic water sterilizer available through Ambassador Marketing, generates silver ions in concentrations of 50 to 100 parts per billion in water flow system. The silver ions serve as an effective bactericide/deodorizer. Tap water passes through filtering element of silver that has been chemically plated onto activated carbon. The silver inhibits bacterial growth and the activated carbon removes objectionable tastes and odors caused by addition of chlorine and other chemicals in municipal water supply. The three models available are a kitchen unit, a "Tourister" unit for portable use while traveling and a refrigerator unit that attaches to the ice cube water line. A filter will treat 5,000 to 10,000 gallons of water.

  12. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    SciTech Connect

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I; Rozendaal, R; Spreeuw, H; Herk, M van

    2014-06-15

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distributions is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow, where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distributions. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.

  13. Eyeglass Filters

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Biomedical Optical Company of America's suntiger lenses eliminate more than 99% of harmful light wavelengths. The NASA-derived lenses make scenes more vivid in color and also increase the wearer's visual acuity. Distant objects, even on hazy days, appear crisp and clear; mountains seem closer, glare is greatly reduced, clouds stand out. Daytime use protects the retina from bleaching in bright light, thus improving night vision. Filtering helps prevent a variety of eye disorders, in particular cataracts and age-related macular degeneration.

  14. The Lockheed alternate partial polarizer universal filter

    NASA Technical Reports Server (NTRS)

    Title, A. M.

    1976-01-01

    A tunable birefringent filter using an alternate partial polarizer design has been built. The filter has a transmission of 38% in polarized light. Its full width at half maximum is 0.09 Å at 5500 Å. It is tunable from 4500 to 8500 Å by means of stepping-motor-actuated rotating half-wave plates and polarizers. Wavelength commands and thermal compensation commands are generated by a PDP-11/10 minicomputer. The alternate partial polarizer universal filter is compared with the universal birefringent filter, and the design techniques, construction methods, and filter performance are discussed in some detail. Based on the experience with this filter, some conclusions regarding the future of birefringent filters are elaborated.

  15. Volterra filters for quantum estimation and detection

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2015-12-01

    The implementation of optimal statistical inference protocols for high-dimensional quantum systems is often computationally expensive. To avoid the difficulties associated with optimal techniques, here I propose an alternative approach to quantum estimation and detection based on Volterra filters. Volterra filters have a clear hierarchy of computational complexities and performances, depend only on finite-order correlation functions, and are applicable to systems with no simple Markovian model. These features make Volterra filters appealing alternatives to optimal nonlinear protocols for the inference and control of complex quantum systems. Applications of the first-order Volterra filter to continuous-time quantum filtering, the derivation of a Heisenberg-picture uncertainty relation, quantum state tomography, and qubit readout are discussed.
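
    For reference, a truncated second-order discrete Volterra filter evaluates y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j], the first two terms of the hierarchy the abstract refers to. The kernels below are invented for illustration:

```python
import numpy as np

def volterra2(x, h1, h2):
    """Output of a truncated second-order Volterra filter:
    y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]."""
    m = len(h1)
    y = np.zeros(len(x))
    xp = np.concatenate([np.zeros(m - 1), x])   # zero-pad past samples
    for n in range(len(x)):
        w = xp[n:n + m][::-1]                   # x[n], x[n-1], ..., x[n-m+1]
        y[n] = h1 @ w + w @ h2 @ w              # linear + quadratic terms
    return y

# First-order kernel = a moving average; second-order kernel adds a
# quadratic (energy-like) term. Kernel values are illustrative only.
h1 = np.array([0.5, 0.3, 0.2])
h2 = 0.1 * np.eye(3)
x = np.array([1.0, 0.0, 0.0, 2.0])
y = volterra2(x, h1, h2)
print(y)   # -> approximately [0.6, 0.4, 0.3, 1.4]
```

    The computational hierarchy is explicit here: a first-order filter needs m coefficients, the second-order term m², and so on, which is the complexity/performance trade-off the abstract highlights.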

  16. Sub-wavelength efficient polarization filter (SWEP filter)

    DOEpatents

    Simpson, Marcus L.; Simpson, John T.

    2003-12-09

    A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The two SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS being spaced apart by a distance of at least one half an optical wavelength.

  17. Organic solvent-free air-assisted liquid-liquid microextraction for optimized extraction of illegal azo-based dyes and their main metabolite from spices, cosmetics and human bio-fluid samples in one step.

    PubMed

    Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Sabzalian, Sedigheh

    2015-08-15

    Air-assisted liquid-liquid microextraction (AALLME) has unique capabilities to develop as an organic solvent-free and one-step microextraction method, applying ionic liquids as extraction solvent and avoiding the centrifugation step. Herein, a novel and simple eco-friendly method, termed one-step air-assisted liquid-liquid microextraction (OS-AALLME), was developed to extract some illegal azo-based dyes (including Sudan I to IV, and Orange G) from food and cosmetic products. A series of experiments were investigated to achieve the most favorable conditions (including extraction solvent: 77 μL of 1-Hexyl-3-methylimidazolium hexafluorophosphate; sample pH 6.3, without salt addition; and extraction cycles: 25 during 100 s of sonication) using a central composite design strategy. Under these conditions, limits of detection, linear dynamic ranges, enrichment factors and consumptive indices were in the range of 3.9-84.8 ng mL⁻¹, 0.013-3.1 μg mL⁻¹, 33-39, and 0.13-0.15, respectively. The results showed that, besides its simplicity, speed, and avoidance of hazardous disperser and extraction solvents, OS-AALLME is a sufficiently sensitive and efficient method for the extraction of these dyes from complex matrices. After optimization and validation, OS-AALLME was applied to estimate the concentration of 1-amino-2-naphthol in human bio-fluids as a main reductive metabolite of the selected dyes. Levels of 1-amino-2-naphthol in plasma and urinary excretion suggested that this compound may be used as a new potential biomarker of these dyes in the human body. PMID:26149246

  18. Testing of a transmission-filter coronagraph for ground-based imaging of exoplanets

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Wang, Xue

    2010-07-01

    We present the latest laboratory test of a new coronagraph using a step-transmission filter at visible wavelengths. The primary goal of this work is to test the feasibility and stability of the coronagraph, which is designed for ground-based telescopes, especially those with a central obstruction and spider structures. The transmission filter is circularly symmetrically coated with inconel film on one surface and manufactured with a precisely position-controlled physical mask during the coating procedure. At first, the transmission tolerance of the filter is controlled within 5% for each circular step. The target contrast of the coronagraph is set to be 10^-5 to 10^-7 at an inner working angle around 5λ/D. Based on the high-contrast imaging test-bed in the laboratory, the point spread function image of the coronagraph is obtained and it has delivered a contrast better than 10^-6 at 5λ/D. As a follow-up effort, the transmission error should be controlled to within 2% and the transmission for such a filter will be optimized in the near-infrared wavelengths, which should deliver better performance. Finally, it is shown that the transmission-filter coronagraph is a promising technique for the direct imaging of exoplanets from the ground.

  19. Nonlinear Filtering with Fractional Brownian Motion

    SciTech Connect

    Amirdjanova, A.

    2002-12-19

    Our objective is to study a nonlinear filtering problem for the observation process perturbed by a Fractional Brownian Motion (FBM) with Hurst index 1/2 < H < 1; an equation for the optimal filter is derived.

  20. Rocket noise filtering system using digital filters

    NASA Technical Reports Server (NTRS)

    Mauritzen, David

    1990-01-01

    A set of digital filters is designed to filter rocket noise to various bandwidths. The filters are designed to have constant group delay and are implemented in software on a general purpose computer. The Parks-McClellan algorithm is used. Preliminary tests are performed to verify the design and implementation. An analog filter which was previously employed is also simulated.
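
    The constant group delay mentioned above comes from using symmetric (linear-phase) FIR taps. As an illustration only, the sketch below designs a lowpass filter with a windowed-sinc method rather than the Parks-McClellan algorithm used in the record (an equiripple Parks-McClellan design would typically be produced with a tool such as scipy.signal.remez); the tap count and cutoff are arbitrary choices.

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc lowpass FIR design; cutoff is normalized (0..0.5 of fs).
    Symmetric taps give exactly linear phase, i.e. a constant group delay of
    (num_taps - 1) / 2 samples."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2
        # ideal lowpass impulse response (sinc), centered on the filter
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        # Hamming window to control passband/stopband ripple
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(h * w)
    gain = sum(taps)
    return [t / gain for t in taps]  # normalize DC gain to exactly 1

taps = lowpass_fir(31, 0.1)
```

    The symmetry of the taps is what guarantees the constant group delay, regardless of which design method produced them.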

  1. The Next Step in Ice Flow Measurement from Optical Imagery: Comprehensive Mapping Of Ice Sheet Flow in Landsat 8 Imagery Using Spatial Frequency Filtering, Enabled by High Radiometric Sensitivity

    NASA Astrophysics Data System (ADS)

    Fahnestock, M. A.; Scambos, T. A.; Klinger, M. J.

    2014-12-01

    The advent of large area satellite coverage in the visible spectrum enabled satellite-based tracking of ice sheet flow just over twenty years ago. Following this, rapid development of techniques for imaging radar data enabled the wide-area mapping and time series coverage that SAR has brought to the documentation of changing ice discharge. We report on the maturation of feature tracking in visible-band satellite imagery of the ice sheets enabled by the high radiometric resolution and accurate geolocation delivered by Landsat 8, and apply this to mapping ice flow in the interiors of Antarctica and Greenland. The high radiometric resolution of Landsat 8 enables one to track subtle patterns on the surface of the ice sheet, unique at spatial scales of a few hundred meters, between images separated by multiple orbit cycles. In areas with significant dynamic topography generated by ice flow, this requires use of simple spatial filtering techniques first applied by Scambos et al. (1992). The result is densely sampled maps of surface motion that begin to rival the coverage available from SAR speckle tracking and interferometry. Displacement accuracy can approach one tenth of a pixel for reasonable chip sizes using conventional normalized cross-correlation; this can exceed the geolocation accuracy of the scenes involved, but coverage is sufficient to allow correction strategies based on very slow-moving ice. The advance in radiometry, geolocation, and tracking tools is augmented by an increased rate of acquisition by Landsat 8. This helps mitigate the issue of cloud cover, as much of every 16-day orbit cycle over ice is acquired, maximizing the acquisition of clear-sky scenes. Using the correlation techniques common to IMCORR and later software, modern libraries, and single-CPU hardware, we are able to process full Landsat 8 scene pairs in a few minutes, allowing comprehensive analysis of ~1K available ice sheet image pairs in a few days.
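
    Chip-based feature tracking of the kind described rests on normalized cross-correlation (NCC). The following sketch is a minimal 1-D stand-in for the 2-D chip matching used with satellite scenes: it recovers an integer displacement by sliding a reference chip along a search strip. The signals and sizes are invented for illustration.

```python
def ncc(chip, win):
    """Normalized cross-correlation of two equal-length patches."""
    n = len(chip)
    mc, mw = sum(chip) / n, sum(win) / n
    num = sum((a - mc) * (b - mw) for a, b in zip(chip, win))
    dc = sum((a - mc) ** 2 for a in chip) ** 0.5
    dw = sum((b - mw) ** 2 for b in win) ** 0.5
    return num / (dc * dw) if dc and dw else 0.0

def track(chip, search):
    """Slide the chip across a search strip; return the offset of the NCC peak."""
    best, best_off = -2.0, 0
    for off in range(len(search) - len(chip) + 1):
        r = ncc(chip, search[off:off + len(chip)])
        if r > best:
            best, best_off = r, off
    return best_off

# synthetic example: the chip pattern reappears, scaled and shifted, 7 samples in
chip = [0.0, 1.0, 3.0, 2.0, 0.5]
search = [0.1] * 7 + [c * 1.3 + 0.2 for c in chip] + [0.1] * 5
```

    Because NCC subtracts means and normalizes amplitudes, the match survives the gain and offset changes between acquisitions; sub-pixel accuracy in practice comes from interpolating around the correlation peak.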

  2. A Simple Methodological Approach for Counting and Identifying Culturable Viruses Adsorbed to Cellulose Nitrate Membrane Filters

    PubMed Central

    Papageorgiou, Georgios T.; Mocé-Llivina, Laura; Christodoulou, Christina G.; Lucena, Francisco; Akkelidou, Dina; Ioannou, Eleni; Jofre, Juan

    2000-01-01

    We identified conditions under which Buffalo green monkey cells grew on the surfaces of cellulose nitrate membrane filters in such a way that they covered the entire surface of each filter and penetrated through the pores. When such conditions were used, poliovirus that had previously been adsorbed on the membranes infected the cells and replicated. A plaque assay method and a quantal method (most probable number of cytopathic units) were used to detect and count the viruses adsorbed on the membrane filters. Polioviruses in aqueous suspensions were then concentrated by adsorption to cellulose membrane filters and were subsequently counted without elution, a step which is necessary when the commonly used methods are employed. The pore size of the membrane filter, the sample contents, and the sample volume were optimized for tap water, seawater, and a 0.25 M glycine buffer solution. The numbers of viruses recovered under the optimized conditions were more than 50% greater than the numbers counted by the standard plaque assay. When ceftazidime was added to the assay medium in addition to the antibiotics which are typically used, the method could be used to study natural samples with low and intermediate levels of microbial pollution without decontamination of the samples. This methodological approach also allowed plaque hybridization either directly on cellulose nitrate membranes or on Hybond N+ membranes after the preparations were transferred. PMID:10618223

  3. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
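
    For intuition about why Newton-direction updates converge faster than steepest descent, a plain Gauss-Newton iteration (not the patented adaptive modified variant) can be sketched on a toy two-parameter model. The model y = a·tanh(b·u), the data, and the starting guesses below are all hypothetical.

```python
import math

def gauss_newton(us, ys, a, b, iters=50):
    """Fit y = a*tanh(b*u) by Gauss-Newton: solve (J^T J) d = J^T r each step."""
    for _ in range(iters):
        r = [y - a * math.tanh(b * u) for u, y in zip(us, ys)]   # residuals
        ja = [math.tanh(b * u) for u in us]                      # d(model)/da
        jb = [a * u / math.cosh(b * u) ** 2 for u in us]         # d(model)/db
        # assemble and solve the 2x2 normal equations directly
        aa = sum(v * v for v in ja)
        ab = sum(v * w for v, w in zip(ja, jb))
        bb = sum(v * v for v in jb)
        ga = sum(v * w for v, w in zip(ja, r))
        gb = sum(v * w for v, w in zip(jb, r))
        det = aa * bb - ab * ab
        if abs(det) < 1e-12:
            break
        a += (bb * ga - ab * gb) / det
        b += (aa * gb - ab * ga) / det
    return a, b

us = [i / 10 - 1 for i in range(21)]
ys = [2.0 * math.tanh(1.5 * u) for u in us]      # noiseless data: a=2, b=1.5
a, b = gauss_newton(us, ys, 1.0, 1.0)
```

    The J^T J curvature information is what tilts each step toward the Newton direction; the patented technique additionally adapts the learning rate, which this sketch omits.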

  4. Generating an optimal DTM from airborne laser scanning data for landslide mapping in a tropical forest environment

    NASA Astrophysics Data System (ADS)

    Razak, Khamarrul Azahari; Santangelo, Michele; Van Westen, Cees J.; Straatsma, Menno W.; de Jong, Steven M.

    2013-05-01

    Landslide inventory maps are fundamental for assessing landslide susceptibility, hazard, and risk. In tropical mountainous environments, mapping landslides is difficult as rapid and dense vegetation growth obscures landslides soon after their occurrence. Airborne laser scanning (ALS) data have been used to construct the digital terrain model (DTM) under dense vegetation, but its reliability for landslide recognition in the tropics remains largely unknown. This study evaluates the suitability of ALS for generating an optimal DTM for mapping landslides in the Cameron Highlands, Malaysia. For the bare-earth extraction, we used a hierarchical robust filtering algorithm and a parameterization with three sequential filtering steps. After each filtering step, four interpolation techniques were applied, namely: (i) the linear prediction derived from SCOP++ (SCP), (ii) inverse distance weighting (IDW), (iii) the natural neighbor (NEN) and (iv) topo-to-raster (T2R). We assessed the quality of 12 DTMs in two ways: (1) with respect to 448 field-measured terrain heights and (2) based on the interpretability of landslides. The lowest root-mean-square error (RMSE) was 0.89 m across the landscape, obtained using three filtering steps and linear prediction as the interpolation method. However, we found that a less stringent DTM filtering unveiled more diagnostic micro-morphological features, but also retained some of the vegetation. Hence, a combination of filtering steps is required for optimal landslide interpretation, especially in forested mountainous areas. IDW was favored as the interpolation technique because it offered more reasonable computation times than T2R and NEN without adding artifacts to the DTM; the latter two performed relatively well in the first and second filtering steps, respectively. The laser point density and the resulting ground point density after filtering are key parameters for producing a DTM applicable to landslide identification. The results showed that the ALS-derived DTMs allowed mapping and classifying landslides beneath equatorial mountainous forests, leading to a better understanding of hazardous geomorphic problems in tropical regions.
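
    Of the interpolation techniques compared, IDW is the simplest to illustrate. The sketch below is a generic inverse-distance-weighting interpolator, not the implementation used in the study; the sample ground points are invented.

```python
def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted height at (x, y) from (px, py, pz) samples."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return pz                   # query lies exactly on a ground point
        w = 1.0 / d2 ** (power / 2.0)   # weight falls off as 1/d^power
        num += w * pz
        den += w
    return num / den

# four invented bare-earth points on a unit square
pts = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (1, 1, 16.0)]
```

    IDW's appeal in this context is exactly what the study reports: it is cheap and cannot overshoot the data, since every estimate is a convex combination of neighboring ground points.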

  5. SU-E-I-62: Assessing Radiation Dose Reduction and CT Image Optimization Through the Measurement and Analysis of the Detector Quantum Efficiency (DQE) of CT Images Using Different Beam Hardening Filters

    SciTech Connect

    Collier, J; Aldoohan, S; Gill, K

    2014-06-01

    Purpose: Reducing patient dose while maintaining (or even improving) image quality is one of the foremost goals in CT imaging. To this end, we consider the feasibility of optimizing CT scan protocols in conjunction with the application of different beam-hardening filtrations and assess this augmentation through noise-power spectrum (NPS) and detector quantum efficiency (DQE) analysis. Methods: American College of Radiology (ACR) and Catphan phantoms (The Phantom Laboratory) were scanned with a 64-slice CT scanner after additional filtration of various thicknesses and compositions (e.g., copper, nickel, tantalum, titanium, and tungsten) had been applied. A MATLAB-based code was employed to calculate the NPS of the images. The Catphan Image Owl software suite was then used to compute the modulation transfer function (MTF) responses of the scanner. The DQE for each additional filter, including the inherent filtration, was then computed from these values. Finally, CT dose index (CTDIvol) values were obtained for each applied filtration through the use of a 100 mm pencil ionization chamber and CT dose phantom. Results: NPS, MTF, and DQE values were computed for each applied filtration and compared to the reference case of inherent beam-hardening filtration only. Results showed that the NPS values were reduced between 5 and 12% compared to the inherent filtration case. Additionally, CTDIvol values were reduced between 15 and 27% depending on the composition of filtration applied. However, no noticeable changes in image contrast-to-noise ratios were noted. Conclusion: The reduction in the quanta noise section of the NPS profile found in this phantom-based study is encouraging. The reduction in both noise and dose through the application of beam-hardening filters is reflected in our phantom image quality. However, further investigation is needed to ascertain the applicability of this approach to reducing patient dose while maintaining diagnostically acceptable image qualities in a clinical setting.
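
    A noise-power spectrum of the kind computed by the MATLAB code can be illustrated in one dimension: mean-subtract a noise trace, take its DFT, and scale the squared magnitudes. This is a generic single-realization estimate, not the authors' code; the normalization convention and sample values are assumptions.

```python
import cmath

def nps_1d(samples, pitch=1.0):
    """Single-realization 1-D noise-power spectrum estimate:
    NPS_k = (pitch / N) * |DFT(x - mean)|^2."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]     # detrend: remove the DC component
    spec = []
    for k in range(n):
        xk = sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
        spec.append(pitch / n * abs(xk) ** 2)
    return spec

noise = [0.3, -0.1, 0.7, -0.5, 0.2, -0.6, 0.4, -0.4]   # invented noise trace
spec = nps_1d(noise)
```

    With this normalization, Parseval's theorem makes the spectrum integrate to the trace's total variance, which is the sanity check typically applied to an NPS implementation.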

  6. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg{sup 2} IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg{sup 2} on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg{sup 2} of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
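
    Depth-optimized co-addition of the sort described reduces, at its core, to an inverse-variance weighted mean of registered frames. The sketch below is a minimal stand-in (real pipelines also handle registration, PSF matching, and outlier rejection); the frame values and variances are invented.

```python
def coadd(frames, variances):
    """Inverse-variance weighted co-add of registered frames (pixel lists)."""
    ws = [1.0 / v for v in variances]
    wsum = sum(ws)
    npix = len(frames[0])
    return [sum(w * f[i] for w, f in zip(ws, frames)) / wsum for i in range(npix)]

# three invented 2-pixel frames; larger variance = poorer seeing / brighter sky
frames = [[10.0, 20.0], [10.4, 19.6], [9.8, 20.2]]
variances = [1.0, 4.0, 2.0]
stack = coadd(frames, variances)
# the co-added pixel variance is 1 / sum(1/var), lower than any single frame's
```

    Weighting by inverse variance is what makes the stack deeper than any input frame: good-seeing, dark-sky epochs dominate while poor frames still contribute a little.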

  7. ADVANCED HOT GAS FILTER DEVELOPMENT

    SciTech Connect

    E.S. Connolly; G.D. Forsythe

    2000-09-30

    DuPont Lanxide Composites, Inc. undertook a sixty-month program, under DOE Contract DEAC21-94MC31214, in order to develop hot gas candle filters from a patented material technology known as PRD-66. The goal of this program was to extend the development of this material as a filter element and fully assess the capability of this technology to meet the needs of Pressurized Fluidized Bed Combustion (PFBC) and Integrated Gasification Combined Cycle (IGCC) power generation systems at commercial scale. The principal objective of Task 3 was to build on the initial PRD-66 filter development, optimize its structure, and evaluate basic material properties relevant to the hot gas filter application. Initially, this consisted of an evaluation of an advanced filament-wound core structure that had been designed to produce an effective bulk filter underneath the barrier filter formed by the outer membrane. The basic material properties to be evaluated (as established by the DOE/METC materials working group) would include mechanical, thermal, and fracture toughness parameters for both new and used material, for the purpose of building a material database consistent with what is being done for the alternative candle filter systems. Task 3 was later expanded to include analysis of PRD-66 candle filters, which had been exposed to actual PFBC conditions, development of an improved membrane, and installation of equipment necessary for the processing of a modified composition. Task 4 would address essential technical issues involving the scale-up of PRD-66 candle filter manufacturing from prototype production to commercial scale manufacturing. The focus would be on capacity (as it affects the ability to deliver commercial order quantities), process specification (as it affects yields, quality, and costs), and manufacturing systems (e.g. QA/QC, materials handling, parts flow, and cost data acquisition). Any filters fabricated during this task would be used for product qualification tests being conducted by Westinghouse at Foster-Wheeler's Pressurized Circulating Fluidized Bed (PCFBC) test facility in Karhula, Finland. Task 5 was designed to demonstrate the improvements implemented in Task 4 by fabricating fifty 1.5-meter hot gas filters. These filters were to be made available for DOE-sponsored field trials at the Power Systems Development Facility (PSDF), operated by Southern Company Services in Wilsonville, Alabama.

  8. Filter apparatus

    SciTech Connect

    Zahedi, K.; Alexander, J. C.; Zieve, P. B.

    1985-03-19

    Electrified filter bed apparatus includes inner and outer cylindrical bed-retaining structures for confining a granular bed therebetween. The inner cylindrical structure may comprise a cage of superposed frusto-conical louvers and the outer structure may comprise a similar cage or a perforated cylindrical, liquid-drainage sheet. A cylindrical bed electrode for electrically charging the bed granules is suspended between the retaining structures. The tubular bed surrounds an internal gas passage from which polluted gas flows through the bed from the inside out. Gas enters the internal passage from above through an ionizer section of the apparatus. The ionizer section may include a disc-type ionizer assembly in an ionizer tube. The tube may form an extension of the inner louver cage. A corona discharge may be formed between the disc and the ionizer tube by providing electric current to the discs, whereby the corona discharge electrically charges particulate material within the gas stream. The discs may carry radially protruding needles defining circumferential corona discharge points. A blowdown system may be provided for cleaning the ionizer discs and the tube wall in the region of the discs. The apparatus may include means for avoiding blowout of bed granules from between the outer louvers, and a system for washing pollutant-coated bed granules.

  9. SAR image speckle suppression based on stack filters

    NASA Astrophysics Data System (ADS)

    Bai, Zhengyao; He, Peikun

    2003-09-01

    Speckle suppression is an important step in synthetic aperture radar (SAR) image processing. This paper addresses the problem of reducing speckle in SAR images by employing stack filters. Median filters have been used successfully to reduce non-Gaussian and impulse noise. Stack filters are a very large class of nonlinear filters that possess the threshold decomposition and stacking properties. In this paper, stacking median filters are introduced. An algorithm for SAR speckle suppression based on stacking median filters is proposed. Threshold selection is also discussed. Experiments are performed on SAR images at different thresholds to show the proposed algorithm's effectiveness.

  10. Active filter application guide. Final report

    SciTech Connect

    Not Available

    1998-01-01

    Nonlinear loads interacting with a utility can cause harmonic currents and voltages. Nonlinear loads include arcing loads, power converters that use switching devices, and saturable transformers and reactors. When reactive loads interact with harmonic sources, the results can be harmonic distortion, malfunction of harmonic sensitive equipment, and capacitor overload. To solve these harmonic disturbances, most passive harmonic filters must be custom designed to operate with site-specific conditions. Active filters, on the other hand, offer the potential of a single black-box solution that is relatively independent of system parameters. Power quality problems attributable to harmonic voltages and currents are increasing. Traditionally, passive harmonic filters have been used to solve these problems. A more recent approach for harmonic compensation uses active filters. The Active Filter Application Guide covers fundamentals of harmonics, discusses harmonic producing loads, presents harmonic filtering principles (both active and passive), and provides a step-by-step application guide for analyzing and specifying an active filter. Also included in the Guide are two active-filter case studies. Each demonstrates how the application guide can be used to select and specify solutions for both single harmonic load and multiple harmonic producing loads at a clustered site.
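
    The core of a shunt active filter is generating a compensation reference equal to the load current minus its fundamental component. A minimal sketch, assuming one exact period of samples and an ideal extraction, pulls out the fundamental by Fourier correlation and subtracts it; the waveform is synthetic.

```python
import math

def harmonic_reference(load, n):
    """Ideal shunt-active-filter reference: the load current minus its
    fundamental, extracted by correlating with one cycle of cos and sin."""
    a1 = 2.0 / n * sum(load[k] * math.cos(2 * math.pi * k / n) for k in range(n))
    b1 = 2.0 / n * sum(load[k] * math.sin(2 * math.pi * k / n) for k in range(n))
    fund = [a1 * math.cos(2 * math.pi * k / n) + b1 * math.sin(2 * math.pi * k / n)
            for k in range(n)]
    return [l - f for l, f in zip(load, fund)]

n = 64
# synthetic load current: fundamental plus a 20% fifth harmonic
load = [math.sin(2 * math.pi * k / n) + 0.2 * math.sin(2 * math.pi * 5 * k / n)
        for k in range(n)]
ref = harmonic_reference(load, n)   # should isolate the fifth harmonic
```

    Injecting the negative of this reference cancels the harmonic content at the point of common coupling, which is why the active filter's behavior is largely independent of system impedance, unlike a tuned passive filter.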

  11. Step Detection in Single-Molecule Real Time Trajectories Embedded in Correlated Noise

    PubMed Central

    Arunajadai, Srikesh G.; Cheng, Wei

    2013-01-01

    Single-molecule real time trajectories are embedded in high noise. To extract kinetic or dynamic information of the molecules from these trajectories often requires idealization of the data in steps and dwells. One major premise behind the existing single-molecule data analysis algorithms is the Gaussian white noise, which displays no correlation in time and whose amplitude is independent of data sampling frequency. This so-called white noise is widely assumed but its validity has not been critically evaluated. We show that correlated noise exists in single-molecule real time trajectories collected from optical tweezers. The assumption of white noise during analysis of these data can lead to serious over- or underestimation of the number of steps depending on the algorithms employed. We present a statistical method that quantitatively evaluates the structure of the underlying noise, takes the noise structure into account, and identifies steps and dwells in a single-molecule trajectory. Unlike existing data analysis algorithms, this method uses Generalized Least Squares (GLS) to detect steps and dwells. Under the GLS framework, the optimal number of steps is chosen using model selection criteria such as Bayesian Information Criterion (BIC). Comparison with existing step detection algorithms showed that this GLS method can detect step locations with highest accuracy in the presence of correlated noise. Because this method is automated, and directly works with high bandwidth data without pre-filtering or assumption of Gaussian noise, it may be broadly useful for analysis of single-molecule real time trajectories. PMID:23533612
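
    Model selection by BIC for step detection can be sketched in miniature. The code below assumes white noise and ordinary least squares (the paper's point is precisely that a GLS formulation is needed when the noise is correlated), and considers at most one change point; the data are synthetic.

```python
import math

def detect_step(y):
    """Pick between a flat model and a one-step model by BIC (white noise,
    ordinary least squares); returns the change-point index, or None."""
    n = len(y)

    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    def bic(r, k):                       # k = number of fitted means
        return n * math.log(max(r, 1e-12) / n) + k * math.log(n)

    best_bic, best_t = bic(rss(y), 1), None      # flat (no-step) model
    for t in range(1, n):                        # every candidate change point
        b = bic(rss(y[:t]) + rss(y[t:]), 2)
        if b < best_bic:
            best_bic, best_t = b, t
    return best_t

# synthetic dwell-step-dwell trace with a step at index 5
y = [0.1, -0.2, 0.05, 0.15, -0.1, 2.1, 1.9, 2.05, 1.95, 2.2]
```

    The BIC penalty term k·log(n) is what stops the fit from hallucinating steps in pure noise; the GLS variant replaces each residual sum of squares with a form weighted by the estimated noise covariance.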

  12. Adaptive particle filtering

    NASA Astrophysics Data System (ADS)

    Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús

    2006-05-01

    Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.
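
    The particle filter update and resampling steps mentioned above can be sketched for a 1-D random-walk state (without the MHT coupling described in the paper); the noise levels, particle count, and observations are invented.

```python
import math
import random

def particle_filter(observations, n=500, q=0.5, r=0.5, seed=1):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise: predict, weight by likelihood, resample."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # predict: push particles through the random-walk motion model
        parts = [p + rng.gauss(0.0, q) for p in parts]
        # update: weight each particle by the Gaussian measurement likelihood
        ws = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in parts]
        total = sum(ws) or 1.0
        ws = [w / total for w in ws]
        estimates.append(sum(w * p for w, p in zip(ws, parts)))
        # resample: draw a fresh particle set proportional to the weights
        parts = rng.choices(parts, weights=ws, k=n)
    return estimates

obs = [0.2, 0.9, 1.4, 2.1, 2.4, 3.1, 3.5, 4.2]   # invented noisy upward track
est = particle_filter(obs)
```

    ALPS's contribution sits in the update and resampling steps, where tree-based MHT hypotheses are folded in; the skeleton above is the generic bootstrap filter those steps extend.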

  13. Polyphase filtering on the TriMedia core

    NASA Astrophysics Data System (ADS)

    Beemster, M.; van Inge, A.; Sijstermans, Frans

    1997-01-01

    Lately, VLIW architectures have become popular because of their good cost-performance ratio for e.g. multimedia applications. Multimedia applications are characterized by regular signal processing and, therefore, they are apt for analysis by compilers. VLIW architectures exploit this by scheduling the instruction stream at compile time and, thus, reducing the complexity and costs of instruction issue hardware. However, sometimes we encounter signal processing algorithms that we would like to be regular and predictable but that are so only to a certain extent. Polyphase filtering is one such algorithm. It contains a regular filter part, but its input and output streams run at rates that are not correlated to each other in a simple way. Compile time analysis is, therefore, only partly possible, which poses an inherent problem for VLIW architectures. In this paper, we describe the steps that we went through to optimize the polyphase filter for a specific instance of a VLIW architecture: the Philips TriMedia processor. We show which architectural features help to make the TriMedia processor more efficient for such irregular algorithms.
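
    The regular part of polyphase filtering can be illustrated with decimation: splitting the taps into sub-filters that run at the output rate gives the same result as filtering at the input rate and then downsampling. The sketch below verifies that equivalence on a toy signal; it is a generic illustration, not TriMedia-specific code.

```python
def fir(x, h):
    """Reference full convolution of signal x with taps h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def polyphase_decimate(x, h, m):
    """Decimate-by-m the polyphase way: sub-filter h[k::m] runs at the low
    output rate on its own branch of the input; the branches are summed."""
    ylen = (len(x) + len(h) - 1 + m - 1) // m
    d = [0.0] * ylen
    for k in range(m):
        hk = h[k::m]
        for i in range(ylen):
            acc = 0.0
            for r, hr in enumerate(hk):
                idx = (i - r) * m - k        # high-rate sample feeding d[i]
                if 0 <= idx < len(x):
                    acc += hr * x[idx]
            d[i] += acc
    return d

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
h = [0.5, 0.3, 0.15, 0.05]
direct = fir(x, h)[::2]            # filter at full rate, keep every 2nd sample
poly = polyphase_decimate(x, h, 2) # same numbers, roughly half the multiplies
```

    The filter arithmetic is perfectly regular and schedulable at compile time; the scheduling difficulty the paper describes comes from the input and output streams advancing at uncorrelated rates around this kernel.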

  14. Recursive Implementations of the Schmidt-Kalman `Consider' Filter

    NASA Astrophysics Data System (ADS)

    Zanetti, Renato; D'Souza, Christopher

    2015-11-01

    One method to account for parameters errors in the Kalman filter is to `consider' their effect in the so-called Schmidt-Kalman filter. This paper addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU Schmidt-Kalman filter is proposed. The non-optimality of the recursive Schmidt-Kalman filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
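
    The consider update itself (independent of the UDU factorization discussed in the paper) can be shown in scalar form. The sketch below assumes a measurement z = x + p + v where the bias p is 'considered' but never estimated; these are the standard Schmidt-Kalman measurement-update equations specialized to scalars, and all numbers are illustrative.

```python
def consider_update(xhat, Pxx, Pxp, Ppp, z, R):
    """Scalar Schmidt-Kalman ('consider') update for z = x + p + v, var(v)=R.
    The bias p is never estimated; its covariance inflates the innovation."""
    S = Pxx + 2.0 * Pxp + Ppp + R          # innovation variance
    K = (Pxx + Pxp) / S                    # consider gain
    xhat = xhat + K * (z - xhat)           # p's estimate stays at zero
    Pxx = (1.0 - K) * Pxx - K * Pxp        # state variance shrinks
    Pxp = (1.0 - K) * Pxp - K * Ppp        # cross-covariance absorbs the bias
    return xhat, Pxx, Pxp, Ppp             # Ppp is untouched by design

x, Pxx, Pxp, Ppp = 0.0, 4.0, 0.0, 1.0      # illustrative priors
x, Pxx, Pxp, Ppp = consider_update(x, Pxx, Pxp, Ppp, z=1.2, R=0.25)
```

    Because Ppp appears in S but is never reduced, the filter is deliberately sub-optimal: it stays conservative against the unestimated parameter error, which is the non-optimality the paper analyzes.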

  15. An online novel adaptive filter for denoising time series measurements.

    PubMed

    Willis, Andrew J

    2006-04-01

    A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562

  16. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
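
    The threshold-decomposition route to median filtering described in the patent can be sketched digitally: binarize the signal at every level, filter each binary slice with a linear count followed by a point threshold comparison, and sum ('stack') the slices back up. This software sketch stands in for the optical correlator; the signal is invented.

```python
def stack_median(x, w, levels):
    """Window-w median of an integer signal (values 0..levels) via threshold
    decomposition: each binary slice gets a linear count plus a point
    threshold, and the slices are summed ('stacked') back together."""
    half = w // 2
    out = []
    for i in range(half, len(x) - half):
        win = x[i - half:i + half + 1]
        total = 0
        for t in range(1, levels + 1):
            ones = sum(1 for v in win if v >= t)   # binary slice at level t
            total += 1 if ones > w // 2 else 0     # binary median of the slice
        out.append(total)
    return out

x = [3, 3, 9, 2, 3, 4, 0, 4, 4]       # invented 1-D test signal
result = stack_median(x, 3, 9)        # equals a direct 3-point median filter
```

    Replacing the comparison `ones > w // 2` with other rank thresholds yields the minimum or maximum filter instead, mirroring how the patent reuses one architecture for all ranked-order operations.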

  17. Optical ranked-order filtering using threshold decomposition

    SciTech Connect

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1990-08-14

    This patent describes a hybrid optical/electronic system. It performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.

  18. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P. (West Lafayette, IN); Ochoa, Ellen (Pleasanton, CA); Sweeney, Donald W. (Alamo, CA)

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.

  19. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is unfeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing new and optimized device designs. Results for nanodevices are presented in a companion paper at this workshop.
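
    A genetic algorithm of the general kind used for the filter design can be sketched with a toy objective. The operators below (tournament selection, one-point crossover, bit-flip mutation) and the bit-pattern target are generic illustrations, not the authors' codes, which evaluate full electromagnetic device simulations instead.

```python
import random

def genetic_search(fitness, nbits, pop_size=30, gens=60, pmut=0.05, seed=7):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. 'fitness' maps a bit list to a score to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(nbits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, nbits)
            child = p1[:cut] + p2[cut:]                            # crossover
            child = [bit ^ (rng.random() < pmut) for bit in child] # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy objective standing in for a filter-response match: count matching bits
target = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
best = genetic_search(lambda b: sum(u == v for u, v in zip(b, target)), len(target))
```

    Because each fitness evaluation is independent, the population evaluates trivially in parallel, which is what makes the massively parallel implementations described above effective.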

  20. The effect of spectral filters on reading speed and accuracy following stroke

    PubMed Central

    Beasley, Ian G.; Davies, Leon N.

    2013-01-01

    Purpose The aim of the study was to determine the effect of optimal spectral filters on reading performance following stroke. Methods Seventeen stroke subjects, aged 43-85, were considered with an age-matched control group (n=17). Subjects undertook the Wilkins Rate of Reading Test on three occasions: (i) using an optimally selected spectral filter; (ii) after being randomly assigned to two groups: Group 1 used an optimal filter, whereas Group 2 used a grey filter, for two weeks. The grey filter had similar photopic reflectance to the optimal filters, intended as a surrogate for a placebo; (iii) the groups were then crossed over, with Group 1 using a grey filter and Group 2 given an optimal filter for two weeks, before undertaking the task once more. An increase in reading speed of >5% was considered clinically relevant. Results Initial use of a spectral filter in the stroke cohort increased reading speed by ≈8%, almost halving error scores; these findings were not replicated in controls. Prolonged use of an optimal spectral filter increased reading speed by >9% for stroke subjects; errors more than halved. When the same subjects switched to using a grey filter, reading speed reduced by ≈4%. A second group of stroke subjects used a grey filter first; reading speed decreased by ≈3% but increased by ≈4% with an optimal filter, with error scores almost halving. Conclusions The present study has shown that spectral filters can immediately improve reading speed and accuracy following stroke, whereas prolonged use does not increase these benefits significantly.

  1. Texture classification using autoregressive filtering

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.; Lee, M.

    1984-01-01

    A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second order statistics to discriminate between texture classes represented by arbitrary wide sense stationary random fields is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.
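
    The half-plane (causal) autoregressive idea can be illustrated with a least-squares sketch: each pixel is predicted from its already-scanned neighbours, and the normalized prediction-error variance acts as a simple second-order texture discriminant. The four-neighbour support and the two synthetic test textures below are illustrative assumptions, not the paper's optimal procedure.

```python
import numpy as np

def halfplane_ar_error(img):
    """Fit a causal AR predictor (support: W, NW, N, NE) by least squares
    and return the relative prediction-error variance as a texture feature."""
    rows, cols = img.shape
    X, y = [], []
    for i in range(1, rows):
        for j in range(1, cols - 1):
            X.append([img[i, j - 1], img[i - 1, j - 1],
                      img[i - 1, j], img[i - 1, j + 1]])
            y.append(img[i, j])
    X, y = np.asarray(X), np.asarray(y)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return resid.var() / y.var()     # small => texture is predictable

rng = np.random.default_rng(0)
correlated = np.cumsum(rng.normal(size=(32, 32)), axis=1)  # predictable
white = rng.normal(size=(32, 32))                          # unpredictable
e_corr, e_white = halfplane_ar_error(correlated), halfplane_ar_error(white)
```

    A spatially correlated texture yields a much smaller relative prediction error than white noise, which is the statistic a classifier would threshold on.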

  2. The J-PAS filter system

    NASA Astrophysics Data System (ADS)

    Marin-Franch, Antonio; Taylor, Keith; Cenarro, Javier; Cristobal-Hornillos, David; Moles, Mariano

    2015-08-01

    J-PAS (Javalambre-PAU Astrophysical Survey) is a Spanish-Brazilian collaboration to conduct a narrow-band photometric survey of 8500 square degrees of northern sky using an innovative filter system of 59 filters: 56 relatively narrow-band (FWHM = 14.5 nm) filters continuously populating the spectrum between 350 and 1000 nm in 10 nm steps, plus 3 broad-band filters. This filter system will be able to produce photometric redshifts with a precision of 0.003(1 + z) for Luminous Red Galaxies, allowing J-PAS to measure the radial scale of the Baryonic Acoustic Oscillations. The J-PAS survey will be carried out using JPCam, a 14-CCD mosaic camera using the new e2v 9k-by-9k, 10 μm pixel CCDs mounted on the JST/T250, a dedicated 2.55 m wide-field telescope at the Observatorio Astrofísico de Javalambre (OAJ) near Teruel, Spain. The filters will operate in a fast (f/3.6) converging beam. The requirements for average transmissions greater than 85% in the passband, <10^-5 blocking from 250 to 1050 nm, steep bandpass edges and high image quality impose significant challenges for the production of the J-PAS filters that have demanded the development of new design solutions. This talk presents the J-PAS filter system and describes the most challenging requirements and adopted design strategies. Measurements and tests of the first manufactured filters are also presented.
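
    The narrow-band grid described above is easy to lay out numerically: 14.5 nm FWHM passbands on a 10 nm pitch. Note that the abstract quotes 56 narrow-band filters while a uniform 350-1000 nm grid contains 66 centres, so the survey's actual grid must trim or offset the ends; this listing is purely illustrative.

```python
# Illustrative passband grid: 10 nm pitch across 350-1000 nm,
# each band 14.5 nm wide (FWHM), as quoted in the abstract above.
fwhm_nm = 14.5
centres_nm = list(range(350, 1001, 10))
passbands = [(c - fwhm_nm / 2.0, c + fwhm_nm / 2.0) for c in centres_nm]
```

    Because the FWHM (14.5 nm) exceeds the pitch (10 nm), adjacent passbands overlap, which is what lets the system sample the spectrum continuously.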

  3. Recirculating electric air filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  4. HEPA filter dissolution process

    DOEpatents

    Brewer, K.N.; Murphy, J.A.

    1994-02-22

    A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.

  5. Hepa filter dissolution process

    DOEpatents

    Brewer, Ken N. (Arco, ID); Murphy, James A. (Idaho Falls, ID)

    1994-01-01

    A process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

  6. HEPA filter dissolution process

    SciTech Connect

    Brewer, K.N.; Murphy, J.A.

    1992-12-31

    This invention is comprised of a process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

  7. Recirculating electric air filter

    DOEpatents

    Bergman, Werner (Pleasanton, CA)

    1986-01-01

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  8. Metal-dielectric metameric filters for optically variable devices

    NASA Astrophysics Data System (ADS)

    Xiao, Lixiang; Chen, Nan; Deng, Zihao; Wang, Xiaozhong; Guo, Rong; Bu, Yikun

    2016-01-01

    A pair of metal-dielectric metameric filters that could create a hidden image was presented for the first time. The structure of the filters is simple: only six layers for filter A and five for filter B. The prototype filters were designed by using the film color target optimization method, and the design results show that, at normal observation angle, the reflected colors of the pair of filters are both green and the color difference index between them is only 0.9017. At an observation angle of 60°, filter A is violet and filter B is blue. The filters were fabricated by a remote plasma sputtering process and the experimental results were in accordance with the designs.

  9. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on minibatch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.

  10. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    PubMed

    Krivá, Z.; Mikula, K.; Peyriéras, N.; Rizzi, B.; Sarti, A.; Stašová, O.

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them and their application for filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with a goal to identify the optimal filtering method and its parameters. In the large scale applications dealing with analysis of 3D+time embryogenesis images, an important objective is a correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. The filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. finite volume method in space and semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the mean Hausdorff distance between a gold standard and different isosurfaces of original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original and filtered data is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of the edge preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons concern the ability to split very close objects that are artificially connected due to acquisition errors intrinsically linked to the physics of LSM. In all studied aspects, it turned out that the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) has the best performance. PMID:20457535
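
    The edge-stopping behaviour that makes Perona-Malik diffusion suitable for this task can be demonstrated with a short explicit-scheme sketch (the abstract's production schemes are semi-implicit finite-volume ones; the explicit 4-neighbour update, the kappa value, and the test image below are illustrative assumptions).

```python
import numpy as np

def g(d, kappa):
    # Perona-Malik edge-stopping function: ~1 on flat regions,
    # ~0 across strong gradients, so edges are preserved.
    return 1.0 / (1.0 + (d / kappa) ** 2)

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Explicit scheme for Perona-Malik diffusion (sketch; dt <= 0.25
    keeps the 4-neighbour update stable)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u    # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u   # (np.roll gives periodic borders,
        de = np.roll(u, -1, axis=1) - u   # adequate for a demo)
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn, kappa) * dn + g(ds, kappa) * ds
                      + g(de, kappa) * de + g(dw, kappa) * dw)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                        # sharp vertical edge
noisy = clean + 0.05 * rng.normal(size=clean.shape)
filtered = perona_malik(noisy)
```

    After filtering, the flat regions are visibly smoother while the step edge retains nearly its full contrast, which is exactly the property the nuclei-segmentation pipeline relies on.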

  11. A Filtering Method For Gravitationally Stratified Flows

    SciTech Connect

    Gatti-Bono, Caroline; Colella, Phillip

    2005-04-25

    Gravity waves arise in gravitationally stratified compressible flows at low Mach and Froude numbers. These waves can have a negligible influence on the overall dynamics of the fluid but, for numerical methods where the acoustic waves are treated implicitly, they impose a significant restriction on the time step. A way to alleviate this restriction is to filter out the modes corresponding to the fastest gravity waves so that a larger time step can be used. This paper presents a filtering strategy of the fully compressible equations based on normal mode analysis that is used throughout the simulation to compute the fast dynamics and that is able to damp only fast gravity modes.

  12. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  13. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.

  14. One Step Forward, Half a Step Backward?

    ERIC Educational Resources Information Center

    Russo, Charles J.

    2004-01-01

    More than thirty cases involving desegregation of public school systems, handed down by the U.S. Supreme Court in the first 25 years after Brown v. Board of Education of Topeka, Kansas, are discussed. However, the last 25 years have resulted in a situation of the nation taking one step forward and half a step backward, due to the conditions

  15. Effects of electron beam irradiation of cellulose acetate cigarette filters

    NASA Astrophysics Data System (ADS)

    Czayka, M.; Fisch, M.

    2012-07-01

    A method to reduce the molecular weight of cellulose acetate used in cigarette filters by using electron beam irradiation is demonstrated. Radiation levels easily obtained with commercially available electron accelerators result in a decrease in average molecular weight of about six times, with no embrittlement or significant change in the elastic behavior of the filter. Since a first step in the biodegradation of cigarette filters is reduction of the filter material's molecular weight, this invention has the potential to allow the production of significantly faster degrading filters.

  16. ARRANGEMENT FOR REPLACING FILTERS

    DOEpatents

    Blomgren, R.A.; Bohlin, N.J.C.

    1957-08-27

    An improved filtered air exhaust system which may be continually operated during the replacement of the filters without the escape of unfiltered air is described. This is accomplished by hermetically sealing the box-like filter containers in a rectangular tunnel with neoprene-covered sponge rubber sealing rings coated with a silicone impregnated pneumatic grease. The tunnel through which the filters are pushed is normal to the exhaust air duct. A number of unused filters are in line behind the filters in use, and are moved by a hydraulic ram so that a fresh filter is positioned in the air duct. The used filter is pushed into a waiting receptacle and is suitably disposed of. This device permits a rapid and safe replacement of a radiation contaminated filter without interruption to the normal flow of exhaust air.

  17. Stepping motor controller

    SciTech Connect

    Bourret, S.C.; Swansen, J.E.

    1984-08-07

    A stepping motor is microprocessingly controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
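
    The control logic in this record (and its sibling records below) can be simulated in a few lines: issue the next pulse only after the encoder confirms the preceding step and the delay has expired, and stretch the delay near the target for controlled deceleration. The scheduler below is an illustrative stand-in, not the patented circuit; the delay values and ramp are assumptions.

```python
def run_motor(total_steps, base_delay=1.0, decel_window=5):
    """Simulated pulse scheduler: the next stepping pulse is issued only
    after the shaft encoder confirms the preceding step AND the fixed
    delay has expired; the delay is stretched over the final
    `decel_window` steps to model real-time deceleration control."""
    t = 0.0
    pulse_times = []
    for step in range(total_steps):
        remaining = total_steps - step
        # lengthen the delay as the target approaches (deceleration ramp)
        delay = base_delay * (1 + max(0, decel_window - remaining))
        t += delay                  # wait out the (adjustable) fixed delay
        encoder_confirmed = True    # stand-in for polling the shaft encoder
        if encoder_confirmed:
            pulse_times.append(t)
    return pulse_times

times = run_motor(10)
intervals = [b - a for a, b in zip(times, times[1:])]
```

    The inter-pulse intervals are constant at cruise speed and grow monotonically over the last few steps, which is the smooth deceleration profile the controller aims for.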

  18. Step-Growth Polymerization.

    ERIC Educational Resources Information Center

    Stille, J. K.

    1981-01-01

    Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)

  19. Stepping motor controller

    DOEpatents

    Bourret, Steven C. (Los Alamos, NM); Swansen, James E. (Los Alamos, NM)

    1984-01-01

    A stepping motor is microprocessingly controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  20. Stepping motor controller

    DOEpatents

    Bourret, S.C.; Swansen, J.E.

    1982-07-02

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  1. Flat microwave photonic filter based on hybrid of two filters

    NASA Astrophysics Data System (ADS)

    Qi, Chunhui; Pei, Li; Ning, Tigang; Li, Jing; Gao, Song

    2010-05-01

    A new microwave photonic filter (MPF), a hybrid of two filters that can realize both multiple taps and a flat bandpass or bandstop response, is presented. Based on the phase character of a Mach-Zehnder modulator (MZM), a two-tap finite impulse response (FIR) filter is obtained as the first part. The second part is obtained by taking full advantage of the wavelength selectivity of the fiber Bragg grating (FBG) and the gain of an erbium-doped fiber (EDF). Combining the two filters, the flat bandpass or bandstop response is realized by changing the coupler's coupling factor k, the reflectivity R1 of FBG1, or the gain g of the EDF. By optimizing the system parameters, a flat bandpass response with an amplitude depth of more than 45 dB is obtained at k = 0.5, R1 = 0.33, g = 10, and a flat bandstop response is also obtained at k = 0.4, R1 = 0.5, g = 2. In addition, the free-spectral range (FSR) can be controlled by changing the length of the EDF and the length difference between the two MZMs. The method is proved feasible by some experiments. Such a method offers realistic solutions to support future radio-frequency (RF) optical communication systems.
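
    The first stage's behaviour follows from textbook FIR theory: a filter with taps [1, k] and differential delay T has magnitude response |H(f)| = |1 + k·exp(-j2πfT)|, with free-spectral range FSR = 1/T. The delay and tap weight below are illustrative assumptions, not the paper's experimental values.

```python
import numpy as np

# Two-tap FIR response sketch: |H(f)| = |1 + k * exp(-j 2 pi f T)|.
T = 1.0e-10                         # 100 ps differential delay -> FSR = 10 GHz
k = 1.0                             # equal tap weights
f = np.linspace(0.0, 30e9, 3001)    # 0-30 GHz sweep, 10 MHz resolution
H = np.abs(1.0 + k * np.exp(-2j * np.pi * f * T))
fsr = 1.0 / T                       # spacing between response peaks
```

    With equal taps the response peaks at |H| = 2 at DC and nulls completely at odd multiples of 1/(2T) (here 5 GHz), and shortening the delay difference widens the FSR, consistent with the tuning knobs described above.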

  2. Robust depth filter sizing for centrate clarification.

    PubMed

    Lutz, Herb; Chefer, Kate; Felo, Michael; Cacace, Benjamin; Hove, Sarah; Wang, Bin; Blanchard, Mark; Oulundsen, George; Piper, Rob; Zhao, Xiaoyang

    2015-11-01

    Cellulosic depth filters embedded with diatomaceous earth are widely used to remove colloidal cell debris from centrate as a secondary clarification step during the harvest of mammalian cell culture fluid. The high cost associated with process failure in a GMP (Good Manufacturing Practice) environment highlights the need for a robust process scale depth filter sizing that allows for (1) stochastic batch-to-batch variations from filter media, bioreactor feed and operation, and (2) systematic scaling differences in average performance between filter sizes and formats. Matched-lot depth filter media tested at the same conditions with consecutive batches of the same molecule were used to assess the sources and magnitudes of process variability. Depth filter sizing safety factors of 1.2-1.6 allow a filtration process to compensate for random batch-to-batch process variations. Matched-lot depth filter media in four different devices tested simultaneously at the same conditions were used with a common feed to assess scaling effects. All filter devices showed <11% capacity difference, and the Pod format devices showed no statistically significant capacity differences. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 31:1542-1550, 2015. PMID:26518411
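
    The sizing logic reduces to a short calculation: derate the measured small-scale capacity by the 1.2-1.6 safety factor quoted above, then round the required area up to whole devices. The batch volume, measured capacity, and device area below are invented numbers for illustration, not data from the study.

```python
import math

def filters_needed(batch_volume_L, capacity_L_per_m2, area_per_device_m2,
                   safety_factor=1.4):
    """Devices required to clarify one batch, with the measured filter
    capacity derated by a safety factor against batch-to-batch variation."""
    usable_capacity = capacity_L_per_m2 / safety_factor   # derated capacity
    area_required = batch_volume_L / usable_capacity      # total media area
    return math.ceil(area_required / area_per_device_m2)  # whole devices

n = filters_needed(batch_volume_L=2000.0, capacity_L_per_m2=100.0,
                   area_per_device_m2=1.1)
```

    Raising the safety factor trades filter cost for robustness: at a factor of 1.0 the same hypothetical batch needs fewer devices, but a single low-capacity lot could then fail the run.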

  3. Rigid porous filter

    DOEpatents

    Chiang, Ta-Kuan (Morgantown, WV); Straub, Douglas L. (Morgantown, WV); Dennis, Richard A. (Morgantown, WV)

    2000-01-01

    The present invention involves a porous rigid filter including a plurality of concentric filtration elements having internal flow passages and forming external flow passages there between. The present invention also involves a pressure vessel containing the filter for the removal of particulates from high pressure particulate containing gases, and further involves a method for using the filter to remove such particulates. The present filter has the advantage of requiring fewer filter elements due to the high surface area-to-volume ratio provided by the filter, requires a reduced pressure vessel size, and exhibits enhanced mechanical design properties, improved cleaning properties, configuration options, modularity and ease of fabrication.

  4. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, Harry S.; Thompson, Robert C.; Hubbard, Charles W.; Perkins, Richard W.

    1997-01-01

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant.

  5. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, H.S.; Thompson, R.C.; Hubbard, C.W.; Perkins, R.W.

    1997-03-25

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant. 5 figs.

  6. Projection filters for modal parameter estimate for flexible structures

    NASA Technical Reports Server (NTRS)

    Huang, Jen-Kuang; Chen, Chung-Wen

    1987-01-01

    Single-mode projection filters are developed for eigensystem parameter estimates from both analytical results and test data. Explicit formulations of these projection filters are derived using the pseudoinverse matrices of the controllability and observability matrices in general use. A global minimum optimization algorithm is developed to update the filter parameters by using an interval analysis method. Modal parameters can be extracted and updated in the global sense within a specific region by passing the experimental data through the projection filters. For illustration of this method, a numerical example is shown by using a one-dimensional global optimization algorithm to estimate modal frequencies and dampings.

  7. Extended active optical lattice filters: filter synthesis.

    PubMed

    Dabkowski, Mieczyslaw; El Nagdi, Amr; Hunt, Louis R; Liu, Ke; Macfarlane, Duncan L; Ramakrishna, Viswanath

    2010-04-01

    In this paper, we study the synthesis of asymptotically stable filters from a unit cell of a two-dimensional tunable lattice filter architecture consisting of four four-port couplers and four waveguides containing semiconductor optical amplifiers. Upper bounds on the number of gains that will produce a filter with a priori prescribed poles, for a specific system, are obtained. We also provide sufficient conditions on the reflection-type coefficients, characterizing each four-port coupler, which ensure that real-valued gains, taking values in [0,1], exist so that the filter is asymptotically stable. Finally, we motivate the notion of a transmission zero of a filter and discuss the possibility of simultaneously placing both poles and transmission zeros for the unit cell. PMID:20360832

  8. Cordierite silicon nitride filters

    SciTech Connect

    Sawyer, J.; Buchan, B. ); Duiven, R.; Berger, M. ); Cleveland, J.; Ferri, J. )

    1992-02-01

    The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased, with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions and reporting the results. In Phase I, Cordierite Silicon Nitride (CSN) was developed and tested for permeability and strength. Target values for each of these parameters were established early in the program. The values were met by the material development effort in Phase I. The crossflow filter design effort proceeded by developing a macroscopic design based on required surface area and estimated stresses. Then the thermal and pressure stresses were estimated using finite element analysis. In Phase II of this program, the filter manufacturing technique was developed, and the manufactured filters were tested. The technique developed involved press-bonding extruded tiles to form a filter, producing a monolithic filter after sintering. Filters manufactured using this technique were tested at Acurex and at the Westinghouse Science and Technology Center. The filters did not delaminate during testing and operated with high collection efficiency and good cleanability. Further development in areas of sintering and filter design is recommended.

  9. Stepped frequency ground penetrating radar

    DOEpatents

    Vadnais, Kenneth G.; Bashforth, Michael B.; Lewallen, Tricia S.; Nammath, Sharyn R.

    1994-01-01

    A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
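
    The processing chain described above (stepped frequencies, mixing to DC, FFT to a range profile) can be sketched numerically: sampling an ideal point target's round-trip phase at N equally spaced frequencies and inverse-FFTing gives a profile whose peak sits at the target range. All parameters below are illustrative, not from the patent.

```python
import numpy as np

c = 3.0e8                               # propagation speed, m/s
N, df = 128, 2.0e6                      # 128 steps of 2 MHz -> B = 256 MHz
f = 1.0e9 + np.arange(N) * df           # stepped carrier frequencies
R_true = 15.0                           # simulated point target at 15 m
tau = 2.0 * R_true / c                  # round-trip delay
samples = np.exp(-2j * np.pi * f * tau) # ideal mixed-down I/Q samples
profile = np.abs(np.fft.ifft(samples))  # frequency domain -> range profile
r_axis = np.arange(N) * c / (2.0 * N * df)   # bin spacing = c / (2B)
R_est = float(r_axis[np.argmax(profile)])
```

    The bandwidth B = N·df sets the range resolution c/(2B) (about 0.59 m here), while the step size df sets the unambiguous range c/(2·df) (75 m here), so the estimate lands within one bin of the true range.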

  10. Bag filters for TPP

    SciTech Connect

    L.V. Chekalov; Yu.I. Gromov; V.V. Chekalov

    2007-05-15

    Cleaning of TPP flue gases with bag filters capable of pulsed regeneration is examined. A new filtering element with a three-dimensional filtering material formed from a needle-broached cloth, in which the filtration area is increased more than twofold compared with a conventional smooth bag, is proposed. The design of a new FRMI type of modular filter is also proposed. A standard series of FRMI filters with filtration areas ranging from 800 to 16,000 m² is designed for an output of more than 1 million m³/h of cleaned gas. The new bag filter permits dry collection of sulfur oxides from waste gases at TPPs operating on high-sulfur coals. The design of the filter makes it possible to replace filter elements without taking the entire unit out of service.

  11. Novel Backup Filter Device for Candle Filters

    SciTech Connect

    Bishop, B.; Goldsmith, R.; Dunham, G.; Henderson, A.

    2002-09-18

    The currently preferred means of particulate removal from process or combustion gas generated by advanced coal-based power production processes is filtration with candle filters. However, candle filters have not shown the requisite reliability to be commercially viable for hot gas cleanup for either integrated gasifier combined cycle (IGCC) or pressurized fluid bed combustion (PFBC) processes. Even a single candle failure can lead to unacceptable ash breakthrough, which can result in (a) damage to highly sensitive and expensive downstream equipment, (b) unacceptably low system on-stream factor, and (c) unplanned outages. The U.S. Department of Energy (DOE) has recognized the need to have fail-safe devices installed within or downstream from candle filters. In addition to CeraMem, DOE has contracted with Siemens-Westinghouse, the Energy & Environmental Research Center (EERC) at the University of North Dakota, and the Southern Research Institute (SRI) to develop novel fail-safe devices. Siemens-Westinghouse is evaluating honeycomb-based filter devices on the clean side of the candle filter that can operate up to 870 °C. The EERC is developing a highly porous ceramic disk with a sticky yet temperature-stable coating that will trap dust in the event of filter failure. SRI is developing the Full-Flow Mechanical Safeguard Device that provides a positive seal for the candle filter. Operation of the SRI device is triggered by the higher-than-normal gas flow from a broken candle. The CeraMem approach is similar to that of Siemens-Westinghouse and involves the development of honeycomb-based filters that operate on the clean side of a candle filter. The overall objective of this project is to fabricate and test silicon carbide-based honeycomb fail-safe filters for protection of downstream equipment in advanced coal conversion processes.
The fail-safe filter, installed directly downstream of a candle filter, should have the capability for stopping essentially all particulate bypassing a broken or leaking candle while having a low enough pressure drop to allow the candle to be backpulse-regenerated. Forward-flow pressure drop should increase by no more than 20% because of incorporation of the fail-safe filter.

  12. 2-Step IMAT and 2-Step IMRT in three dimensions

    SciTech Connect

    Bratengeier, Klaus

    2005-12-15

    In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can adopt arbitrary diameters with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary; the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the root of the integral of the squared difference of the dose in the target volume and a reference dose. For the problem, two local minima exist. Therefore, as a secondary criterion, the magnitudes of hot and cold spots were taken into account. As a result, the solution with a larger segment width was recommended. From plane to plane, for varying radii of PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight for one segment shall be used for all planes (respectively leaf pairs), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of different radii of an OAR along the patient axis. The method is discussed for some classes of tumor/organ at risk combinations. Noncylindrically symmetric cases were treated exemplarily.
The product of width and weight of the additional segment as well as the integral across the segment profile was demonstrated to be an important value. This product was up to a factor of 3 larger than in the 2-D case. Even in three dimensions, the optimized 2-Step IMAT increased the homogeneity of the dose distribution in the PTV profoundly. Rules for adaptation to varying target-OAR combinations were deduced. It can be concluded that 2-Step IMAT and 2-Step IMRT are also applicable in three dimensions. In the majority of cases, weights between 0.5 and 2 will occur for the additional segment. The width-weight product of the second segment is always smaller than the normalized radius of the OAR. The width-weight product of the additional segment is strictly connected to the relevant diameter of the organ at risk and the target volume. The derived formulas can be helpful to adapt an IMRT plan to altering target shapes.
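
    The two-parameter optimization described above can be caricatured in one dimension: a primary segment under-doses the PTV while sparing a central OAR, and a second segment of width w and weight a fills in the deficit; f(w, a) is the RMS deviation from the reference dose, as in the abstract. The geometry and dose numbers below are invented for illustration; the paper's model is cylindrical and 3-D.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
r, ptv = 1.0, 4.0                        # OAR and PTV half-widths (toy values)
in_ptv = (np.abs(x) > r) & (np.abs(x) < ptv)

def dose(w, a):
    seg1 = 0.7 * in_ptv                                   # primary segment
    seg2 = a * ((np.abs(x) >= r) & (np.abs(x) <= r + w))  # second segment
    return seg1 + seg2

def f(w, a):
    # RMS deviation from the reference dose of 1.0 inside the PTV
    resid = dose(w, a)[in_ptv] - 1.0
    return float(np.sqrt(np.mean(resid ** 2)))

# brute-force search over the two free parameters (width and weight),
# echoing the grid-search flavour of the 2-Step optimization
widths = np.linspace(0.1, 3.0, 30)
weights = np.linspace(0.0, 1.0, 101)
best = min((f(w, a), w, a) for w in widths for a in weights)
```

    In this toy the optimum is the widest compensating segment at the weight that exactly restores the reference dose, illustrating why the width-weight product of the second segment is the operative quantity.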

  13. Optimization of Aperiodic Waveguide Mode Converters

    SciTech Connect

    Burke, G J; White, D A; Thompson, C A

    2004-12-09

    Previous studies by Haq, Webb and others have demonstrated the design of aperiodic waveguide structures to act as filters and mode converters. These aperiodic structures have been shown to yield high efficiency mode conversion or filtering in lengths considerably shorter than structures using gradual transitions and periodic perturbations. The design method developed by Haq and others has used mode-matching models for the irregular, stepped waveguides coupled with computer optimization to achieve the design goal using a Matlab optimization routine. Similar designs are described here, using a mode matching code written in Fortran and with optimization accomplished with the downhill simplex method with simulated annealing using an algorithm from the book Numerical Recipes in Fortran. Where Haq et al. looked mainly for waveguide shapes with relatively wide cavities, we have sought lower profile designs. It is found that lower profiles can meet the design goals and result in a structure with lower Q. In any case, there appear to be very many possible configurations for a given mode conversion goal, to the point that it is unlikely to find the same design twice. Tolerance analysis was carried out for the designs to show edge sensitivity and Monte Carlo degradation rate. The mode matching code and mode conversion designs were validated by comparison with FDTD solutions for the discontinuous waveguides.
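
    The annealing component of the search described above can be sketched in pure Python: downhill moves are always accepted, uphill moves with Boltzmann probability, and the temperature is cooled geometrically. The multimodal test function, starting point, and schedule are invented for illustration, not taken from the report, and the abstract's simplex moves are replaced by simple Gaussian proposals.

```python
import math
import random

random.seed(2)

def cost(x):
    # Multimodal test function: a quadratic bowl with sinusoidal ripples;
    # global minimum near x ~ 2.2, local minima elsewhere.
    return (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x) + 1.5

x = 10.0                      # deliberately poor starting point
T = 5.0                       # initial temperature
best_x, best_c = x, cost(x)
for _ in range(20000):
    cand = x + random.gauss(0.0, 0.5)
    dc = cost(cand) - cost(x)
    # downhill always accepted; uphill with probability exp(-dc / T)
    if dc < 0 or random.random() < math.exp(-dc / T):
        x = cand
    if cost(x) < best_c:
        best_x, best_c = x, cost(x)
    T *= 0.9995               # slow geometric cooling schedule
```

    Early high-temperature moves let the walker escape the ripples' local minima; as T decays the search settles into the global basin, which is why the report pairs annealing with the downhill simplex for mode-converter design spaces with many comparable optima.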

  14. MST Filterability Tests

    SciTech Connect

    Poirier, M. R.; Burket, P. R.; Duignan, M. R.

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2–4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  15. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
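    The steady-state Kalman estimation error that the tuner selection minimizes can be computed by iterating the discrete Riccati equation to convergence. The sketch below uses a hypothetical two-state plant with illustrative numbers, not an engine model.

```python
import numpy as np

# Minimal sketch (not the NASA tuner-selection code): iterate the discrete
# Riccati recursion to the steady-state a-priori covariance, then report the
# a-posteriori error covariance -- the quantity a tuner selection would minimize.
def steady_state_kalman(A, H, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        P_post = (np.eye(A.shape[0]) - K @ H) @ P # a-posteriori covariance
        P = A @ P_post @ A.T + Q                  # propagate to next a-priori
    return K, P_post

# Hypothetical 2-state plant with a single sensor (illustrative numbers).
A = np.array([[0.95, 0.1], [0.0, 0.9]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
K, P_post = steady_state_kalman(A, H, Q, R)
print(np.trace(P_post))   # steady-state mean squared estimation error
```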

  16. An active filter primer

    NASA Astrophysics Data System (ADS)

    Delagrange, A. D.

    1983-02-01

    In the past few years active filters have become very popular. This report explains why, and explains what active filters can (and can't) do. It gives the basics of active filter design, both theory and practice. It can be used as a handbook to build working active filters of the most common types. This report is an update of the original issued in 1979.

  17. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.

  18. Reduction of turbidity by a coal-aluminium filter

    SciTech Connect

    Collins, A.G.; Johnson, R.L.

    1985-06-01

    Coal-aluminium granular filters successfully reduce turbidity in low-alkalinity raw waters to less than 1.0 ntu, without a coagulation step or external coagulant aids. Data from experiments conducted with control and pilot-plant filters show the viability of the process and indicate the turbidity and retention mechanisms. Operational characteristics of the process are similar to those of a conventional filter. The costs of the coal-aluminium process compare favourably with those of traditional treatment.

  19. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  20. Practical Active Capacitor Filter

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2005-01-01

    A method and apparatus is described that filters an electrical signal. The filtering uses a capacitor multiplier circuit where the capacitor multiplier circuit uses at least one amplifier circuit and at least one capacitor. A filtered electrical signal results from a direct connection from an output of the at least one amplifier circuit.

  1. HEPA filter encapsulation

    DOEpatents

    Gates-Anderson, Dianne D. (Union City, CA); Kidd, Scott D. (Brentwood, CA); Bowers, John S. (Manteca, CA); Attebery, Ronald W. (San Lorenzo, CA)

    2003-01-01

    A low viscosity resin is delivered into a spent HEPA filter or other waste. The resin is introduced into the filter or other waste using a vacuum to assist in the mass transfer of the resin through the filter media or other waste.

  2. Filter service system

    DOEpatents

    Sellers, Cheryl L. (Peoria, IL); Nordyke, Daniel S. (Arlington Heights, IL); Crandell, Richard A. (Morton, IL); Tomlins, Gregory (Peoria, IL); Fei, Dong (Peoria, IL); Panov, Alexander (Dunlap, IL); Lane, William H. (Chillicothe, IL); Habeger, Craig F. (Chillicothe, IL)

    2008-12-09

    According to an exemplary embodiment of the present disclosure, a system for removing matter from a filtering device includes a gas pressurization assembly. An element of the assembly is removably attachable to a first orifice of the filtering device. The system also includes a vacuum source fluidly connected to a second orifice of the filtering device.

  3. Parallel DC notch filter

    NASA Astrophysics Data System (ADS)

    Kwok, Kam-Cheung; Chan, Ming-Kam

    1991-12-01

    In the process of image acquisition, the object of interest may not be evenly illuminated. So an image with shading irregularities would be produced. This type of image is very difficult to analyze. Consequently, a lot of research work concentrates on this problem. In order to remove the light illumination problem, one of the methods is to filter the image. The dc notch filter is one of the spatial domain filters used for reducing the effect of uneven light illumination on the image. Although the dc notch filter is a spatial domain filter, it is still rather time consuming to apply, especially when it is implemented on a microcomputer. To overcome the speed problem, a parallel dc notch filter is proposed. Based on the separability of the dc notch filter algorithm, image parallelism (a parallel image processing model) is used. To improve the performance of the microcomputer, an INMOS IMS B008 Module Mother Board with four IMS T800-17 is installed in the microcomputer. In fact, the dc notch filter is implemented on the transputer network. This parallel dc notch filter greatly improves the computation time of the filter in comparison with the sequential one. Furthermore, the speed-up is used to analyze the performance of the parallel algorithm. As a result, parallel implementation of the dc notch filter on a transputer network gives real-time performance of this filter.
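    As a rough illustration of what such a filter does, the sketch below models the dc notch as subtracting each pixel's local (dc) neighborhood mean; the exact kernel and window size from the paper are not reproduced. The box average used here is separable, which is the property the parallel decomposition exploits.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sketch of a spatial-domain dc notch: subtract the local mean (the "dc"
# component of each neighborhood) and re-center. Window size is illustrative.
def dc_notch(image, size=31):
    local_dc = uniform_filter(image.astype(float), size=size)  # separable box mean
    return image - local_dc + image.mean()

# Synthetic unevenly illuminated image: flat scene plus a left-to-right gradient.
h, w = 64, 64
scene = np.full((h, w), 100.0)
gradient = np.linspace(0, 50, w)[None, :]
corrected = dc_notch(scene + gradient)
print(corrected.std())   # much flatter than the input
```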

  4. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
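    The two nested optimization loops described above can be sketched schematically. The candidate error states, the parameter grid, and the `filter_cost` stand-in for the Monte Carlo performance metric are all hypothetical.

```python
import itertools

# Schematic sketch of ENFAD's two loops; everything here is illustrative.
candidate_states = ["gyro_bias", "accel_bias", "clock_drift", "scale_factor"]
param_grid = [0.1, 1.0, 10.0]   # hypothetical process-noise settings

def filter_cost(states, q):     # stand-in for Monte Carlo filter error; lower is better
    useful = {"gyro_bias": 3.0, "accel_bias": 2.0}
    base = 10.0 - sum(useful.get(s, -0.5) for s in states)  # extra states hurt
    return base + (q - 1.0) ** 2

best = None
for r in range(1, len(candidate_states) + 1):
    for states in itertools.combinations(candidate_states, r):  # loop 1: error states
        for q in param_grid:                                    # loop 2: tuning values
            cost = filter_cost(states, q)
            if best is None or cost < best[0]:
                best = (cost, states, q)
print(best)
```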

  5. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. 
The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses hidden in vibration signals and performs well for bearing fault diagnosis.
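    A generic magnitude-domain spectral subtraction step can be sketched as follows; here a known noise spectrum stands in for the paper's PF-based noise estimate, and a pure tone stands in for the periodic fault signature.

```python
import numpy as np

# Sketch of spectral subtraction: subtract an estimated noise power spectrum,
# floor negative values at zero, and rebuild with the noisy phase.
def spectral_subtraction(noisy, noise_psd_est):
    spectrum = np.fft.rfft(noisy)
    clean_power = np.maximum(np.abs(spectrum) ** 2 - noise_psd_est, 0.0)
    clean_mag = np.sqrt(clean_power)
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(spectrum)), n=noisy.size)

rng = np.random.default_rng(0)
n, fs = 4096, 4096.0
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 97 * t)          # stand-in for the fault signature
noise = rng.normal(scale=1.0, size=n)
noisy = tone + noise
# In practice the noise PSD is estimated (here: taken from the known noise).
noise_psd = np.abs(np.fft.rfft(noise)) ** 2
denoised = spectral_subtraction(noisy, noise_psd)
print(np.corrcoef(denoised, tone)[0, 1])
```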

  6. B-spline design of digital FIR filter using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Swain, Manorama; Panda, Rutuparna

    2011-10-01

    In the forthcoming era, digital filters are becoming a true replacement for analog filter designs. In this paper we examine a design method for FIR filters using global search optimization techniques known as evolutionary computation, via genetic algorithms and bacterial foraging, where the filter design is treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline window based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized by using GA and EBFS. It is observed that the desired response can be obtained with lower order FIR filters with optimal width and tuning parameters.
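    The windowed-sinc design idea with a B-spline window can be sketched directly: an order-p B-spline window is the p-fold convolution of a rectangular window, with p acting as a tuning parameter for side-lobe decay. This is an illustrative direct design, not the GA/bacterial-foraging code.

```python
import numpy as np

# B-spline window: p-fold convolution of a rectangular window, resampled to
# the requested tap count. Order and cutoff values below are illustrative.
def bspline_window(length, order):
    seg = max(length // order, 1)
    w = np.ones(seg)
    for _ in range(order - 1):
        w = np.convolve(w, np.ones(seg))
    x = np.linspace(0, w.size - 1, length)        # resample to 'length' taps
    return np.interp(x, np.arange(w.size), w / w.max())

def fir_lowpass(num_taps, cutoff, order=3):       # cutoff in cycles/sample
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)      # ideal lowpass impulse response
    h *= bspline_window(num_taps, order)
    return h / h.sum()                            # unity gain at dc

h = fir_lowpass(63, cutoff=0.1, order=3)
H = np.abs(np.fft.rfft(h, 1024))
print(H[0])   # ~1 at dc
```

    Raising `order` smooths the window further, trading a wider transition band for faster side-lobe decay, which is the bandwidth/tuning-parameter trade-off the abstract refers to.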

  7. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology due to its uniqueness and invariance, making it the most convenient and reliable technique for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security. Meanwhile, the fingerprint preprocessing algorithm plays an important part in an Automatic Fingerprint Identification System. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The result shows that the Gabor filter is effective in fingerprint image enhancement.
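    A minimal even-symmetric Gabor kernel of the kind used for ridge enhancement can be written directly; the orientation, frequency, and sigma values below are illustrative.

```python
import numpy as np

# Even-symmetric Gabor kernel: a sinusoid at the local ridge frequency,
# oriented along the local ridge angle, under a Gaussian envelope.
def gabor_kernel(size, theta, freq, sigma_x=4.0, sigma_y=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate into the ridge frame
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (x_t ** 2 / sigma_x ** 2 + y_t ** 2 / sigma_y ** 2))
    return envelope * np.cos(2 * np.pi * freq * x_t)

k = gabor_kernel(size=17, theta=np.pi / 4, freq=0.1)
print(k.shape, k[8, 8])   # center value is 1.0 (cos(0) * exp(0))
```

    In a full enhancement pipeline, one such kernel per estimated local orientation and ridge frequency is convolved with the corresponding image block.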

  8. Towards robust particle filters for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Peter Jan

    2015-04-01

    In recent years particle filters have matured and several variants are now available that are not degenerate for high-dimensional systems. Often they are based on ad-hoc combinations with Ensemble Kalman Filters. Unfortunately it is unclear what approximations are made when these hybrids are used. The proper way to derive particle filters for high-dimensional systems is exploring the freedom in the proposal density. It is well known that using an Ensemble Kalman Filter as proposal density (the so-called Weighted Ensemble Kalman Filter) does not work for high-dimensional systems. However, much better results are obtained when weak-constraint 4DVar is used as proposal, leading to the implicit particle filter. Still this filter is degenerate when the number of independent observations is large. The Equivalent-Weights Particle Filter is a filter that works well in systems of arbitrary dimensions, but it contains a few tuning parameters that have to be chosen well to avoid biases. In this paper we discuss ways to derive more robust particle filters for high-dimensional systems. Using ideas from large-deviation theory and optimal transportation, particle filters will be generated that are robust and work well in these systems. It will be shown that all successful filters can be derived from one general framework. Also, the performance of the filters will be tested on simple but high-dimensional systems, and, if time permits, on a high-dimensional highly nonlinear barotropic vorticity equation model.

  9. Regenerative particulate filter development

    NASA Technical Reports Server (NTRS)

    Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

    1972-01-01

    Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

  10. A Step Circuit Program.

    ERIC Educational Resources Information Center

    Herman, Susan

    1995-01-01

    Aerobics instructors can use step aerobics to motivate students. One creative method is to add the step to the circuit workout. By incorporating the step, aerobic instructors can accommodate various fitness levels. The article explains necessary equipment and procedures, describing sample stations for cardiorespiratory fitness, muscular strength,

  11. Stepped Hydraulic Geometry in Stepped Channels

    NASA Astrophysics Data System (ADS)

    Comiti, F.; Cadol, D. D.; Wohl, E.

    2007-12-01

    Steep mountain streams typically present a stepped longitudinal profile. Such stepped channels feature tumbling flow, where hydraulic jumps represent an important source of channel roughness (spill resistance). However, the extent to which spill resistance persists up to high flows has not yet been ascertained; a faster, skimming flow has been envisaged to begin under those conditions. In order to analyze the relationship between flow resistance and bed morphology, a mobile bed physical model was developed at Colorado State University (Fort Collins, USA). An 8 m-long, 0.6 m-wide flume tilted at a constant 14% slope was used, testing 2 grain-size mixtures differing only in the largest fraction. Experiments were conducted under clear water conditions. Reach-averaged flow velocity was measured using salt tracers, bed morphology and flow depth by a point gage, and surface grain size using commercial image-analysis software. Starting from an initial plane bed, progressively higher flow rates were used to create different bed structures. After each bed morphology was stable with its forming discharge, lower-than-forming flows were run to build a hydraulic geometry curve. Results show that even though equilibrium slopes ranged from 8.5% to 14%, the reach-averaged flow was always sub-critical. Steps formed through a variety of mechanisms, with immobile clasts playing a dominant role by causing local scouring and/or trapping moving smaller particles. Overall, step height, step-pool steepness, relative pool area and volume increased with discharge up to the threshold at which the bed approached fully mobilized conditions. For bed morphologies surpassing a minimum profile roughness, a stepped velocity-discharge relationship is evident, with sharp rises in velocity correlated with the disappearance of rollers in pools at flows approaching the formative discharge for each morphology. 
    Flow resistance exhibits the opposite pattern, with drops in resistance being a function of the height of the drowned steps. Step formation seems to occur under a hydraulic regime different from that of lower flows, because spill resistance begins below step-forming flows.

  12. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
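    Two of the preprocessing steps discussed above, an NDVI-based green-vegetation mask and per-band Gaussian smoothing, can be sketched on a synthetic cube. The band indices and the NDVI threshold below are illustrative, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# NDVI = (NIR - Red) / (NIR + Red); the small epsilon guards against 0/0.
def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(0)
cube = rng.uniform(0.0, 1.0, size=(20, 20, 50))        # rows x cols x bands
red, nir = cube[:, :, 30], cube[:, :, 45]              # hypothetical band positions
veg_mask = ndvi(nir, red) > 0.5                        # keep green-vegetation pixels
smoothed = gaussian_filter(cube, sigma=(0, 0, 1.0))    # smooth along bands only
print(veg_mask.mean(), smoothed.shape)
```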

  13. The compensated Kalman filter.

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1972-01-01

    This paper introduces the compensated Kalman filter, a suboptimal state estimator which can be used to eliminate steady-state bias errors when it is used in conjunction with the mismatched steady-state (asymptotic) time-invariant Kalman-Bucy filter. The uncompensated mismatched steady state Kalman-Bucy filter exhibits bias errors whenever the nominal plant parameters used in the filter design are different from the actual plant parameters. The approach used relies on the utilization of the residual (innovations) process of the mismatched filter to estimate, via a Kalman-Bucy filter, the state estimation errors and subsequent improvements of the state estimate. The compensated Kalman filter augments the mismatched steady state Kalman-Bucy filter by the introduction of additional dynamics and feedforward integral compensation channels.

  14. A superior edge preserving filter with a systematic analysis

    NASA Technical Reports Server (NTRS)

    Holladay, Kenneth W.; Rickman, Doug

    1991-01-01

    A new, adaptive, edge preserving filter for use in image processing is presented. It has superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels have been accumulated. Rather than simply comparing the visual results of processing with this operator to other filters, approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
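    The growth rule as described can be implemented directly from the abstract; this is a sketch of that rule (4-connected contiguity assumed), not the authors' code.

```python
import numpy as np

# Contiguous K-average sketch: grow a cluster from the target pixel, always
# adding the contiguous neighbor whose value is closest to the current cluster
# mean, then replace the pixel by the cluster mean.
def contiguous_k_average(img, r, c, k=9):
    cluster = {(r, c)}
    total = float(img[r, c])
    while len(cluster) < k:
        mean = total / len(cluster)
        frontier = set()
        for (i, j) in cluster:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1] \
                        and (ni, nj) not in cluster:
                    frontier.add((ni, nj))
        if not frontier:
            break
        best = min(frontier, key=lambda p: abs(img[p] - mean))
        cluster.add(best)
        total += float(img[best])
    return total / len(cluster)

# On a step edge, the cluster grows along its own side, so the edge survives.
img = np.zeros((8, 8)); img[:, 4:] = 100.0
print(contiguous_k_average(img, 3, 3), contiguous_k_average(img, 3, 5))
```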

  15. Particle Kalman Filtering: A Nonlinear Bayesian Framework for Ensemble Kalman Filters

    NASA Astrophysics Data System (ADS)

    Hoteit, I.; Luo, X.; Pham, D.

    2012-12-01

    This contribution discusses a discrete scheme of the optimal nonlinear Bayesian filter based on the Gaussian mixture representation of the state probability distribution function. The resulting filter is similar to the particle filter, but is different from it in that the standard weight-type correction in the particle filter is complemented by the Kalman-type correction with the associated covariance matrices in the Gaussian mixture. It is therefore referred to as the particle Kalman filter (PKF). In the PKF, the solution of a nonlinear filtering problem is expressed as the weighted average of an "ensemble of Kalman filters" operating in parallel. Running an ensemble of Kalman filters is, however, computationally prohibitive for realistic atmospheric and oceanic data assimilation problems. The PKF is then implemented through an "ensemble" of ensemble Kalman filters (EnKFs), and we refer to this implementation as the particle EnKF (PEnKF). We also discuss how the different types of the EnKFs can be considered as special cases of the PEnKF. Numerical experiments with the strongly nonlinear Lorenz-96 model will be presented.
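    The PKF analysis step can be illustrated in the scalar case: each Gaussian component receives a Kalman-type correction, while the mixture weights receive the particle-filter-type likelihood correction. All numbers below are illustrative.

```python
import numpy as np

# Scalar Gaussian-mixture (PKF-style) update sketch.
def pkf_update(means, variances, weights, y, obs_var):
    innov_var = variances + obs_var
    gains = variances / innov_var
    post_means = means + gains * (y - means)            # Kalman-type correction
    post_vars = (1.0 - gains) * variances
    likes = np.exp(-0.5 * (y - means) ** 2 / innov_var) / np.sqrt(2 * np.pi * innov_var)
    new_w = weights * likes                             # weight-type correction
    new_w /= new_w.sum()
    return post_means, post_vars, new_w

means = np.array([-2.0, 0.0, 2.0])
variances = np.array([1.0, 1.0, 1.0])
weights = np.array([1 / 3, 1 / 3, 1 / 3])
m, v, w = pkf_update(means, variances, weights, y=1.5, obs_var=0.5)
print(w, (w * m).sum())   # weight shifts toward the component nearest y
```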

  16. Optical amplification and optical filter based signal processing for cost and energy efficient spatial multiplexing.

    PubMed

    Krummrich, Peter M

    2011-08-15

    Spatial division multiplexing has been proposed as an option for further capacity increase of transmission fibers. Application of this concept is attractive only if cost- and energy-efficient implementations can be found. In this work, optical amplification and optical filter based signal processing concepts are investigated. Deployment of multimode fibers as the waveguide type for erbium-doped fiber amplifiers potentially offers cost and energy efficiency advantages compared to using multicore fibers in preamplifier as well as booster stages. Additional advantages can be gained from optimization of the amplifier module design. Together with transponder design optimizations, they can increase the attractiveness of inverse spatial multiplexing, which is proposed as an intermediate step. Signal processing based on adaptive passive optical filters offers an alternative approach for the separation of channels at the receiver which have experienced mode coupling along the link. With this optical filter based approach, fiber capacity can potentially be increased faster and more energy efficiently than with solutions relying solely on electronic signal processing. PMID:21935026

  17. Canonical Signed Digit Study. Part 2; FIR Digital Filter Simulation Results

    NASA Technical Reports Server (NTRS)

    Kim, Heechul

    1996-01-01

    A Finite Impulse Response digital filter using Canonical Signed-Digit (CSD) number representation for the coefficients has been studied, and its computer simulation results are presented here. A Minimum Mean Square Error (MMSE) criterion is employed to quantize the filter coefficients into the corresponding CSD numbers. To further improve the coefficient optimization process, an extra non-zero bit is added for any filter coefficient exceeding 1/2. This technique improves the frequency response of the filter with almost no increase in filter complexity. The simulation results show outstanding performance in the bit-error-rate (BER) curves for all CSD-implemented digital filters included in this presentation material.
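    The CSD representation itself is straightforward to compute via the nonadjacent form: digits in {-1, 0, +1} with no two adjacent nonzero digits, which minimizes the adders needed in a multiplierless filter. The sketch below converts integer coefficients; the MMSE quantization step from the study is not reproduced.

```python
# Convert a positive integer to canonical signed-digit (nonadjacent) form,
# least-significant digit first, and back.
def to_csd(n):
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)        # +1 or -1, chosen so (n - d) is divisible by 4
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_csd(digits):
    return sum(d * (1 << i) for i, d in enumerate(digits))

# 7 = 8 - 1 needs two nonzero digits in CSD versus three in plain binary.
print(to_csd(7), from_csd(to_csd(7)))
```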

  18. Design of optical bandpass filters based on a two-material multilayer structure.

    PubMed

    Belyaev, B A; Tyurnev, V V; Shabanov, V F

    2014-06-15

    An easy method for designing filters with equalized passband ripples of a given magnitude is proposed. The filter, which is made of two dielectric materials, comprises coupled half-wavelength resonators and multilayer mirrors. The filter design begins with the synthesis of the multimaterial filter prototype whose mirrors consist of quarter-wavelength layers. Optimal refractive indices of the layers in the prototype are obtained by a special optimization based on universal rules. The thicknesses of the mirrors' layers in the final filter are computed using derived formulas. A design procedure example for silicon-air bandpass filters with a fractional bandwidth of 1% is described. PMID:24978524
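    Multilayer mirrors of the kind mentioned above are conventionally analyzed with the characteristic-matrix method. The sketch below implements that standard method at normal incidence for an illustrative quarter-wave stack; it is not the authors' synthesis procedure, and the layer indices are example values.

```python
import numpy as np

# Characteristic-matrix (transfer-matrix) reflectance of a dielectric stack
# at normal incidence; lengths in the same units as the wavelength.
def stack_reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength        # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam = 0.55                                            # design wavelength, micrometres
nH, nL, n_sub = 2.35, 1.38, 1.52                      # e.g. TiO2 / MgF2 on glass
layers = [nH, nL] * 8                                 # eight quarter-wave pairs
thicknesses = [lam / (4 * n) for n in layers]
print(stack_reflectance(layers, thicknesses, 1.0, n_sub, lam))
```

    A bandpass filter of the type in the abstract then places half-wavelength resonator layers between two such mirrors.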

  19. The importance of time-stepping errors in ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
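    The RA and RAW filters are simple to state in code: d is the filter displacement, RA adds all of d to the middle time level, and RAW splits it between the middle and newest levels (with alpha = 1 recovering RA, and alpha = 0.5 conserving the three-time-level mean). The oscillator test below uses illustrative parameter values.

```python
import numpy as np

# Leapfrog integration of the test oscillator dx/dt = i*omega*x with the
# RA/RAW filter applied each step; returns the final amplitude (exact value: 1).
def leapfrog(alpha, nu=0.2, omega=1.0, dt=0.2, steps=500):
    x_prev, x = 1.0 + 0j, np.exp(1j * omega * dt)   # exact start values
    for _ in range(steps):
        x_next = x_prev + 2 * dt * 1j * omega * x   # leapfrog step
        d = 0.5 * nu * (x_prev - 2 * x + x_next)    # filter displacement
        x_prev = x + alpha * d                      # RA part (alpha = 1: pure RA)
        x_next += (alpha - 1.0) * d                 # extra RAW correction
        x = x_next
    return abs(x)

print(leapfrog(alpha=1.0), leapfrog(alpha=0.53))    # RA damps far more than RAW
```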

  20. Precise dispersion equations of absorbing filter glasses

    NASA Astrophysics Data System (ADS)

    Reichel, S.; Biertümpfel, Ralf

    2014-05-01

    The refractive indices of optically transparent glasses are measured at only a few wavelengths. To calculate the refractive index at any wavelength, a so-called Sellmeier series is used to approximate the wavelength-dependent refractive index. Such a Sellmeier representation assumes an absorption-free (i.e., lossless) material. For optically transparent glasses this assumption is valid, since their absorption is very low. Optical filter glasses, however, often have rather high absorbance in certain regions of the spectrum. An exact description of the wavelength-dependent refractive index is essential for optimized designs in sophisticated optical applications. Digital cameras use an IR cut filter to ensure good color rendition and image quality. To reduce ghost images caused by reflections, and to keep the response nearly independent of the angle of incidence, absorbing filter glass is used, e.g., the blue glass BG60 from SCHOTT. As digital cameras improve their performance, the IR cut filter must improve as well, so the refractive index (dispersion) of the glasses used must be known accurately. But absorbing filter glass is not lossless, as a Sellmeier representation requires, and measuring the refractive index within the absorption region of the filter glass is very difficult. We have devoted considerable effort to measuring the refractive index of absorbing filter glass at specific wavelengths, even in the absorption region, and we describe how such measurements are done. In addition, we assess the applicability of a Sellmeier representation to filter glasses. It turns out that in most cases a Sellmeier representation can be used even for absorbing filter glasses. Finally, Sellmeier coefficients for approximating the refractive index are given for several filter glasses. PMID:24978524
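For reference, the Sellmeier series mentioned above has the standard form n²(λ) = 1 + Σᵢ Bᵢλ²/(λ² − Cᵢ), with λ in micrometres. A minimal sketch evaluating it for fused silica with Malitson's widely published coefficients (an ordinary transparent glass, used here only to illustrate the representation, not one of the filter glasses in the paper):

```python
import math

# Standard Malitson Sellmeier coefficients for fused silica (illustration only).
B = (0.6961663, 0.4079426, 0.8974794)
C = (0.0684043**2, 0.1162414**2, 9.896161**2)

def sellmeier_n(lam_um):
    """Refractive index from the three-term Sellmeier series, lam_um in micrometres."""
    n2 = 1.0 + sum(b * lam_um**2 / (lam_um**2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

n_d = sellmeier_n(0.5893)   # refractive index near the sodium d-line, ~1.458
```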

  1. Compact planar microwave blocking filters

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop (Inventor); Wollack, Edward J. (Inventor)

    2012-01-01

    A compact planar microwave blocking filter includes a dielectric substrate and a plurality of filter unit elements disposed on the substrate. The filter unit elements are interconnected in a symmetrical series cascade with filter unit elements being organized in the series based on physical size. In the filter, a first filter unit element of the plurality of filter unit elements includes a low impedance open-ended line configured to reduce the shunt capacitance of the filter.

  2. Anti-resonance mixing filter

    NASA Technical Reports Server (NTRS)

    Evans, Paul S. (Inventor)

    2001-01-01

    In a closed-loop control system that governs the movement of an actuator, a filter is provided that attenuates the oscillations generated by the actuator when the actuator is at a resonant frequency. The filter is preferably coded into the control system and includes the following steps: sensing the position of the actuator with an LVDT, and sensing the position of the motor that drives the actuator through a gear train. When the actuator is at a resonant frequency, a lag is applied to the LVDT signal, which is then combined with the motor position signal to form a combined signal in which the oscillations generated by the actuator are attenuated. The control system then acts on this combined signal. This arrangement prevents the amplified resonance present on the LVDT signal from causing control instability, while retaining the steady-state accuracy associated with the LVDT signal. It is also a characteristic of this arrangement that the signal attenuation always coincides with the load resonance frequency of the system, so that variations in the resonance frequency do not reduce the effectiveness of the filter.

  3. Foam Filters Used in Gravity Casting

    NASA Astrophysics Data System (ADS)

    Hsu, Fu-Yuan; Lin, Huey-Jiuan

    2011-12-01

    Ceramic foam filters are normally used to reduce the velocity of liquid metal in the design of runner systems. In this study, four designs of the runner system with various orientations of foam filters were explored, and their effect on the velocity of the melt was estimated by a casting experiment and computational modeling. In the casting experiment, the free-fall trajectory and metal weighing methods were employed to measure apparent velocity and flow rate, respectively. The foam filter was modeled as a porous material using Forchheimer's equation, and the modeling result was validated by the experiment. For the efficient use of a foam filter in a runner system with a high flow rate but low exit velocity, an optimized design is recommended.
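Forchheimer's equation, as used above, augments Darcy's law with an inertial term quadratic in velocity. A minimal sketch; the permeability k and inertial coefficient beta below are hypothetical placeholder values, not data from the study:

```python
def forchheimer_pressure_gradient(v, mu, rho, k, beta):
    """Pressure gradient dP/dx (Pa/m) across a porous medium.

    The Darcy term (mu/k)*v dominates at low velocity; the inertial
    Forchheimer term beta*rho*v**2 dominates at high velocity.
    """
    return (mu / k) * v + beta * rho * v**2

# Illustrative numbers only: a metal melt through a ceramic foam.
mu, rho = 1.3e-3, 2375.0        # Pa*s, kg/m^3 (order of magnitude for an Al melt)
k, beta = 1e-7, 500.0           # m^2, 1/m -- hypothetical filter parameters
dpdx_slow = forchheimer_pressure_gradient(0.1, mu, rho, k, beta)
dpdx_fast = forchheimer_pressure_gradient(1.0, mu, rho, k, beta)
```

Because of the quadratic term, a tenfold increase in velocity raises the pressure gradient by far more than a factor of ten, which is why foam filters are effective at throttling fast melt streams.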

  4. An average-reward reinforcement learning algorithm for computing bias-optimal policies

    SciTech Connect

    Mahadevan, S.

    1996-12-31

    Average-reward reinforcement learning (ARL) is an undiscounted optimality framework that is generally applicable to a broad range of control tasks. ARL computes gain-optimal control policies that maximize the expected payoff per step. However, gain-optimality has some intrinsic limitations as an optimality criterion, since for example, it cannot distinguish between different policies that all reach an absorbing goal state, but incur varying costs. A more selective criterion is bias optimality, which can filter gain-optimal policies to select those that reach absorbing goals with the minimum cost. While several ARL algorithms for computing gain-optimal policies have been proposed, none of these algorithms can guarantee bias optimality, since this requires solving at least two nested optimality equations. In this paper, we describe a novel model-based ARL algorithm for computing bias-optimal policies. We test the proposed algorithm using an admission control queuing system, and show that it is able to utilize the queue much more efficiently than a gain-optimal method by learning bias-optimal policies.

  5. Electromechanical Frequency Filters

    NASA Astrophysics Data System (ADS)

    Wersing, W.; Lubitz, K.

    Frequency filters select signals whose frequency lies inside a definite frequency range or band and reject signals outside this band; traditionally this was achieved by a combination of L-C resonators. The fundamental principle of all modern frequency filters is the constructive interference of travelling waves. If a filter is built from coupled resonators, this interference occurs as a result of successive wave reflection at the resonators' ends. In this case, the center frequency f_c of a filter made, e.g., of symmetrical λ/2-resonators of length l is given by f_c = f_r = v_ph/λ = v_ph/2l, where v_ph is the phase velocity of the wave. This clearly shows the big advantage of acoustic waves over electromagnetic waves for filter applications: because v_ph of acoustic waves in solids is about 10⁴-10⁵ times smaller than that of electromagnetic waves, much smaller filters can be realised. Today, piezoelectric materials and processing technologies exist with which electromechanical resonators and filters can be produced in the frequency range from 1 kHz up to 10 GHz. Further requirements for frequency filters, such as low losses (high resonator Q) and low temperature coefficients of the frequency constants, can also be fulfilled with these filters. Important examples are quartz-crystal resonators and filters (1 kHz-200 MHz) as discussed in Chap. 2, electromechanical channel filters (50 kHz and 130 kHz) for long-haul communication systems as discussed in this section, surface acoustic wave (SAW) filters (20 MHz-5 GHz) as discussed in Chap. 14, and thin film bulk acoustic resonators (FBAR) and filters (500 MHz-10 GHz) as discussed in Chap. 15.
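The relation f_c = v_ph/2l above fixes the resonator length once the material's phase velocity is known. A quick sketch; the 3750 m/s acoustic velocity used here is an assumed round number of the order found in quartz, for illustration only:

```python
def halfwave_resonator_length(f_c_hz, v_ph_m_s):
    # For a symmetrical lambda/2 resonator: f_c = v_ph / (2*l)  =>  l = v_ph / (2*f_c)
    return v_ph_m_s / (2.0 * f_c_hz)

v_ph = 3750.0                                      # m/s, assumed acoustic phase velocity
l_10mhz = halfwave_resonator_length(10e6, v_ph)    # -> 1.875e-4 m, i.e. ~190 micrometres
```

An electromagnetic resonator at the same 10 MHz would need a length of metres, which is the miniaturization advantage the text describes.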

  6. Step-wise transient method

    NASA Astrophysics Data System (ADS)

    Malinarič, Svetozár

    2016-03-01

    The step-wise transient (SWT) method is an experimental technique for measuring the thermal diffusivity and conductivity of solid materials. A theoretical model, the design of the experimental apparatus and sources of error are presented. Methods of experiment optimization and evaluation are illustrated by charts. The experiment is verified on polymethylmethacrylate (PMMA), yielding a thermal diffusivity of 0.112 mm² s⁻¹ and a thermal conductivity of 0.197 W m⁻¹ K⁻¹, with a coefficient of variation around 0.7% for various values of input heat power and specimen thickness.

  7. Filtering separators having filter cleaning apparatus

    SciTech Connect

    Margraf, A.

    1984-08-28

    This invention relates to filtering separators of the kind having a housing which is subdivided by a partition, provided with parallel rows of holes or slots, into a dust-laden gas space for receiving filter elements positioned in parallel rows and being impinged upon by dust-laden gas from the outside towards the inside, and a clean gas space. In addition, the housing is provided with a chamber for cleansing the filter element surfaces of a row by counterflow action while covering at the same time the partition holes or slots leading to the adjacent rows of filter elements. The chamber is arranged for the supply of compressed air to at least one injector arranged to feed compressed air and secondary air to the row of filter elements to be cleansed. The chamber is also reciprocatingly displaceable along the partition in periodic and intermittent manner. According to the invention, a surface of the chamber facing towards the partition covers at least two of the rows of holes or slots of the partition, and the chamber is closed upon itself with respect to the clean gas space, and is connected to a compressed air reservoir via a distributor pipe and a control valve. At least one of the rows of holes or slots of the partition and the respective row of filter elements in flow communication therewith are in flow communication with the discharge side of at least one injector acted upon with compressed air. At least one other row of the rows of holes or slots of the partition and the respective row of filter elements is in flow communication with the suction side of the injector.

  8. Design-Filter Selection for H2 Control of Microgravity Isolation Systems: A Single-Degree-of-Freedom Case Study

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Whorton, Mark S.

    2000-01-01

    Many microgravity space-science experiments require active vibration isolation to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choice of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations noted during the design process are presented, along with explanations based on the existing theory for such problems.

  9. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.

  10. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification, correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters for SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves similarly for all the datasets using all the algorithms, whereas with OSEM, for a cutoff < 0.4, it fails to generate a cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study of the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
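The Butterworth cutoff discussed above is the frequency at which the filter's magnitude response has fallen to 1/√2. A minimal sketch of the standard magnitude response (frequencies in cycles/pixel, as is conventional for SPECT reconstruction filters; the cutoff and order values are illustrative):

```python
import math

def butterworth_response(f, cutoff, order):
    # Standard low-pass Butterworth magnitude response:
    # |H(f)| = 1 / sqrt(1 + (f / cutoff)^(2 * order))
    return 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))

# A lower cutoff suppresses high frequencies more strongly (more smoothing).
h_pass = butterworth_response(0.1, cutoff=0.4, order=5)   # well below cutoff: ~1
h_edge = butterworth_response(0.4, cutoff=0.4, order=5)   # at cutoff: 1/sqrt(2)
h_stop = butterworth_response(0.8, cutoff=0.4, order=5)   # above cutoff: small
```

Raising the order steepens the roll-off around the cutoff without moving it, which is why cutoff and order are varied independently in the study.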

  11. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains code for a generic Kalman-filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions, and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data.
    The generic Kalman-filter function uses the aforementioned data structures and five implementation-specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
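The GKF library itself is ANSI C, but the linear propagate/update cycle it generically implements is the textbook Kalman filter, which can be sketched compactly (a generic illustration of the algorithm, not the GKF API):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One linear Kalman-filter cycle: state propagation, then measurement update."""
    # Propagation (time update)
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement update
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Track 1-D position/velocity from position-only measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                   # observe position only
Q = 1e-6 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for k in range(1, 51):                       # true motion: position = t, velocity = 1
    x, P = kalman_step(x, P, np.array([float(k)]), F, H, Q, R)
```

The filter quickly locks onto both the measured position and the unmeasured velocity, which is the behavior any GKF-derived linear filter would reproduce.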

  12. Concentric Split Flow Filter

    NASA Technical Reports Server (NTRS)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to effectively fill the previously wasted volume surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through a second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 as compared to the conventional design.

  13. Optically tunable optical filter

    NASA Astrophysics Data System (ADS)

    James, Robert T. B.; Wah, Christopher; Iizuka, Keigo; Shimotahira, Hiroshi

    1995-12-01

    We experimentally demonstrate an optically tunable optical filter that uses photorefractive barium titanate. With our filter we implement a spectrum analyzer at 632.8 nm with a resolution of 1.2 nm. We simulate a wavelength-division multiplexing system by separating two semiconductor laser diodes, at 1560 nm and 1578 nm, with the same filter. The filter has a bandwidth of 6.9 nm. We also use the same filter to take 2.5-nm-wide slices out of a 20-nm-wide superluminescent diode centered at 840 nm. As a result, we experimentally demonstrate a phenomenal tuning range from 632.8 to 1578 nm with a single filtering device.

  14. Contactor/filter improvements

    DOEpatents

    Stelman, D.

    1988-06-30

    A contactor/filter arrangement for removing particulate contaminants from a gaseous stream is described. The filter includes a housing having a substantially vertically oriented granular material retention member with upstream and downstream faces, a substantially vertically oriented microporous gas filter element, wherein the retention member and the filter element are spaced apart to provide a zone for the passage of granular material therethrough. A gaseous stream containing particulate contaminants passes through the gas inlet means as well as through the upstream face of the granular material retention member, passing through the retention member, the body of granular material, the microporous gas filter element, exiting out of the gas outlet means. A cover screen isolates the filter element from contact with the moving granular bed. In one embodiment, the granular material is comprised of porous alumina impregnated with CuO, with the cover screen cleaned by the action of the moving granular material as well as by backflow pressure pulses. 6 figs.

  15. STEP Technology Development

    NASA Astrophysics Data System (ADS)

    Torii, R.; Step Team

    STEP (Satellite Test of the Equivalence Principle) is a space experiment to test the Equivalence Principle to one part in 10¹⁸ by comparing the rates of fall of four test mass pairs in Earth orbit. The STEP instrument supports four differential accelerometers, operated simultaneously to maximize the quality and quantity of data. The instrument is inserted into a Dewar of liquid helium at a nominal temperature of 1.8 K. Aerogel, a low density porous glass, is placed in the liquid helium Dewar to reduce helium mass motion. Recent NASA funding has enabled the STEP team at Stanford to continue hardware development and advance STEP technology. We focus near term on the development of critical flight technologies needed to prototype STEP payload hardware: accelerometer, probe, and Dewar. We will present our most recent progress in STEP technology development and our future plans to bring all our key technologies to full maturity.

  16. Thermal control design of the Lightning Mapper Sensor narrow-band spectral filter

    NASA Technical Reports Server (NTRS)

    Flannery, Martin R.; Potter, John; Raab, Jeff R.; Manlief, Scott K.

    1992-01-01

    The performance of the Lightning Mapper Sensor is dependent on the temperature shifts of its narrowband spectral filter. To perform over a 10 degree FOV with a 0.8 nm bandwidth, the filter must be 15 cm in diameter and mounted externally to the telescope optics. The filter thermal control required a filter design optimized for minimum bandpass shift with temperature, a thermal analysis of substrate materials for maximum temperature uniformity, and a thermal radiation analysis to determine the parameter sensitivity of the radiation shield for the filter, the filter thermal recovery time after occultation, and the heater power required to maintain filter performance in the earth-staring geosynchronous environment.

  17. Example-based automatic generation of image filters and classifiers based on image-value pairs

    NASA Astrophysics Data System (ADS)

    Doi, Munehiro; Dobashi, Yoshinori; Tamori, Hideaki; Yamamoto, Tsuyoshi

    2015-03-01

    We propose a novel method for the automatic generation of spatial image filter sequences based on Genetic Programming (GP). In this method, the filter sequences consist of filters that process image-value pairs. This idea allows the filter sequences to contain not only image processing operations but also numerical operations. We also exploit the popular method of multi-objective optimization to generate robust filter sequences. We demonstrate the generation of a background-elimination filter from pictures of flowers, and also the generation of image classification filters.

  18. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy, because the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of the measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
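One widely used way to impose an inequality constraint on a Kalman estimate, in the spirit of the constrained filtering discussed above (the paper's specific tuning scheme is not reproduced here), is estimate projection: when the unconstrained estimate violates D x <= d, project it back onto the constraint boundary, weighting the projection by the covariance. A minimal sketch with one illustrative linear constraint:

```python
import numpy as np

def project_onto_constraint(x, P, D, d):
    """Project estimate x onto the half-space D @ x <= d (single-row D),
    using the covariance P as the projection weight (estimate projection)."""
    viol = float(D @ x - d)
    if viol <= 0.0:
        return x                      # constraint inactive: keep the optimal estimate
    S = float(D @ P @ D.T)            # scalar for a single constraint row
    return x - (P @ D.T).ravel() * (viol / S)

P = np.array([[2.0, 0.5], [0.5, 1.0]])
D = np.array([[1.0, 0.0]])            # constraint: first state component <= 3
x_c = project_onto_constraint(np.array([4.0, 1.0]), P, D, 3.0)
```

Note that the covariance weighting also shifts the second, unconstrained component, because the two states are correlated through P.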

  19. Performance analysis of α-β-γ tracking filters using position and velocity measurements

    NASA Astrophysics Data System (ADS)

    Saho, Kenshi; Masugi, Masao

    2015-12-01

    This paper examines the performance of two position-velocity-measured (PVM) α-β-γ tracking filters. The first estimates the target acceleration using the measured velocity, and the second, which is proposed for the first time in this paper, estimates acceleration using the measured position. To quantify the performance of these PVM α-β-γ filters, we analytically derive steady-state errors that assume that the target is moving with constant acceleration or jerk. With these performance indices, the optimal gains of the PVM α-β-γ filters are determined using a minimum-variance filter criterion. The performance of each filter under these optimal gains is then analyzed and compared. Numerical analyses clarify the performance of the PVM α-β-γ filters and verify that their accuracy is better than that of the general position-only-measured α-β-γ filter, even when the variance in velocity measurement noise is comparatively large. We identify the conditions under which the proposed PVM α-β-γ filter outperforms the general α-β-γ filter for different ratios of noise variance in the velocity and position measurements. Finally, numerical simulations verify the effectiveness of the PVM α-β-γ filters for a realistic maneuvering target.
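For reference, the general position-only-measured α-β-γ filter that the paper uses as a baseline follows the standard predict/correct recursion. A minimal sketch; the gains below come from the standard critically damped (g-h-k) family parameterized by theta and are illustrative, not the paper's optimal gains:

```python
def abg_step(x, v, a, z, dt, alpha, beta, gamma):
    """One cycle of a position-only-measured alpha-beta-gamma tracker."""
    # Predict with constant-acceleration kinematics.
    x_p = x + v * dt + 0.5 * a * dt * dt
    v_p = v + a * dt
    r = z - x_p                                  # position residual
    # Correct position, velocity and acceleration with fixed gains.
    return (x_p + alpha * r,
            v_p + (beta / dt) * r,
            a + (2.0 * gamma / dt ** 2) * r)

# Illustrative critically damped gains from a single parameter theta.
theta = 0.6
alpha = 1.0 - theta ** 3
beta = 1.5 * (1.0 - theta ** 2) * (1.0 - theta)
gamma = 0.5 * (1.0 - theta) ** 3

# Track a noiseless constant-acceleration target (a_true = 2.0).
dt, a_true = 0.1, 2.0
x, v, a = 0.0, 0.0, 0.0
for n in range(1, 501):
    t = n * dt
    x, v, a = abg_step(x, v, a, 0.5 * a_true * t * t, dt, alpha, beta, gamma)
```

Because the filter's internal model matches a constant-acceleration target, the steady-state error vanishes here; the paper's derivations quantify the residual error when the target has constant jerk instead.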

  20. Filter vapor trap

    DOEpatents

    Guon, Jerold

    1976-04-13

    A sintered filter trap is adapted for insertion in a gas stream of sodium vapor to condense and deposit sodium thereon. The filter is heated and operated above the melting temperature of sodium, resulting in a more efficient means to remove sodium particulates from the effluent inert gas emanating from the surface of a liquid sodium pool. Preferably the filter leaves are precoated with a natrophobic coating such as tetracosane.

  1. Practical alarm filtering

    SciTech Connect

    Bray, M.; Corsberg, D.

    1994-02-01

    An expert system-based alarm filtering method is described which prioritizes and reduces the number of alarms facing an operator. This patented alarm filtering methodology was originally developed and implemented in a pressurized water reactor, and subsequently in a chemical processing facility. Both applications were in LISP and both were successful. In the chemical processing facility, for instance, alarm filtering reduced the quantity of alarm messages by 90%. 6 figs.

  2. Hybrid Filter Membrane

    NASA Technical Reports Server (NTRS)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with a discrete particle size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter has a thin design intended to facilitate filter regeneration by localized air pulsing. The main novelty of this invention is the combination of a micro-engineered straight-pore membrane with nanofibers. The micro-engineered straight-pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles. 
Additionally, the thin nanofiber coating is designed to promote capture of dust particles on the filter surface and to facilitate dust removal with pulse or back airflow.

  3. Nanofiber Filters Eliminate Contaminants

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With support from Phase I and II SBIR funding from Johnson Space Center, Argonide Corporation of Sanford, Florida tested and developed its proprietary nanofiber water filter media. Capable of removing more than 99.99 percent of dangerous particles like bacteria, viruses, and parasites, the media was incorporated into the company's commercial NanoCeram water filter, an inductee into the Space Foundation's Space Technology Hall of Fame. In addition to its drinking water filters, Argonide now produces large-scale nanofiber filters used as part of the reverse osmosis process for industrial water purification.

  4. Independent task Fourier filters

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    2001-11-01

    Since the early 1960s, a major part of optical computing systems has been Fourier pattern recognition, which takes advantage of high speed filter changes to enable powerful nonlinear discrimination in `real time.' Because each filter has a task quite independent of the tasks of the other filters, the filters can be applied and evaluated in parallel or, in a simple approach I describe, in sequence very rapidly. Thus I use the name ITFF (independent task Fourier filter). These filters can also break very complex discrimination tasks into easily handled parts, so the wonderful space invariance properties of Fourier filtering need not be sacrificed to achieve high discrimination and good generalizability even for ultracomplex discrimination problems. The training procedure proceeds sequentially, as the task for a given filter is defined a posteriori by declaring it to be the discrimination of particular members of set A from all members of set B with sufficient margin. That is, we set the threshold to achieve the desired margin and note the A members discriminated by that threshold. Discriminating those A members from all members of B becomes the task of that filter. Those A members are then removed from the set A, so no other filter will be asked to perform that already accomplished task.

  5. Birefringent filter design

    NASA Technical Reports Server (NTRS)

    Bair, Clayton H. (Inventor)

    1991-01-01

    A birefringent filter is provided for tuning the wavelength of a broad band emission laser. The filter comprises thin plates of a birefringent material having thicknesses which are non-unity, integral multiples of the difference between the thicknesses of the two thinnest plates. The resulting wavelength selectivity is substantially equivalent to the wavelength selectivity of a conventional filter which has a thinnest plate having a thickness equal to this thickness difference. The present invention obtains an acceptable tuning of the wavelength while avoiding a decrease in optical quality associated with conventional filters wherein the respective plate thicknesses are integral multiples of the thinnest plate.
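
    A small worked example of the thickness arithmetic described above (all numeric values are assumed, not taken from the patent): every plate is a non-unity integer multiple of `t_eff`, the difference between the two thinnest plates, so `t_eff` sets the wavelength selectivity without any physically thin, optically fragile plate.

```python
# Worked example of the plate-thickness scheme (numbers assumed).
t_eff = 0.5                        # desired effective thickness in mm (assumed)
multiples = (2, 3, 5, 8)           # non-unity integer multiples (assumed)
plates = sorted(m * t_eff for m in multiples)   # plate thicknesses in mm

# The two thinnest physical plates differ by exactly t_eff, which acts as
# the "effective" thinnest plate of an equivalent conventional filter.
d1, d2 = plates[0], plates[1]
difference = d2 - d1               # equals t_eff
```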

  6. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase-linear filter for soliton suppression takes the form of a laddered series of non-commensurate low-pass filter stages, each having a series-coupled inductance (L) and, to ground, a reverse-biased, voltage-dependent varactor diode that acts as a variable capacitance (C). The L and C values are set to levels that correspond to a conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line. 2 figs.
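
    Two textbook relations underlie such an LC ladder stage: the reverse-biased junction capacitance of a varactor and the periodic-ladder (Bragg) cutoff frequency. The sketch below uses these standard formulas only; the component values, and the abrupt-junction parameters `c0`, `phi`, and `gamma`, are assumptions and are not taken from the patent.

```python
import math

def varactor_c(v_reverse, c0=10e-12, phi=0.7, gamma=0.5):
    """Reverse-biased junction capacitance, C = C0 / (1 + V/phi)^gamma
    (abrupt-junction model; all parameters assumed for illustration)."""
    return c0 / (1.0 + v_reverse / phi) ** gamma

def cutoff_hz(L, C):
    """Periodic LC-ladder (Bragg) cutoff frequency, f_c = 1 / (pi*sqrt(L*C))."""
    return 1.0 / (math.pi * math.sqrt(L * C))

C = varactor_c(5.0)            # capacitance at an assumed 5 V reverse bias
f_c = cutoff_hz(100e-9, C)     # with an assumed 100 nH series inductor
```

Because C shrinks as reverse bias grows, the stage cutoff shifts upward with signal level, which is the nonlinearity the patent's mapping compensates for.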

  7. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E. (Livermore, CA)

    1995-01-01

    A phase-linear filter for soliton suppression takes the form of a laddered series of non-commensurate low-pass filter stages, each having a series-coupled inductance (L) and, to ground, a reverse-biased, voltage-dependent varactor diode that acts as a variable capacitance (C). The L and C values are set to levels that correspond to a conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line.

  8. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers.

    PubMed

    Buyel, Johannes F; Gruchow, Hannah M; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  9. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  10. Aircraft Recirculation Filter for Air-Quality and Incident Assessment

    PubMed Central

    Eckels, Steven J.; Jones, Byron; Mann, Garrett; Mohan, Krishnan R.; Weisel, Clifford P.

    2015-01-01

    The current research examines the possibility of using recirculation filters from aircraft to document the nature of air-quality incidents on aircraft. These filters are highly effective at collecting solid and liquid particulates. Identification of engine oil contaminants arriving through the bleed air system on the filter was chosen as the initial focus. A two-step study was undertaken. First, a compressor/bleed air simulator was developed to simulate an engine oil leak, and samples were analyzed with gas chromatograph-mass spectrometry. These samples provided a concrete link between tricresyl phosphates and a homologous series of synthetic pentaerythritol esters from oil and contaminants found on the sample paper. The second step was to test 184 used aircraft filters with the same gas chromatograph-mass spectrometry system; of that total, 107 were standard filters, and 77 were nonstandard. Four of the standard filters had both markers for oil, with the homologous series of synthetic pentaerythritol esters being the less common marker. It was also found that 90% of the filters had some detectable level of tricresyl phosphates. Of the 77 nonstandard filters, 30 had both markers for oil, a significantly higher percentage than the standard filters. PMID:25641977

  11. Uneven-order decentered Shapiro filters for boundary filtering

    NASA Astrophysics Data System (ADS)

    Falissard, F.

    2015-07-01

    This paper addresses the use of Shapiro filters for boundary filtering. A new class of uneven-order decentered Shapiro filters is proposed and compared to classical Shapiro filters and even-order decentered Shapiro filters. The theoretical analysis shows that the proposed boundary filters are more accurate than the centered Shapiro filters and more robust than the even-order decentered boundary filters usable at the same distance to the boundary. The benefit of the new boundary filters is assessed for computations using the compressible Euler equations.
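
    The centered first-order Shapiro filter is the simplest member of the family discussed above. The minimal periodic-domain sketch below (the decentered boundary variants studied in the paper are not reproduced) shows its defining property: the two-grid-point wave is removed exactly while smooth waves are barely damped.

```python
import numpy as np

def shapiro1(u):
    """Centered first-order Shapiro filter (the 1-2-1 running average) on a
    periodic 1-D field. It removes the two-grid-point wave exactly and
    leaves a constant field untouched."""
    return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

x = np.arange(64)
smooth = np.sin(2.0 * np.pi * x / 16.0)   # 16-point wave: damped only slightly
noise = (-1.0) ** x                       # 2-point wave: removed exactly
filtered = shapiro1(smooth + noise)       # ~0.96 * smooth, noise gone
```

The damping factor for a wave of nondimensional wavenumber k*dx is cos²(k*dx/2), which is 0 at the grid-scale wave and close to 1 for well-resolved waves.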

  12. The Twelve Steps Experientially.

    ERIC Educational Resources Information Center

    Horne, Lianne

    Experiential activities provide each participant with the ability to see, feel, and experience whatever therapeutic issue the facilitator is addressing, and usually much more. This paper presents experiential activities to address the 12 steps of recovery adopted from Alcoholics Anonymous. These 12 steps are used worldwide for many other recovery…

  13. Steps in Test Construction.

    ERIC Educational Resources Information Center

    Tanguma, Jesus

    This paper addresses four steps in test construction specification: (1) the purpose of the test; (2) the content of the test; (3) the format of the test; and (4) the pool of items. If followed, such steps not only will assist the test constructor but will also enhance the students' learning. Within the "Content of the Test" section, two examples…

  14. Filter holder and gasket assembly for candle or tube filters

    DOEpatents

    Lippert, T.E.; Alvin, M.A.; Bruck, G.J.; Smeltzer, E.E.

    1999-03-02

    A filter holder and gasket assembly are disclosed for holding a candle filter element within a hot gas cleanup system pressure vessel. The filter holder and gasket assembly includes a filter housing, an annular spacer ring securely attached within the filter housing, a gasket sock, a top gasket, a middle gasket and a cast nut. 9 figs.

  15. Filter holder and gasket assembly for candle or tube filters

    DOEpatents

    Lippert, Thomas Edwin (Murrysville, PA); Alvin, Mary Anne (Pittsburgh, PA); Bruck, Gerald Joseph (Murrysville, PA); Smeltzer, Eugene E. (Export, PA)

    1999-03-02

    A filter holder and gasket assembly for holding a candle filter element within a hot gas cleanup system pressure vessel. The filter holder and gasket assembly includes a filter housing, an annular spacer ring securely attached within the filter housing, a gasket sock, a top gasket, a middle gasket and a cast nut.

  16. STEP Experiment Requirements

    NASA Technical Reports Server (NTRS)

    Brumfield, M. L. (Compiler)

    1984-01-01

    A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concept and requirements. A feasibility and preliminary definition study was conducted and the preliminary definition of STEP capabilities and experiment concepts and expected requirements for support services are presented. The preliminary definition of STEP capabilities based on detailed review of potential experiment requirements is investigated. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.

  17. Fouling of ceramic filters and thin-film composite reverse osmosis membranes by inorganic and bacteriological constituents

    SciTech Connect

    Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.

    1991-01-01

    Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters, ~0.2 μm nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al³⁺) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.

  18. Fouling of ceramic filters and thin-film composite reverse osmosis membranes by inorganic and bacteriological constituents

    SciTech Connect

    Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.

    1991-12-31

    Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters, ~0.2 μm nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al³⁺) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.

  19. Dynamic interaction of separate INS/GPS Kalman filters (Filter-driving - Filter dynamics)

    NASA Astrophysics Data System (ADS)

    Cunningham, Joseph R.; Lewantowicz, Zdzislaw H.

    This paper examines the basic behavior of the inertial navigation system (INS) errors under high-dynamic conditions, such as combat maneuvering of a fighter aircraft. Examination of the INS linearized error dynamics eigenvalue migrations during various dynamic maneuvers reveals significantly stronger instability than the classical vertical channel instability. The Global Positioning System (GPS) offers high accuracy navigation performance benefits with global coverage, but it has limitations during the dynamic maneuvering of fighter aircraft. The optimal integration of the GPS with an INS promises synergistic system performance benefits not realizable with either system individually. A candidate maneuver is selected for which a covariance analysis is performed to demonstrate the performance characteristics of an optimally integrated INS/GPS system. Significant insight into the potential instability of a particular INS/GPS integration scheme is presented. The characteristic and unstable behavior of the INS error dynamics eigenfunctions provides the basis for the 'filter-driving-filter' performance analysis. Several phenomena were observed which should aid future efforts attempting to characterize the performance of various INS/GPS integration methods.
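
    The "classical vertical channel instability" mentioned above has a simple textbook form that can serve as a baseline (this is a stand-in for the paper's full error-dynamics analysis, not a reproduction of it): the unaided altitude error obeys dh'' = (2g/R)*dh, giving one positive eigenvalue and hence divergence without external aiding such as GPS.

```python
import numpy as np

# Textbook vertical-channel error model: state is [altitude error, its rate].
g, R = 9.81, 6.371e6                 # gravity (m/s^2), Earth radius (m)
F = np.array([[0.0, 1.0],
              [2.0 * g / R, 0.0]])   # dh'' = (2g/R) * dh
eigs = np.linalg.eigvals(F)          # +/- sqrt(2g/R), ~ +/- 1.75e-3 1/s
```

The positive eigenvalue (time constant of roughly ten minutes) is why even a benign, non-maneuvering INS needs altitude aiding; the paper's point is that maneuvering makes the broader error dynamics still less stable.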

  20. Federated filter for multiplatform track fusion

    NASA Astrophysics Data System (ADS)

    Carlson, Neal A.

    1999-10-01

    The federated filter is a near globally optimal distributed estimation method based on rigorous information-sharing principles. It is applied here to multi-platform target tracking systems where platform-level target tracks are fused across platforms into global tracks. Global track accuracy is enhanced by the geometric diversity of measurements from different platforms, in addition to the greater number of measurements. On each platform, the federated filter employs dual platform-level filters (PFs) for each track. The primary PFs are locally optimal, and contain all the information gathered from the platform track sensors. The secondary PFs are identical except that they contain only the incremental track information gained since the last fusion cycle. On each platform, global track solutions are near globally optimal because they receive only new tracklet information from the onboard and off-board PFs, and do not re-use old platform-level information. Logistically, platforms can operate autonomously with no need for synchronized operations or master/slave designations; the architecture is completely symmetric. Platforms can enter or leave the group with no changes in other global trackers. Communications bandwidth is minimal because global tracks need not be shared. The paper describes the theoretical basis of the federated fusion filter, the related data association functions, and preliminary simulation results.
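
    The information-sharing idea behind the federated filter can be illustrated with a minimal information-form fusion step, assuming the contributions are independent, which is exactly what the incremental "tracklet" mechanism described above is designed to guarantee. All numbers are synthetic; the real federated filter also manages shared process noise via information-sharing fractions, which this sketch omits.

```python
import numpy as np

def fuse(estimates):
    """Information-form fusion of independent (x, P) estimates of one state:
    total information is the sum of the local informations. A sketch of the
    principle only."""
    Y = sum(np.linalg.inv(P) for _, P in estimates)       # information matrix
    y = sum(np.linalg.inv(P) @ x for x, P in estimates)   # information state
    P_g = np.linalg.inv(Y)
    return P_g @ y, P_g

# Two synthetic platform-level tracks of the same 2-D state, each accurate
# in a different component (geometric diversity).
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.1]), np.diag([4.0, 1.0])
x_g, P_g = fuse([(x1, P1), (x2, P2)])   # fused track: tighter than either
```

Feeding only *new* information into this sum each cycle, as the secondary PFs do, is what prevents double-counting and keeps the global solution near optimal.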