Science.gov

Sample records for optimized filtering step

  1. STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2013-03-01

For bottom-up proteomics, a wide variety of database search algorithms are in use for matching peptide sequences to tandem MS spectra. Likewise, numerous strategies are employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search algorithm. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations and arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of the number of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
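
    The grid-search idea can be sketched in a few lines. The parameter names (a score cut-off and a mass-error cut-off), the decoy-based FDR criterion, and the data layout below are illustrative assumptions, not the STEPS implementation itself:

```python
import itertools

def grid_search(psms, score_cuts, ppm_cuts, fdr_cap=0.01):
    """Try every combination of filtering thresholds and keep the
    'parameter set' that maximizes accepted target identifications
    while an estimated FDR stays below a cap.

    psms: list of (score, mass_error_ppm, is_decoy) tuples.
    Returns (n_targets, score_cut, ppm_cut) for the best combination."""
    best = None
    for score_cut, ppm_cut in itertools.product(score_cuts, ppm_cuts):
        kept = [p for p in psms
                if p[0] >= score_cut and abs(p[1]) <= ppm_cut]
        if not kept:
            continue
        decoys = sum(1 for p in kept if p[2])
        fdr = decoys / len(kept)          # decoy-based FDR estimate
        n_targets = len(kept) - decoys
        if fdr <= fdr_cap and (best is None or n_targets > best[0]):
            best = (n_targets, score_cut, ppm_cut)
    return best
```

    With realistic parameter ranges the number of combinations grows multiplicatively, which is exactly the trade-off a systematic grid search accepts in exchange for not hand-tuning thresholds.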

  2. Nonlinear optimal semirecursive filtering

    NASA Astrophysics Data System (ADS)

    Daum, Frederick E.

    1996-05-01

This paper describes a new hybrid approach to filtering, in which part of the filter is recursive but another part is non-recursive. The practical utility of this notion is to reduce computational complexity. In particular, if the non-recursive part of the filter is sufficiently small, then such a filter might be cost-effective to run in real time with computer technology available now or in the future.

  3. Optimization of integrated polarization filters.

    PubMed

    Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J

    2014-10-01

    This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980

  4. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

Reliable, maintainable and cost-effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has brought the issues into focus and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse-released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts at pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. 
A plan for the Option I, bench-scale test program has also been developed based

  5. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program

  6. Optimal rate filters for biomedical point processes.

    PubMed

    McNames, James

    2005-01-01

    Rate filters are used to estimate the mean event rate of many biomedical signals that can be modeled as point processes. Historically these filters have been designed using principles from two distinct fields. Signal processing principles are used to optimize the filter's frequency response. Kernel estimation principles are typically used to optimize the asymptotic statistical properties. This paper describes a design methodology that combines these principles from both fields to optimize the frequency response subject to constraints on the filter's order, symmetry, time-domain ripple, DC gain, and minimum impulse response. Initial results suggest that time-domain ripple and a negative impulse response are necessary to design a filter with a reasonable frequency response. This suggests that some of the common assumptions about the properties of rate filters should be reconsidered. PMID:17282132
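
    As a point of reference, the kernel-estimation view mentioned above amounts to smoothing the event train with a unit-area kernel. The triangular kernel and the bandwidth choice here are illustrative, not the constrained FIR designs of the paper:

```python
def rate_estimate(event_times, t_grid, bandwidth):
    """Kernel estimate of the mean event rate of a point process:
    place a unit-area triangular kernel of half-width `bandwidth`
    on each event time and sum the kernels over the grid."""
    def tri(u):
        a = abs(u) / bandwidth
        return (1.0 - a) / bandwidth if a < 1.0 else 0.0
    return [sum(tri(t - e) for e in event_times) for t in t_grid]
```

    The paper's point is that the kernel shape is simultaneously the impulse response of an FIR filter, so its frequency response can be optimized subject to the time-domain constraints listed above.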

  7. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

This work extends the idea of spectral optimization for the design of weighted median (WM) filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows' theory. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of the filter's detail-preserving ability and give non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
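
    A minimal positive-weight WM filter helps fix ideas; the window handling and weights below are illustrative, and the paper's actual contribution (negative weights derived from adaptive FIR coefficients) is not shown:

```python
def weighted_median(window, weights):
    """Weighted median: sort the samples, then accumulate weights
    until half the total weight is reached (positive weights assumed)."""
    pairs = sorted(zip(window, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for value, w in pairs:
        acc += w
        if acc >= half:
            return value

def wm_filter(signal, weights):
    """Slide a window of len(weights) over the signal, replicating
    the edge samples, and take the weighted median in each window."""
    k = len(weights) // 2
    padded = [signal[0]] * k + list(signal) + [signal[-1]] * k
    return [weighted_median(padded[i:i + len(weights)], weights)
            for i in range(len(signal))]
```

    With unit weights this reduces to the plain median filter, which is what gives WM filters their robustness to impulsive noise.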

  8. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

This paper is concerned with the problem of allocating a unit-capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instant. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. This assumption is reasonable; a

  9. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108

  10. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters: alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average number of false positives per image.
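
    The adaptive-step idea can be sketched as follows. The quadratic test metric, the step-growth/step-shrink rules, and the starting point are illustrative stand-ins; they are not the composite peak-height / peak-to-sidelobe feedback used at JPL:

```python
def adaptive_gradient_ascent(metric, params, step=0.1, iters=50, eps=1e-3):
    """Maximize metric(alpha, beta, gamma) by finite-difference gradient
    ascent with a simple adaptive step: grow the step after an improving
    move, shrink it after a worsening one."""
    p = list(params)
    best = metric(*p)
    for _ in range(iters):
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += eps
            grad.append((metric(*q) - best) / eps)   # forward difference
        trial = [pi + step * g for pi, g in zip(p, grad)]
        val = metric(*trial)
        if val > best:
            p, best = trial, val
            step *= 1.2          # accelerate while improving
        else:
            step *= 0.5          # back off after overshooting
    return p, best
```

    The accelerate/back-off rule is what lets the search home in on a good (alpha, beta, gamma) region without a hand-chosen fixed step size.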

  11. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.
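
    For orientation, the baseline being desensitized is the ordinary Kalman filter; a scalar version fits in a few lines. The random-walk state model and the noise variances q and r are illustrative, and none of the toolkit's sensitivity-reduction or sigma-point machinery is shown:

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model.
    q and r are the process and measurement noise variances."""
    x, p, out = x0, p0, []
    for z in measurements:
        p += q                      # predict: state is a random walk
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the innovation
        p *= (1.0 - k)              # posterior variance
        out.append(x)
    return out
```

    The gain k, and hence the whole estimate, depends on the assumed q and r; the sensitivity of the estimate to such model parameters is exactly what the desensitized trajectory optimization in the toolkit targets.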

  12. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the

  13. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for the different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is confined to situations where particle impoverishment remains limited. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
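
    The resample-move recipe, in skeletal form. The scalar state, the random-walk Metropolis proposal, and the flat-posterior example are illustrative assumptions, not the hydrological model setup of the study:

```python
import math
import random

def systematic_resample(particles, weights):
    """Systematic resampling: one uniform draw, then n evenly spaced
    pointers swept through the cumulative normalized weights."""
    n = len(particles)
    total = sum(weights)
    u = random.random()
    out, cum, j = [], weights[0] / total, 0
    for i in range(n):
        pos = (u + i) / n
        while pos > cum:
            j += 1
            cum += weights[j] / total
        out.append(particles[j])
    return out

def resample_move(particles, weights, log_post, scale=0.1):
    """After resampling, apply one Metropolis move per particle to
    restore diversity (the MCMC move step described in the abstract)."""
    moved = []
    for x in systematic_resample(particles, weights):
        prop = x + random.gauss(0.0, scale)
        if random.random() < min(1.0, math.exp(log_post(prop) - log_post(x))):
            x = prop
        moved.append(x)
    return moved
```

    Resampling alone duplicates high-weight particles, which is what causes impoverishment; the move step jitters the duplicates toward the posterior without changing the distribution they target.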

  14. GNSS data filtering optimization for ionospheric observation

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.

    2015-12-01

In recent years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS receivers, able to output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are typically filtered out, to remove possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver can mimic ionospheric scintillation, because buildings, trees, etc. may create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. In particular, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. To tackle the non-ionospheric effects induced by multipath at ground level, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aimed at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained by the GNSS signal at a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy

  15. Constrained filter optimization for subsurface landmine detection

    NASA Astrophysics Data System (ADS)

    Torrione, Peter A.; Collins, Leslie; Clodfelter, Fred; Lulich, Dan; Patrikar, Ajay; Howard, Peter; Weaver, Richard; Rosen, Erik

    2006-05-01

    Previous large-scale blind tests of anti-tank landmine detection utilizing the NIITEK ground penetrating radar indicated the potential for very high anti-tank landmine detection probabilities at very low false alarm rates for algorithms based on adaptive background cancellation schemes. Recent data collections under more heterogeneous multi-layered road-scenarios seem to indicate that although adaptive solutions to background cancellation are effective, the adaptive solutions to background cancellation under different road conditions can differ significantly, and misapplication of these adaptive solutions can reduce landmine detection performance in terms of PD/FAR. In this work we present a framework for the constrained optimization of background-estimation filters that specifically seeks to optimize PD/FAR performance as measured by the area under the ROC curve between two FARs. We also consider the application of genetic algorithms to the problem of filter optimization for landmine detection. Results indicate robust results for both static and adaptive background cancellation schemes, and possible real-world advantages and disadvantages of static and adaptive approaches are discussed.
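
    The objective named here, the area under the ROC curve between two false-alarm rates, is straightforward to compute from a piecewise-linear ROC. The interpolation helper below is an illustrative sketch, not the group's optimization code; it assumes fprs is sorted and spans the requested interval:

```python
def partial_auc(fprs, tprs, far_lo, far_hi):
    """Area under a piecewise-linear ROC curve between two
    false-alarm rates, by trapezoids with linear interpolation."""
    def tpr_at(x):
        for i in range(1, len(fprs)):
            if x <= fprs[i]:
                f0, f1 = fprs[i - 1], fprs[i]
                t0, t1 = tprs[i - 1], tprs[i]
                return t0 if f1 == f0 else t0 + (t1 - t0) * (x - f0) / (f1 - f0)
        return tprs[-1]
    xs = [far_lo] + [f for f in fprs if far_lo < f < far_hi] + [far_hi]
    area = 0.0
    for a, b in zip(xs, xs[1:]):
        area += (b - a) * (tpr_at(a) + tpr_at(b)) / 2.0
    return area
```

    Restricting the integration to the operationally relevant FAR band is what distinguishes this objective from the ordinary full-curve AUC.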

  16. On optimal filtering of measured Mueller matrices

    NASA Astrophysics Data System (ADS)

    Gil, José J.

    2016-07-01

    While any two-dimensional mixed state of polarization of light can be represented by a combination of a pure state and a fully random state, any Mueller matrix can be represented by a convex combination of a pure component and three additional components whose randomness is scaled in a proper and objective way. Such characteristic decomposition constitutes the appropriate framework for the characterization of the polarimetric randomness of the system represented by a given Mueller matrix, and provides criteria for the optimal filtering of noise in experimental polarimetry.

  17. Optimal edge filters explain human blur detection.

    PubMed

    McIlhagga, William H; May, Keith A

    2012-01-01

    Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222

  18. Optimization of phononic filters via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Hussein, M. I.; El-Beltagy, M. A.

    2007-12-01

    A phononic crystal is commonly characterized by its dispersive frequency spectrum. With appropriate spatial distribution of the constituent material phases, spectral stop bands could be generated. Moreover, it is possible to control the number, the width, and the location of these bands within a frequency range of interest. This study aims at exploring the relationship between unit cell configuration and frequency spectrum characteristics. Focusing on 1D layered phononic crystals, and longitudinal wave propagation in the direction normal to the layering, the unit cell features of interest are the number of layers and the material phase and relative thickness of each layer. An evolutionary search for binary- and ternary-phase cell designs exhibiting a series of stop bands at predetermined frequencies is conducted. A specially formulated representation and set of genetic operators that break the symmetries in the problem are developed for this purpose. An array of optimal designs for a range of ratios in Young's modulus and density are obtained and the corresponding objective values (the degrees to which the resulting bands match the predetermined targets) are examined as a function of these ratios. It is shown that a rather complex filtering objective could be met with a high degree of success. Structures composed of the designed phononic crystals are excellent candidates for use in a wide range of applications including sound and vibration filtering.

  19. Metal finishing wastewater pressure filter optimization

    SciTech Connect

    Norford, S.W.; Diener, G.A.; Martin, H.L.

    1992-12-31

The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated by the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal. Of the stored waste, 85% is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.

  1. An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers

    SciTech Connect

    Gelb, Anne; Archibald, Richard K

    2015-01-01

    Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high order filtering still requires cost prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, but also take advantage of the accuracy of a high order filter.
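
    The standard, non-adaptive ingredient is an exponential filter applied to the Fourier modes of the solution. The naive DFT and the fixed parameters alpha and p below are illustrative; the paper's contribution is choosing the damping adaptively, mode by mode, rather than fixing it as here:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def exp_filter(x, alpha=36.0, p=8):
    """Damp Fourier mode j by sigma(eta) = exp(-alpha * eta**p), where
    eta is the normalized mode number: low modes pass untouched, the
    highest modes are crushed. Larger p keeps the filter flat longer,
    i.e. makes it less diffusive on the resolved modes."""
    X = dft(x)
    n = len(x)
    for j in range(n):
        eta = min(j, n - j) / (n / 2.0)   # normalized mode number in [0, 1]
        X[j] *= math.exp(-alpha * eta ** p)
    return idft(X)
```

    Tuning alpha and p per mode, based on where the solution is smooth, is what lets an adaptive filter stabilize larger time steps without smearing discontinuities.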

  2. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.

  3. Illumination system design with multi-step optimization

    NASA Astrophysics Data System (ADS)

    Magarill, Simon; Cassarly, William J.

    2015-08-01

    Automatic optimization algorithms can be used when designing illumination systems. For systems with many design variables, optimization using an adjustable set of variables at different steps of the process can provide different local minima. We present a few examples of implementing a multi-step optimization method. We have found that this approach can sometimes lead to more efficient solutions. In this paper we illustrate the effectiveness of using a commercially available optimization algorithm with a slightly modified procedure.

  4. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is the exponent applied to the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using different indicators, such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter often under- or over-filters and is rarely optimal. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration step is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.
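
    A 1-D analogue makes the mechanism concrete: each Fourier mode of the complex interferogram is re-weighted by its own magnitude raised to alpha, boosting the dominant fringe frequency relative to noise. The coherence-to-alpha rule shown is one published adaptive choice (alpha = 1 - mean coherence), not the bias-corrected hybrid estimator this paper develops, and the real filter operates on 2-D patches with a smoothed spectrum:

```python
import cmath
import math

def goldstein_1d(phase, alpha):
    """1-D analogue of the Goldstein filter: weight each Fourier mode
    of exp(j*phase) by |Z(f)|**alpha, then return the filtered phase."""
    z = [cmath.exp(1j * p) for p in phase]
    n = len(z)
    Z = [sum(z[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
         for j in range(n)]
    Z = [(abs(c) ** alpha) * c for c in Z]          # spectral re-weighting
    out = [sum(Z[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
           for k in range(n)]
    return [cmath.phase(v) for v in out]

def adaptive_alpha(coherence):
    """Coherence-driven exponent (one published adaptive rule):
    alpha = 1 - mean coherence, clamped to [0, 1], so incoherent
    (noisy) patches are filtered harder."""
    m = sum(coherence) / len(coherence)
    return max(0.0, min(1.0, 1.0 - m))
```

    A biased coherence estimate therefore translates directly into a wrong alpha, which is the failure mode the homogeneous-pixel-selection and bootstrapping corrections address.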

  5. Optimal filter bandwidth for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz in software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
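
    The R ratio the tester holds at 1.00 is the ratio of normalized pulsatile (AC/DC) amplitudes in the red and infrared channels. A toy computation assuming NumPy, with a simple moving-average low-pass standing in for the device's filter; all signal parameters are illustrative:

```python
import numpy as np

def moving_average(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

def r_ratio(red, ir, fs, fc):
    n = max(1, int(round(fs / fc)))   # MA length whose first null sits near fc
    red_f, ir_f = moving_average(red, n), moving_average(ir, n)
    def ac_dc(s):
        s = s[n:-n]                   # discard filter edge transients
        return s.max() - s.min(), s.mean()
    ac_r, dc_r = ac_dc(red_f)
    ac_i, dc_i = ac_dc(ir_f)
    return (ac_r / dc_r) / (ac_i / dc_i)

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)   # 72 bpm, fundamental only
red = 10.0 + 0.50 * pulse             # twice the modulation depth of IR,
ir = 10.0 + 0.25 * pulse              # so R should come out near 2.00
r = r_ratio(red, ir, fs, fc=2.0)
```

    Because the filter attenuates both channels identically, narrowing the bandwidth changes the absolute AC amplitudes but not their ratio, which is consistent with the paper's finding that the harmonics are dispensable.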

  6. Novel Compact Ultra-Wideband Bandpass Filter by Application of Short-Circuited Stubs and Stepped-Impedance-Resonator

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Ping; Ma, Zhewang; Anada, Tetsuo

    To realize compact ultra-wideband (UWB) bandpass filters, a novel filter prototype with two short-circuited stubs loaded at both sides of a stepped-impedance resonator (SIR) via parallel coupled lines is proposed, based on distributed filter synthesis theory. The equivalent circuit of this filter is established, and the corresponding 7-pole Chebyshev-type transfer function is derived for filter synthesis. A distributed-circuit-based technique is then presented to synthesize the element values of this filter. As an example, an FCC UWB filter with a fractional bandwidth (FBW) at -10 dB of up to 110% was designed using the proposed prototype and then re-modeled with a commercial microwave circuit simulator to verify the correctness and accuracy of the synthesis theory. Furthermore, the filter was further optimized using an EM simulator and experimentally realized in microstrip line. Good agreement between the measured and theoretical results validates the effectiveness of our technique. In addition, compared with the conventional SIR-type UWB filter without short-circuited stubs, the new filter significantly improves the selectivity and out-of-band characteristics (especially in the lower band, -45 dB at 1-2 GHz), satisfying the FCC's spectrum mask. The designed filter also exhibits very compact size, quite low insertion loss, steep skirts, flat group delay and an easily fabricated structure (the coupling gap dimension in this filter is 0.15 mm). Moreover, with the presented design technique, the proposed filter prototype can also be used to easily realize UWB filters with other FBWs, even greater than 110%.

  7. Optimal Gain Filter Design for Perceptual Acoustic Echo Suppressor

    NASA Astrophysics Data System (ADS)

    Kim, Kihyeon; Ko, Hanseok

    This Letter proposes an optimal gain filter for a perceptual acoustic echo suppressor. We design an optimally-modified log-spectral amplitude estimation algorithm for the gain filter in order to achieve robust suppression of echo and noise. A new parameter capturing information about the interferences (echo and noise) during single-talk periods is statistically analyzed, and the speech absence probability and the a posteriori SNR are then judiciously estimated to determine the optimal solution. The experiments show that the proposed gain filter attains significantly improved reduction of echo and noise with less speech distortion.
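
    For intuition, a stripped-down per-bin spectral gain can be written with the plain Wiener rule G = SNR/(1 + SNR). This is only a sketch assuming NumPy; the Letter's optimally-modified log-spectral amplitude estimator additionally weights the gain by the speech absence probability, which is omitted here:

```python
import numpy as np

def suppression_gain(noisy_psd, interference_psd):
    """Per-bin Wiener gain from a crude a priori SNR estimate obtained by
    subtracting the interference (echo + noise) floor from the noisy
    periodogram and flooring at zero."""
    snr = np.maximum(noisy_psd / interference_psd - 1.0, 0.0)
    return snr / (1.0 + snr)

# Toy spectrum: near-end speech dominates the two low bins; the upper
# bins contain only echo/noise, so their gain should collapse to zero.
noisy_psd = np.array([10.0, 8.0, 1.0, 1.0])
interference_psd = np.ones(4)
gain = suppression_gain(noisy_psd, interference_psd)
```

    Applying `gain` to the noisy spectrum suppresses the interference-only bins while leaving speech-dominated bins nearly untouched, which is the basic trade-off the optimal estimator refines.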

  8. Entropy-based optimization of wavelet spatial filters.

    PubMed

    Farina, Dario; Kamavuako, Ernest Nlandu; Wu, Jian; Naddeo, Francesco

    2008-03-01

    A new class of spatial filters for surface electromyographic (EMG) signal detection is proposed. These filters are based on the 2-D spatial wavelet decomposition of the surface EMG recorded with a grid of electrodes and inverse transformation after zeroing a subset of the transformation coefficients. The filter transfer function depends on the selected mother wavelet in the two spatial directions. Wavelet parameterization is proposed with the aim of signal-based optimization of the transfer function of the spatial filter. The optimization criterion was the minimization of the entropy of the time samples of the output signal. The optimized spatial filter is linear and space invariant. In simulated and experimental recordings, the optimized wavelet filter showed increased selectivity with respect to previously proposed filters. For example, in simulation, the ratio between the peak-to-peak amplitude of action potentials generated by motor units 20 degrees apart in the transversal direction was 8.58% (with monopolar recording), 2.47% (double differential), 2.59% (normal double differential), and 0.47% (optimized wavelet filter). In experimental recordings, the duration of the detected action potentials decreased from (mean +/- SD) 6.9 +/- 0.3 ms (monopolar recording), to 4.5 +/- 0.2 ms (normal double differential), 3.7 +/- 0.2 ms (double differential), and 3.0 +/- 0.1 ms (optimized wavelet filter). In conclusion, the new class of spatial filters with the proposed signal-based optimization of the transfer function allows better discrimination of individual motor unit activities in surface EMG recordings than was previously possible. PMID:18334382

  9. Optimization-based tuning of LPV fault detection filters for civil transport aircraft

    NASA Astrophysics Data System (ADS)

    Ossmann, D.; Varga, A.

    2013-12-01

    In this paper, a two-step optimal synthesis approach for robust fault detection (FD) filters for the model-based diagnosis of sensor faults in an augmented civil aircraft is suggested. In the first step, a direct analytic synthesis of a linear parameter-varying (LPV) FD filter is performed for the open-loop aircraft using an extension of the nullspace-based synthesis method to LPV systems. In the second step, a multiobjective optimization problem is solved for the optimal tuning of the LPV detector parameters to ensure satisfactory FD performance for the augmented nonlinear closed-loop aircraft. A worst-case global search has been employed to assess the robustness of the fault detection system in the presence of aerodynamic uncertainties and estimation errors in the aircraft parameters. An application of the proposed method is presented for the detection of failures in the angle-of-attack sensor.

  10. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
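
    The fusion step described, a Kalman filter written in information form, combines independent estimates by summing their information matrices. A minimal sketch assuming NumPy, with toy two-dimensional "field models" in place of real geomagnetic coefficient sets:

```python
import numpy as np

def fuse_information(estimates, covariances):
    """Fuse independent estimates x_i with covariances P_i by summing
    information matrices Y_i = P_i^{-1} and information vectors Y_i x_i
    (a Kalman filter in information form)."""
    Y = np.zeros_like(covariances[0])
    y = np.zeros(covariances[0].shape[0])
    for x_i, P_i in zip(estimates, covariances):
        Y_i = np.linalg.inv(P_i)
        Y += Y_i
        y += Y_i @ x_i
    P = np.linalg.inv(Y)       # fused covariance
    return P @ y, P            # fused estimate and covariance

# Two equally uncertain estimates of the same 2-vector of coefficients.
x1, P1 = np.array([1.0, 0.0]), np.eye(2) * 4.0
x2, P2 = np.array([0.0, 1.0]), np.eye(2) * 4.0
x, P = fuse_information([x1, x2], [P1, P2])
```

    With equal covariances the fused estimate is the average and the fused covariance is halved, as expected for two independent measurements.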

  11. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms the filter coefficients are only partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
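
    For reference, the fixed-step NLMS baseline that the VSS family modifies can be sketched in a system identification scenario, assuming NumPy. The channel taps, step size mu, and noise level are illustrative, and the paper's MSD-minimizing step-size vector is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2, 0.1])   # unknown channel (illustrative)
N, M = 5000, len(h_true)
x = rng.standard_normal(N)                 # white excitation
d = np.convolve(x, h_true)[:N] + 1e-3 * rng.standard_normal(N)

w = np.zeros(M)                            # adaptive filter coefficients
mu, eps = 0.5, 1e-8                        # fixed step size, regularizer
for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]           # regressor, most recent first
    e = d[n] - w @ u                       # a priori error
    w += (mu / (eps + u @ u)) * e * u      # NLMS update
```

    The VSS algorithms replace the scalar `mu` with a time-varying (and, in the SPU/SR variants, coefficient- or regressor-selective) step-size vector derived from the channel statistics.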

  12. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
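
    The passband droop and its reduction by sharpening can be checked numerically. A sketch assuming NumPy, using the traditional Kaiser-Hamming polynomial sh(H) = 3H^2 - 2H^3 on an uncompensated comb; the paper's three-addition compensator and optimization-based polynomials are not reproduced:

```python
import numpy as np

def comb_response(w, M=16):
    """Magnitude response of a length-M comb (CIC) filter, normalized to
    unity at DC: |sin(Mw/2) / (M sin(w/2))|."""
    H = np.ones_like(w)
    nz = np.sin(w / 2) != 0
    H[nz] = np.abs(np.sin(M * w[nz] / 2) / (M * np.sin(w[nz] / 2)))
    return H

# Passband region for decimation by M = 16.
w = np.linspace(1e-6, np.pi / 32, 256)
H = comb_response(w)
H_sharp = 3 * H**2 - 2 * H**3      # Kaiser-Hamming sharpening polynomial
droop = 1 - H.min()                # worst-case passband droop
droop_sharp = 1 - H_sharp.min()
```

    Because sh(H) has zero slope at H = 1, values near unity are pushed back toward unity, flattening the passband at the cost of the extra multiplies the paper's compensated scheme seeks to reduce.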

  13. Laboratory experiment of a coronagraph based on step-transmission filters

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Zhu, Yongtian; Ren, Deqing; Zhang, Xi

    2008-07-01

    This paper presents the first results of a step-transmission-filter based coronagraph in visible wavelengths. The primary goal of this work is to demonstrate the feasibility of a coronagraph that employs step-transmission filters, with a required contrast better than 10^-5 at angular distances larger than 4λ/D. Two 13-step transmission filters were manufactured with 5% transmission accuracy. The precision of the transmitted wavefront distortion and the coating surface quality were not strictly controlled at this time. Although in the ideal case the coronagraph can achieve a theoretical contrast of 10^-10, it delivers only 10^-5 contrast because of the transmission error, poor surface quality and wavefront aberration stated above, consistent with our estimates. Based on current techniques, step-transmission filters with better coating surface quality and high-precision transmission can be made. As a follow-up effort, high-quality step-transmission filters are being manufactured, which should deliver better performance. The step-transmission-filter based coronagraph has potential applications for future high-contrast direct imaging of Earth-like planets.

  14. Bayes optimal template matching for spike sorting - combining fisher discriminant analysis with optimal filtering.

    PubMed

    Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus

    2015-06-01

    Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
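
    A sketch of the matched-filter form f = C^{-1} t for the white-noise special case (C = I), assuming NumPy. The template, noise level, and the halfway threshold placement are illustrative; the paper's analytic Bayes-optimal threshold for colored noise is not derived here:

```python
import numpy as np

rng = np.random.default_rng(1)
template = np.exp(-0.5 * ((np.arange(20) - 10) / 2.0) ** 2)  # toy spike shape
L = len(template)
C = np.eye(L)                      # noise covariance (white here; colored in general)
f = np.linalg.solve(C, template)   # matched filter: f = C^{-1} t

# Noisy trace with two embedded spikes.
x = 0.2 * rng.standard_normal(500)
true_pos = [100, 300]
for p in true_pos:
    x[p:p + L] += template

# Filter output for every window position, then threshold halfway
# between the noise-only mean (0) and the spike response (t . f).
y = np.correlate(x, f, mode="valid")
threshold = 0.5 * (template @ f)
detections = np.flatnonzero(y > threshold)
```

    Detections cluster around the true spike onsets; running one such filter per template gives the linear filter bank used for online sorting.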

  15. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. The algorithm can also be extended to other Kalman filters for measurement subspace selection.
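
    As a stand-in illustration of the complexity/accuracy tradeoff (not the paper's generalized-eigenvalue algorithm), a greedy D-optimal selection can rank measurement rows by their contribution to the Fisher information, assuming NumPy:

```python
import numpy as np

def select_measurements(H, R, k):
    """Greedily pick the k measurement rows of H (noise variances R) that
    most increase log-det of the accumulated information matrix."""
    n = H.shape[1]
    info = 1e-9 * np.eye(n)            # tiny prior keeps log-det finite
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(H.shape[0]):
            if i in chosen:
                continue
            cand = info + np.outer(H[i], H[i]) / R[i]
            gain = np.linalg.slogdet(cand)[1]
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        info += np.outer(H[best], H[best]) / R[best]
    return chosen

H = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.array([1.0, 100.0, 1.0, 1.0])   # row 1 is a very noisy duplicate
picked = select_measurements(H, R, 2)
```

    The noisy duplicate row is never selected: it adds almost no information, which is exactly the kind of measurement a subspace-selection step discards.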

  16. Optimal filtering methods to structural damage estimation under ground excitation.

    PubMed

    Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

    2013-01-01

    This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

  17. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters that satisfy phase-shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g., near higher-frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum-phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parameterization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engine space launch vehicle.
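
    Parameterizing directly in the z-plane makes stability a by-product of the parameter bounds. A minimal sketch assuming NumPy: a second-order section with a conjugate pole pair at radius r < 1 and zeros pinned at Nyquist, normalized to unity DC gain (the pole location below is illustrative, not from the paper):

```python
import numpy as np

def biquad_lowpass(r, theta):
    """Second-order recursive filter parameterized directly in the
    z-plane: conjugate poles at r*exp(+/- j*theta), zeros at z = -1.
    Stability is guaranteed by construction for 0 <= r < 1."""
    b = np.array([1.0, 2.0, 1.0])                  # zeros at z = -1 (Nyquist)
    a = np.array([1.0, -2.0 * r * np.cos(theta), r**2])
    b *= np.sum(a) / np.sum(b)                     # unity gain at DC (z = 1)
    return b, a

def freq_response(b, a, w):
    zinv = np.exp(-1j * w)
    num = b[0] + b[1] * zinv + b[2] * zinv**2
    den = a[0] + a[1] * zinv + a[2] * zinv**2
    return num / den

b, a = biquad_lowpass(r=0.9, theta=0.3)
w = np.linspace(0.0, np.pi, 512)
H = np.abs(freq_response(b, a, w))
```

    A search over (r, theta) within these bounds can only produce stable candidates, which is what makes the constrained parameterization attractive for automated filter design.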

  18. Single-channel noise reduction using optimal rectangular filtering matrices.

    PubMed

    Long, Tao; Chen, Jingdong; Benesty, Jacob; Zhang, Zhenxi

    2013-02-01

    This paper studies the problem of single-channel noise reduction in the time domain and presents a block-based approach where a vector of the desired speech signal is recovered by filtering a frame of the noisy signal with a rectangular filtering matrix. With this formulation, the noise reduction problem becomes one of estimating an optimal filtering matrix. To achieve such estimation, a method is introduced to decompose a frame of the clean speech signal into two orthogonal components: One correlated and the other uncorrelated with the current desired speech vector to be estimated. Different optimization cost functions are then formulated from which non-causal optimal filtering matrices are derived. The relationships among these optimal filtering matrices are discussed. In comparison with the classical sample-based technique that uses only forward prediction, the block-based method presented in this paper exploits both the forward and backward prediction as well as the temporal interpolation and, therefore, can improve the noise reduction performance by fully taking advantage of the speech property of self correlation. There is also a side advantage of this block-based method as compared to the sample-based technique, i.e., it is computationally more efficient and, as a result, more suitable for practical implementation. PMID:23363124
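
    The rectangular filtering matrix idea can be sketched with second-order statistics estimated from data, assuming NumPy (version 1.20+ for `sliding_window_view`). The frame length, block length, and synthetic "speech-like" signal are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
L, K, N = 16, 4, 20000   # frame length, desired block length, samples

# Synthesize a correlated signal by low-pass filtering white noise.
clean = np.convolve(rng.standard_normal(N), np.ones(8) / 8, mode="same")
noisy = clean + 0.5 * rng.standard_normal(N)

# Frames: each row of Y is an L-sample noisy frame; the desired vector X
# is the first K clean samples of that frame (the rest of the frame
# supplies the forward/backward context the block approach exploits).
Y = np.lib.stride_tricks.sliding_window_view(noisy, L)        # (N-L+1, L)
X = np.lib.stride_tricks.sliding_window_view(clean, L)[:, :K]

# Rectangular MMSE (Wiener) filtering matrix: H = R_xy R_yy^{-1}, (K, L).
R_yy = Y.T @ Y / len(Y)
R_xy = X.T @ Y / len(Y)
H = R_xy @ np.linalg.inv(R_yy)
X_hat = Y @ H.T

mse_before = np.mean((Y[:, :K] - X) ** 2)
mse_after = np.mean((X_hat - X) ** 2)
```

    Because each estimated block draws on the whole frame, including samples after the desired vector, the matrix is non-causal in the sense discussed above, and the residual error drops well below the raw noise floor.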

  19. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for improved performance. Generally, in optical flow computation, filtering is applied at the initial level to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different types of filtering methods applied to the iterative refined Lucas-Kanade algorithm, we conclude on the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity values and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
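
    A minimal sketch of the pipeline studied, Gaussian pre-filtering followed by a single-window Lucas-Kanade least-squares step, assuming NumPy; the texture, sigma, and crop margin are illustrative and the pyramidal iterative refinement is omitted:

```python
import numpy as np

def gaussian_kernel(sigma):
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian pre-filtering, the step studied in the paper."""
    k = gaussian_kernel(sigma)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)

def lucas_kanade(im1, im2):
    """Single-window least-squares flow (u, v) over one small image."""
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    c = 4                          # crop borders and wrap artifacts
    A = np.stack([Ix[c:-c, c:-c].ravel(), Iy[c:-c, c:-c].ravel()], axis=1)
    rhs = -It[c:-c, c:-c].ravel()
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

# Synthetic pair: smoothed random texture shifted one pixel to the right.
rng = np.random.default_rng(3)
im1 = smooth(rng.standard_normal((64, 64)), sigma=2.0)
im2 = np.roll(im1, 1, axis=1)
u, v = lucas_kanade(im1, im2)
```

    Without the smoothing step the gradients of the raw texture violate the linearization behind the brightness-constancy equation, which is why the choice of pre-filter matters so much in practice.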

  20. Sub-Optimal Ensemble Filters and distributed hydrologic modeling: a new challenge in flood forecasting

    NASA Astrophysics Data System (ADS)

    Baroncini, F.; Castelli, F.

    2009-09-01

    Data assimilation techniques based on ensemble filtering are widely regarded as the best approach to forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, such as the ensemble Kalman filter, is unfeasible because of the large number of replicas required at each time step of the model to update the error covariance matrix. The sub-optimal approach is therefore often a more suitable choice. Various sub-optimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to assess the relative efficiency of the different sub-optimal techniques. These aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables and high parametric uncertainty. This research study focuses on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both a water mass balance and a surface energy balance, so it is able to assimilate a wide variety of datasets, from traditional hydrometric on-ground measurements to land surface temperature retrievals from satellite. The experiments that we present concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low-spatial-resolution version of the hydrologic model (3.2 km). A new Kalman-filter-based algorithm is presented: this filter tries to address the main challenges of hydrological modeling uncertainty. In its forecast step, the proposed filter uses a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach, propagating both deterministic and stochastic ensembles to improve robustness and convergence

  1. Optimization of the development process for air sampling filter standards

    NASA Astrophysics Data System (ADS)

    Mena, RaJah Marie

    Air monitoring is an important analysis technique in health physics. However, creating standards that can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood; this includes understanding how the location of activity within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters which most affect uncertainty in an air filter standard and to optimize these parameters so that calibrations made with the standards most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to best approximate uniform deposition of material across the filter. Sample sets were created varying the radionuclide, the amount of activity (high activity at 6.4-306 Bq/filter and low activity at 0.05-6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors and ensure this is a viable procedure, especially for alpha contaminants.

  2. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These, so-called, Faraday anomalous dispersion optical filters (FADOFs) can be by far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  4. Optimal Correlation Filters for Images with Signal-Dependent Noise

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Walkup, John F.

    1994-01-01

    We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
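
    Approach (2) can be illustrated for speckle, which is multiplicative: a log transform renders the noise approximately signal-independent, after which a classical matched filter applies. A toy demonstration assuming NumPy, with unit-mean gamma-distributed speckle (a common model; the parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
signal = np.linspace(1.0, 10.0, 10000)
# Multiplicative speckle: unit-mean gamma noise scales with the signal.
speckle = signal * rng.gamma(shape=10.0, scale=0.1, size=signal.size)

lo, hi = signal < 4.0, signal > 7.0

# Before the transform, the noise std grows with the signal level...
res = speckle - signal
ratio_before = res[hi].std() / res[lo].std()

# ...after a log transform it is approximately signal-independent.
log_res = np.log(speckle) - np.log(signal)
ratio_after = log_res[hi].std() / log_res[lo].std()
```

    The residual spread in bright regions shrinks from several times that of dark regions to near parity, which is what lets a filter matched to the transformed signal perform well.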

  5. Optimization of narrow optical spectral filters for nonparallel monochromatic radiation.

    PubMed

    Linder, S L

    1967-07-01

    This paper delineates a method of determining the design criteria for narrow optical passband filters used in the reception of nonparallel modulated monochromatic radiation. The analysis results in straightforward mathematical expressions for calculating the filter width and design center wavelength which maximize the signal-to-noise ratio. Two cases are considered: (a) the filter is designed to have a maximum transmission (for normal incidence) at the incident wavelength, but with the spectral width optimized, and (b) both the design wavelength and the spectral width are optimized. It is shown that the voltage signal-to-noise ratio for case (b) is 2^(1/2) times that of case (a). Numerical examples are calculated. PMID:20062163

  6. OPDIC (Optimized Peak, Distortion and Clutter) Detection Filter.

    NASA Astrophysics Data System (ADS)

    House, Gregory Philip

    1995-01-01

    Detection is considered. This involves determining regions of interest (ROIs) in a scene: the locations of multiple object classes in a scene in clutter when object distortions and contrast differences are present. A high probability of detection P_D is essential and a low P_FA is desirable, since subsequent stages in the full system can only decrease P_FA and cannot increase P_D. Low-resolution blob objects and objects with more internal detail are considered, with both 3-D aspect view and depression angle distortions present. Extensive tests were conducted on 56 scenes with object classes not present in the training set. A modified MINACE (Minimum Noise and Correlation Energy) distortion-invariant filter was used. This minimizes correlation plane energy due to distortions and clutter while satisfying correlation peak constraint values for various object-aspect views. The filter was modified with a new object model (to give predictable output peak values) and a new correlated-noise clutter model; a white Gaussian noise model of distortion was used; and new techniques were developed to increase the number of training set images (N_T) included in the filter. Excellent results were obtained. However, the correlation plane distortion and clutter energy functions were found to worsen as N_T was increased, and no rigorous method exists to select the best N_T (when to stop filter synthesis). A new OPDIC (Optimized Peak, Distortion, and Clutter) filter was thus devised. This filter retains the new object, clutter and distortion models noted above. It minimizes the variance of the correlation peak values for all training set images (not just the N_T images). As N_T increases, the peak variance and the objective functions (correlation plane distortion and clutter energy) are all minimized. Thus, this new filter optimizes the desired functions and provides an easy way to stop filter synthesis (when the objective function is minimized). Tests show

  7. Improved step-by-step chromaticity compensation method for chromatic sextupole optimization

    NASA Astrophysics Data System (ADS)

    Gang-Wen, Liu; Zheng-He, Bai; Qi-Ka, Jia; Wei-Min, Li; Lin, Wang

    2016-05-01

    The step-by-step chromaticity compensation method for chromatic sextupole optimization and dynamic aperture increase was proposed by E. Levichev and P. Piminov (2006). Although this method can be used to enlarge the dynamic aperture of a storage ring, it has some drawbacks. In this paper, we combine this method with evolutionary computation algorithms and propose an improved version of it. In the improved method, the drawbacks are avoided, and thus better optimization results can be obtained. Supported by National Natural Science Foundation of China (11175182, 11175180)

  8. Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

    NASA Astrophysics Data System (ADS)

    Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

    In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement based on hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
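
    The search idea can be illustrated with a toy sketch: candidate hybrid smoothers are short sequences of primitive kernels, scored against an objective and searched. This is a hedged stand-in, not the paper's method: an exhaustive scan over length-2 sequences replaces ABC/PSO/ACO, the kernel set and the unsharp-mask `enhance` step are illustrative, and the reference-based MSE objective is an assumption.

```python
import numpy as np
from itertools import product

# Candidate primitive smoothing filters (1-D separable kernels applied 2-D).
KERNELS = {
    "mean3": np.ones(3) / 3,
    "mean5": np.ones(5) / 5,
    "gauss3": np.array([1.0, 2.0, 1.0]) / 4,
    "gauss5": np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16,
}

def smooth(img, kernel):
    """Separable 2-D smoothing with edge padding."""
    p = len(kernel) // 2
    out = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, p, mode="edge"), kernel, "valid"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, p, mode="edge"), kernel, "valid"), 0, out)

def enhance(img, seq):
    """Unsharp-mask edge enhancement using the hybrid smoother `seq`."""
    s = img
    for name in seq:
        s = smooth(s, KERNELS[name])
    return img + 1.5 * (img - s)

def score(enhanced, reference):
    """Fitness: closeness to a clean reference edge image (in the paper,
    this role is played by the swarm's objective function)."""
    return -np.mean((enhanced - reference) ** 2)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0        # ideal vertical edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
# Exhaustive scan over length-2 sequences stands in for the swarm search.
best = max(product(KERNELS, repeat=2),
           key=lambda seq: score(enhance(noisy, seq), clean))
```

    A real implementation would replace the exhaustive scan with the swarm update rules and score candidates on held-out images.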

  9. Degeneracy, frequency response and filtering in IMRT optimization

    NASA Astrophysics Data System (ADS)

    Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus

    2004-07-01

    This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, the effects of initial beamlet fluence assignment and stopping point, what filtering of an optimized beamlet map actually does, and how image analysis could help to obtain better optimizations. Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. A simple phantom is used to show the development of the analysis tools, and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.

  10. Optimal color image restoration: Wiener filter and quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, we consider the model of quaternion signal degradation in which the signal is convolved with a kernel and an additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, which is optimal with respect to the mean square error. The frequency characteristic of this filter can be found by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative. The quaternion Fourier transform does not map the convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with an additive independent noise, and the optimal filtering of signals and images in the frequency domain and in the quaternion space.

  11. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
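
    The core operation, re-weighting visibilities by their rate of temporal variation, can be sketched as an FFT along time followed by a fringe-rate mask. The cadence, source fringe rates, and boxcar `keep` window below are illustrative assumptions, not the paper's optimal kernels.

```python
import numpy as np

def fringe_rate_filter(vis, dt, keep):
    """Re-weight a visibility time series in the fringe-rate (temporal
    frequency) domain: FFT along time, apply keep(rates), invert. This
    is equivalent to convolving with a time-domain kernel."""
    rates = np.fft.fftfreq(len(vis), d=dt)     # fringe rates in Hz
    return np.fft.ifft(np.fft.fft(vis) * keep(rates))

# Two sources crossing the fringe pattern at different rates; keep only
# the slow one (e.g., emission from a chosen part of the beam).
t = np.arange(1000.0)                          # 1000 s at 1 s cadence
slow = np.exp(2j * np.pi * 2e-3 * t)           # 2 mHz fringe rate
fast = 0.7 * np.exp(2j * np.pi * 2e-2 * t)     # 20 mHz fringe rate
vis = slow + fast
out = fringe_rate_filter(vis, 1.0, lambda f: np.abs(f) < 1e-2)
```

    A smooth weighting function in place of the boxcar would correspond to a tapered time-domain convolution kernel, which is closer to the paper's beam-sculpting use.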

  12. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
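
    For a scalar linear-Gaussian step, the optimal proposal has a closed form and its effect on weight variance can be sketched directly; the parameter values below are illustrative, and the one-step comparison is a toy, not a full filter.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.9, 1.0, 0.5      # dynamics coefficient, process var, obs var
n = 5000                     # particles
x = rng.standard_normal(n)   # particles at time t_k
y = 1.5                      # observation at t_{k+1}

# Bootstrap proposal: sample the transition, weight by the likelihood.
xb = a * x + np.sqrt(q) * rng.standard_normal(n)
wb = np.exp(-0.5 * (y - xb) ** 2 / r)
wb /= wb.sum()

# Optimal proposal p(x_{k+1} | x_k, y): Gaussian with
#   var = (1/q + 1/r)^(-1),  mean = var * (a*x/q + y/r);
# its weight depends only on x_k, via p(y | x_k) = N(y; a*x, q + r).
s2 = 1.0 / (1.0 / q + 1.0 / r)
xo = s2 * (a * x / q + y / r) + np.sqrt(s2) * rng.standard_normal(n)
wo = np.exp(-0.5 * (y - a * x) ** 2 / (q + r))
wo /= wo.sum()

# Effective sample sizes: the optimal proposal yields flatter weights.
ess_boot, ess_opt = 1.0 / np.sum(wb ** 2), 1.0 / np.sum(wo ** 2)
```

    The flatter weights of the optimal proposal delay, but (per the abstract) do not eliminate, degeneracy as the effective dimension grows.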

  13. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack the versatility of hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data. PMID:26849867

  14. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
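
    The two-step procedure (linear least squares for echo amplitudes, organized scan of the nonlinear objective over candidate delays) can be sketched for a synthetic two-echo channel. The band, delay grid, and echo parameters below are illustrative assumptions, not the paper's measured configuration.

```python
import numpy as np
from itertools import combinations

def steering(freqs, delays):
    """Matrix of exp(-j 2π f τ) columns, one per candidate delay."""
    return np.exp(-2j * np.pi * np.outer(freqs, delays))

def fit_echoes(freqs, data, tau_grid, n_echoes=2):
    """Two-step estimator: for each candidate delay set, solve the
    overdetermined least-squares problem for the complex amplitudes,
    then keep the delay set minimizing the residual (the grid scan
    of the nonlinear objective)."""
    best = (np.inf, None, None)
    for taus in combinations(tau_grid, n_echoes):
        A = steering(freqs, taus)
        amps = np.linalg.lstsq(A, data, rcond=None)[0]
        resid = np.linalg.norm(data - A @ amps)
        if resid < best[0]:
            best = (resid, np.array(taus), amps)
    return best[1], best[2]

# Synthetic frequency-stepped measurement with two closely spaced echoes.
freqs = np.linspace(2e9, 4e9, 64)        # 64 frequency steps over 2 GHz
true_tau = np.array([10e-9, 11e-9])      # 10 ns and 11 ns delays
true_amp = np.array([1.0, 0.6])
data = steering(freqs, true_tau) @ true_amp
tau_hat, amp_hat = fit_echoes(freqs, data, np.arange(8e-9, 14e-9, 0.25e-9))
```

    The exhaustive pair scan is tractable here because the grid is small; the paper's organized scan addresses the same global-minimum search more efficiently.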

  16. System-level optimization of baseband filters for communication applications

    NASA Astrophysics Data System (ADS)

    Delgado-Restituto, Manuel; Fernandez-Bootello, Juan F.; Rodriguez-Vazquez, Angel

    2003-04-01

    In this paper, we present a design approach for the high-level synthesis of programmable continuous-time Gm-C and active-RC filters with an optimum trade-off among dynamic range, distortion product generation, area consumption and power dissipation, thus meeting the needs of more demanding baseband filter realizations. Further, the proposed technique guarantees that, under all programming configurations, transconductors (in Gm-C filters) and resistors (in active-RC filters), as well as capacitors, are related by integer ratios in order to reduce the sensitivity to mismatch of the monolithic implementation. To solve the aforementioned trade-off, the filter must be properly scaled at each configuration. This means that filter node impedances must be altered so that the noise contribution of each node to the filter output is as low as possible, while preventing the peak amplitudes at such nodes from becoming high enough to drive active circuits into saturation. Additionally, in order not to degrade the distortion performance of the filter (in particular, if it is implemented using Gm-C techniques), node impedances cannot be scaled independently of each other; restrictions must be imposed according to the principle of nonlinear cancellation. Altogether, the high-level synthesis can be seen as a constrained optimization problem where some of the variables, namely the ratios among similar components, are restricted to discrete values. The proposed approach to accomplish optimum filter scaling under all programming configurations relies on matrix methods for network representation, which allow an easy estimation of performance features such as dynamic range and power dissipation, as well as other network properties such as sensitivity to parameter variations and non-ideal effects of integrator blocks; and the use of a simulated annealing algorithm to explore the design space defined by the transfer and group delay specifications. It must be noted that such

  17. A high-contrast imaging polarimeter with a stepped-transmission filter based coronagraph

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Chao; Ren, De-Qing; Zhu, Yong-Tian; Dou, Jiang-Pei; Guo, Jing

    2016-05-01

    The light reflected from planets is polarized, mainly due to Rayleigh scattering, while starlight is normally unpolarized. This provides an approach to enhance the imaging contrast through imaging polarimetry. In this paper, we propose a high-contrast imaging polarimeter that is optimized for the direct imaging of exoplanets, combined with our recently developed stepped-transmission filter based coronagraph. Here we present the design and calibration method of the polarimetry system and the associated test of its high-contrast performance. In this polarimetry system, two liquid crystal variable retarders (LCVRs) act as a polarization modulator, which can extract the polarized signal. We show that our polarimeter can achieve a measurement accuracy of about 0.2% at a visible wavelength (632.8 nm) with linearly polarized light. Finally, the whole system demonstrates that a contrast of 10^-9 at 5λ/D is achievable, which can be used for direct imaging of Jupiter-like planets with a space telescope.

  18. Laboratory experiment of a high-contrast imaging coronagraph with new step-transmission filters

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi

    2009-08-01

    We present the latest results of our laboratory experiment on the coronagraph with step-transmission filters. The primary goal of this work is to test the stability of the coronagraph and identify the main factors that limit its performance. At present, a series of step-transmission filters has been designed. These filters were manufactured with Cr film on a glass substrate with a high surface quality. During the experiments with each filter, we identified several contrast-limiting factors, which include the non-symmetry of the coating film, transmission error, scattered light and the optical aberration caused by the thickness difference of the coating film. To eliminate these factors, we developed a procedure for the correct test of the coronagraph, which finally delivered a contrast on the order of 10^-6 to 10^-7 at an angular distance of 4λ/D, well consistent with the theoretical design. As a follow-up effort, a deformable mirror has been manufactured to correct the wave-front error of the optical system, which should deliver better performance with an extra contrast improvement on the order of 10^-2 to 10^-3. It is shown that the step-transmission filter based coronagraph is promising for high-contrast imaging of earth-like planets.

  19. A Neural Network-Based Optimal Spatial Filter Design Method for Motor Imagery Classification

    PubMed Central

    Yuksel, Ayhan; Olmez, Tamer

    2015-01-01

    In this study, a novel spatial filter design method is introduced. Spatial filtering is an important processing step for feature extraction in motor imagery-based brain-computer interfaces. This paper introduces a new motor imagery signal classification method combined with spatial filter optimization. We simultaneously train the spatial filter and the classifier using a neural network approach. The proposed spatial filter network (SFN) is composed of two layers: a spatial filtering layer and a classifier layer. These two layers are linked to each other with non-linear mapping functions. The proposed method addresses two shortcomings of the common spatial patterns (CSP) algorithm. First, CSP aims to maximize the between-classes variance while ignoring the minimization of within-classes variances. Consequently, the features obtained using the CSP method may have large within-classes variances. Second, the maximizing optimization function of CSP increases the classification accuracy indirectly because an independent classifier is used after the CSP method. With SFN, we aimed to maximize the between-classes variance while minimizing within-classes variances and simultaneously optimizing the spatial filter and the classifier. To classify motor imagery EEG signals, we modified the well-known feed-forward structure and derived forward and backward equations that correspond to the proposed structure. We tested our algorithm on simple toy data. Then, we compared the SFN with conventional CSP and its multi-class version, called one-versus-rest CSP, on two data sets from BCI competition III. The evaluation results demonstrate that SFN is a good alternative for classifying motor imagery EEG signals with increased classification accuracy. PMID:25933101
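
    A minimal sketch of the two-layer idea: a spatial-filter layer producing a log-variance feature feeds a logistic classifier layer, and both are trained jointly. This assumes toy 4-channel data and uses finite-difference gradients in place of the paper's derived forward/backward equations; the single-filter, single-feature setup is a deliberate simplification of the SFN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class, 4-channel trials: class 1 has extra variance along a hidden
# direction, mimicking a motor-imagery band-power difference.
def make_trials(n, boost):
    d = np.array([1.0, -1.0, 0.5, 0.0]); d /= np.linalg.norm(d)
    return [rng.standard_normal((4, 100)) +
            boost * np.outer(d, rng.standard_normal(100)) for _ in range(n)]

trials = make_trials(30, 0.0) + make_trials(30, 2.0)
labels = np.array([0] * 30 + [1] * 30)

def forward(params, X):
    w, v, b = params[:4], params[4], params[5]
    f = np.log(np.var(w @ X))                    # spatial filtering layer
    return 1.0 / (1.0 + np.exp(-(v * f + b)))    # classifier layer

def loss(params):
    p = np.clip([forward(params, X) for X in trials], 1e-9, 1 - 1e-9)
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

params = rng.standard_normal(6) * 0.1
loss0 = loss(params)
# Joint training of filter and classifier; finite-difference gradients keep
# the sketch dependency-free (the paper derives analytic backprop updates).
for _ in range(300):
    grad = np.array([(loss(params + 1e-5 * e) - loss(params - 1e-5 * e)) / 2e-5
                     for e in np.eye(6)])
    params -= 0.2 * grad

acc = np.mean((np.array([forward(params, X) for X in trials]) > 0.5) == labels)
```

    Training the filter through the classification loss is what distinguishes this from CSP, where the spatial filter is fixed before the classifier is fit.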

  1. An optimization-based parallel particle filter for multitarget tracking

    NASA Astrophysics Data System (ADS)

    Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.

    2005-09-01

    Particle filter based estimation is becoming more popular because it has the capability to effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, typically an augmented state vector of target dynamics is used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is a possibility for achieving real-time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for the bus-connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication intensive parallel implementation of the particle filter without sacrificing tracking accuracy using an efficient load balancing technique, in which optimal particle migration is ensured. In this paper, we present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved using parallelization.

  2. Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2008-01-01

    The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained including documentation available, potential partner collaborations, and next steps in FY09.

  3. Novel two-step filtering scheme for a logging-while-drilling system

    NASA Astrophysics Data System (ADS)

    Zhao, Qingjie; Zhang, Baojun; Hu, Huosheng

    2009-09-01

    A logging-while-drilling (LWD) system is usually deployed in the oil drilling process in order to provide real-time monitoring of the position and orientation of a hole. Encoded signals including the data coming from down-hole sensors are inevitably contaminated during their collection and transmission to the surface. Before decoding the signals into different physical parameters, the noise should be filtered out to guarantee that correct parameter values can be acquired. In this paper, according to the characteristics of LWD signals, we propose a novel two-step filtering scheme in which a dynamic part mean filtering algorithm is proposed to separate the direct current components and a windowed finite impulse response (FIR) algorithm is deployed to filter out the high-frequency noise. The scheme has been integrated into the surface processing software and the whole LWD system for horizontal well drilling. Some experimental results are presented to show the feasibility and good performance of the proposed two-step filtering scheme.
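
    The two-step scheme can be sketched as a sliding-mean baseline remover (standing in for the paper's dynamic part mean filter) followed by a windowed-sinc FIR low-pass. The window lengths, cutoff, and synthetic telemetry-like signal are illustrative assumptions.

```python
import numpy as np

def remove_baseline(signal, window=50):
    """Step 1: subtract a sliding-window mean to strip the slowly
    varying direct-current component."""
    kernel = np.ones(window) / window
    pad = np.pad(signal, (window // 2, window - window // 2 - 1), mode="edge")
    return signal - np.convolve(pad, kernel, mode="valid")

def fir_lowpass(signal, cutoff, numtaps=101):
    """Step 2: windowed-sinc FIR low-pass (Hamming window) to suppress
    high-frequency noise. `cutoff` is in cycles/sample (0..0.5)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(numtaps)
    h /= h.sum()
    pad = np.pad(signal, (numtaps // 2, numtaps // 2), mode="edge")
    return np.convolve(pad, h, mode="valid")

# Telemetry-like test signal: low-rate pulses + DC drift + HF noise.
t = np.arange(2000)
pulses = np.sign(np.sin(2 * np.pi * t / 200))   # encoded low-rate signal
drift = 0.002 * t                               # slow DC drift
noise = 0.5 * np.sin(2 * np.pi * 0.4 * t)       # high-frequency tone
raw = pulses + drift + noise
clean = fir_lowpass(remove_baseline(raw, window=400), cutoff=0.02)
```

    Edge padding keeps both stages length-preserving, so the decoder downstream sees samples aligned with the raw stream.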

  4. "The Design of a Compact, Wide Spurious-Suppression Bandwidth Bandpass Filter Using Stepped Impedance Resonators"

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter length is greater than a multiple of the quarter guided wavelength at the center passband frequency (λg/4), the proposed filter length is less than (n + 1)·λg/8 for an nth-order filter. The filter's spurious response and physical dimensions are controlled by the step impedance ratio (R) between the two transmission-line sections of a λg/4 resonator. The experimental result shows that, with an R of 0.2, the out-of-band attenuation is greater than 40 dB and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.

  5. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.

  6. Quantum demolition filtering and optimal control of unstable systems.

    PubMed

    Belavkin, V P

    2012-11-28

    A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given for both open-loop and feedback control schemes, corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216

  7. Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.

    PubMed

    McMinn, Brian R

    2013-11-01

    Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. PMID:23796954

  8. A geometric method for optimal design of color filter arrays.

    PubMed

    Hao, Pengwei; Li, Yan; Lin, Zhouchen; Dubois, Eric

    2011-03-01

    A color filter array (CFA) used in a digital camera is a mosaic of spectrally selective filters, which allows only one color component to be sensed at each pixel. The missing two components of each pixel have to be estimated by methods known as demosaicking. The demosaicking algorithm and the CFA design are crucial for the quality of the output images. In this paper, we present a CFA design methodology in the frequency domain. The frequency structure, which is shown to be just the symbolic DFT of the CFA pattern (one period of the CFA), is introduced to represent images sampled with any rectangular CFAs in the frequency domain. Based on the frequency structure, the CFA design involves the solution of a constrained optimization problem that aims at minimizing the demosaicking error. To decrease the number of parameters and speed up the parameter searching, the optimization problem is reformulated as the selection of geometric points on the boundary of a convex polygon or the surface of a convex polyhedron. Using our methodology, several new CFA patterns are found, which outperform the currently commercialized and published ones. Experiments demonstrate the effectiveness of our CFA design methodology and the superiority of our new CFA patterns. PMID:20858581
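
    The paper's "frequency structure" (the symbolic DFT of one CFA period) can be checked numerically for the familiar Bayer pattern, used here as an example CFA rather than one of the paper's new designs.

```python
import numpy as np

# Bayer CFA: one 2x2 period of the pattern, as indicator arrays per channel.
#   G R
#   B G
pattern = {
    "R": np.array([[0, 1], [0, 0]], float),
    "G": np.array([[1, 0], [0, 1]], float),
    "B": np.array([[0, 0], [1, 0]], float),
}

# The frequency structure of a CFA is the (symbolic) 2-D DFT of one period:
# entry (u, v) gives the R/G/B mixture modulated onto spatial frequency
# (u/2, v/2) cycles per pixel.
freq_structure = {c: np.fft.fft2(m) / m.size for c, m in pattern.items()}
# Baseband (0,0): the luma-like mixture (R + 2G + B)/4.
baseband = {c: freq_structure[c][0, 0].real for c in "RGB"}
# Corner (1,1): chroma carrier (2G - R - B)/4; the side frequencies (0,1)
# and (1,0) carry (B - R)-type chroma with no G component at all.
corner = {c: freq_structure[c][1, 1].real for c in "RGB"}
```

    Minimizing demosaicking error then amounts to choosing a pattern whose chroma carriers sit as far as possible from the baseband luma energy, which is the geometric selection problem the abstract describes.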

  9. On one-step worst-case optimal trisection in univariate bi-objective Lipschitz optimization

    NASA Astrophysics Data System (ADS)

    Žilinskas, Antanas; Gimbutienė, Gražina

    2016-06-01

    The bi-objective Lipschitz optimization with univariate objectives is considered. The concept of the tolerance of the lower Lipschitz bound over an interval is generalized to arbitrary subintervals of the search region. The one-step worst-case optimality of trisecting an interval with respect to the resulting tolerance is established. The theoretical investigation supports the previous usage of trisection in other algorithms. The trisection-based algorithm is introduced. Some numerical examples illustrating the performance of the algorithm are provided.
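For intuition, here is a single-objective sketch of the underlying machinery (not the paper's bi-objective tolerance): for a function with Lipschitz constant L, each interval carries the lower bound (f(a)+f(b))/2 - L(b-a)/2, and the interval with the smallest bound is repeatedly trisected:

```python
import math

def lower_bound(a, b, fa, fb, L):
    """Lower Lipschitz bound of f over [a, b] given the endpoint values."""
    return 0.5 * (fa + fb) - 0.5 * L * (b - a)

def trisect_minimize(f, a, b, L, n_iter=100):
    intervals = [(a, b, f(a), f(b))]
    best = min(intervals[0][2], intervals[0][3])
    for _ in range(n_iter):
        # refine the interval with the smallest (most promising) lower bound
        iv = min(intervals, key=lambda t: lower_bound(t[0], t[1], t[2], t[3], L))
        intervals.remove(iv)
        a0, b0, fa, fb = iv
        x1, x2 = a0 + (b0 - a0) / 3.0, a0 + 2.0 * (b0 - a0) / 3.0
        f1, f2 = f(x1), f(x2)
        best = min(best, f1, f2)
        intervals += [(a0, x1, fa, f1), (x1, x2, f1, f2), (x2, b0, f2, fb)]
    return best

# min of sin on [0, 2*pi] is -1 at x = 3*pi/2
val = trisect_minimize(math.sin, 0.0, 2.0 * math.pi, L=1.0)
```

Any interval containing the global minimizer always has a lower bound at or below the global minimum, so the branch-and-bound loop keeps refining it and converges.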

  10. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and that the most sensitive parameters were those related to biofilm, OHOs, and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934

  11. Effects of Rate-Limiting Steps in Transcription Initiation on Genetic Filter Motifs

    PubMed Central

    Häkkinen, Antti; Tran, Huy; Yli-Harja, Olli; Ribeiro, Andre S.

    2013-01-01

    The behavior of genetic motifs is determined not only by the gene-gene interactions, but also by the expression patterns of the constituent genes. Live single-molecule measurements have provided evidence that transcription initiation is a sequential process, whose kinetics plays a key role in the dynamics of mRNA and protein numbers. The extent to which it affects the behavior of cellular motifs is unknown. Here, we examine how the kinetics of transcription initiation affects the behavior of motifs performing filtering in amplitude and frequency domain. We find that the performance of each filter is degraded as transcript levels are lowered. This effect can be reduced by having a transcription process with more steps. In addition, we show that the kinetics of the stepwise transcription initiation process affects features such as filter cutoffs. These results constitute an assessment of the range of behaviors of genetic motifs as a function of the kinetics of transcription initiation, and thus will aid in tuning of synthetic motifs to attain specific characteristics without affecting their protein products. PMID:23940576

  12. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690

  13. Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter

    NASA Astrophysics Data System (ADS)

    Ito, A.

    2014-12-01

    Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because of poor model constraint by observational data. Development of advanced methodologies for data-model fusion, or data assimilation, is therefore an important task for reducing these uncertainties and improving model predictability. In this study, I apply the particle filter (or sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is a data-assimilation method in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). It is computationally intensive but applicable to nonlinear systems, which is an advantage of the method in comparison with other techniques such as the ensemble Kalman filter and variational methods. At several sites, I used flux measurement data of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this approach would also be effective for assimilating satellite observational data. In the non-sequential case, annual or long-term mean budgets were adjusted to observations; this approach would also be effective for assimilating carbon stock data. Although technical issues remain (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.
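The sequential mechanics can be sketched with a toy linear observation model (this stands in for the VISIT model; the parameter, noise level, and jitter settings are all illustrative): particles are candidate parameter values, reweighted by the likelihood of each new observation and then resampled.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, inputs, n_particles=2000, noise_sd=0.5, jitter=0.01):
    """SIR particle filter estimating a static parameter theta of y = theta * x."""
    theta = rng.uniform(0.0, 5.0, n_particles)          # prior over the parameter
    for y, x in zip(obs, inputs):
        pred = theta * x
        w = np.exp(-0.5 * ((y - pred) / noise_sd) ** 2)  # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample by weight
        theta = theta[idx] + rng.normal(0, jitter, n_particles)  # rejuvenate
    return theta.mean()

true_theta = 2.3
x = rng.uniform(0.5, 2.0, 200)
y = true_theta * x + rng.normal(0, 0.5, 200)
est = particle_filter(y, x)
```

The rejuvenation jitter is the usual fix for particle impoverishment when the "state" being filtered is a static parameter.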

  14. Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates

    NASA Astrophysics Data System (ADS)

    Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.

    2015-12-01

    Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum-variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized at the optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N, and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.

  15. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
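The TOPSIS selection step can be sketched in a few lines for a two-objective, both-minimized front (the equal weights are an assumed decision-maker preference, and the toy front values are illustrative, not the paper's FBG results):

```python
import numpy as np

def topsis(front, weights=(0.5, 0.5)):
    """Rank Pareto points (all objectives minimized) by closeness to the ideal."""
    F = np.asarray(front, dtype=float)
    V = F / np.linalg.norm(F, axis=0) * np.asarray(weights)  # normalize, weight
    ideal, anti = V.min(axis=0), V.max(axis=0)               # best / worst per objective
    d_pos = np.linalg.norm(V - ideal, axis=1)                # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)                 # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)
    return int(np.argmax(closeness))

# Toy Pareto front: (max index modulation, mean dispersion error)
front = [(1.0, 9.0), (2.0, 4.0), (3.0, 3.5), (8.0, 1.0)]
best = topsis(front)
```

The extreme points on the front score poorly on one objective each, so TOPSIS picks an interior compromise solution.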

  16. Optimization of the performances of correlation filters by pre-processing the input plane

    NASA Astrophysics Data System (ADS)

    Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.

    2016-01-01

    We report findings on the optimization of the performance of correlation filters. First, we propose and validate an optimization of ROC curves adapted to the correlation technique. Our analysis then suggests that pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face-recognition applications. Our results demonstrate that this method is remarkably efficient at increasing the performance of a VanderLugt correlator.

  17. A Triple-band Bandpass Filter using Tri-section Step-impedance and Capacitively Loaded Step-impedance Resonators for GSM, WiMAX, and WLAN systems

    NASA Astrophysics Data System (ADS)

    Chomtong, P.; Akkaraekthalin, P.

    2014-05-01

    This paper presents a triple-band bandpass filter for applications in GSM, WiMAX, and WLAN systems. The proposed filter comprises tri-section step-impedance and capacitively loaded step-impedance resonators, which are combined using the cross-coupling technique. Additionally, tapered lines are used at both ports of the filter in order to enhance matching at the tri-band resonant frequencies. The filter operates at resonant frequencies of 1.8 GHz, 3.7 GHz, and 5.5 GHz. At these frequencies, the measured values of S11 are -17.2 dB, -33.6 dB, and -17.9 dB, while the measured values of S21 are -2.23 dB, -2.98 dB, and -3.31 dB, respectively. Moreover, the presented filter is compact compared with conventional open-loop cross-coupling triple-band bandpass filters.

  18. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    SciTech Connect

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  19. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    DOE PAGESBeta

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  20. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul

    2015-03-01

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  1. Design of SLM-constrained MACE filters using simulated annealing optimization

    NASA Astrophysics Data System (ADS)

    Khan, Ajmal; Rajan, P. Karivaratha

    1993-10-01

    Among the available filters for pattern recognition, the MACE filter produces the sharpest peak with very small sidelobes. However, when these filters are implemented using practical spatial light modulators (SLMs), the implementation is no longer optimal because of the constrained nature of the amplitude and phase modulation characteristics of the SLM. The resulting filter response does not produce high accuracy in the recognition of the test images. In this paper, this deterioration in response is overcome by designing constrained MACE filters such that the filter is allowed to take only those phase-amplitude combinations that can be implemented on a specified SLM. The design is carried out using a simulated annealing optimization technique. The algorithm developed and the results of computer simulations of the designed filters are presented.
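The annealing loop for such a constrained design can be sketched generically: each filter element may take only values realizable by the device (here an assumed discrete set), and we minimize squared error to an ideal unconstrained filter. The MACE correlation-energy objective would replace `cost` in a faithful implementation; everything below is a toy stand-in.

```python
import math
import random

random.seed(1)

ALLOWED = [-1.0, -0.5, 0.0, 0.5, 1.0]            # assumed SLM-realizable values
ideal = [0.82, -0.33, 0.10, -0.94, 0.47, 0.61]   # toy ideal filter values

def cost(h):
    """Squared error between the constrained filter and the ideal one."""
    return sum((a - b) ** 2 for a, b in zip(h, ideal))

def anneal(n_steps=5000, t0=1.0, alpha=0.999):
    h = [random.choice(ALLOWED) for _ in ideal]
    c, t = cost(h), t0
    for _ in range(n_steps):
        i = random.randrange(len(h))
        cand = h.copy()
        cand[i] = random.choice(ALLOWED)          # perturb one element
        dc = cost(cand) - c
        # accept improvements always; accept worsening moves with Boltzmann prob.
        if dc < 0 or random.random() < math.exp(-dc / t):
            h, c = cand, c + dc
        t *= alpha                                # geometric cooling schedule
    return h, c

h_opt, c_opt = anneal()
```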

  2. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. Unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is suited to cases where accuracy is more essential than convergence speed.

  3. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  4. Using a scale selective tendency filter and forward-backward time stepping to calculate consistent semi-Lagrangian trajectories

    NASA Astrophysics Data System (ADS)

    Alerskans, Emy; Kaas, Eigil

    2016-04-01

    In semi-Lagrangian models used for climate and NWP, the trajectories are normally determined kinematically. Here we propose a new method for calculating trajectories in a more dynamically consistent way by pre-integrating the governing equations in a pseudo-Lagrangian manner using a short time step. Only non-advective adiabatic terms are included in this calculation, i.e., the Coriolis and pressure-gradient forces plus gravity in the momentum equations, and the divergence term in the continuity equation. This integration is performed with a forward-backward time step. Optionally, the tendencies are filtered with a local spatial filter, which reduces the phase speed of short-wave gravity and sound waves. The filter relaxes the time-step limitation related to high-frequency oscillations without compromising the locality of the solution, and can be considered an alternative to less local or global semi-implicit solvers. Once trajectories are estimated over a complete long advective time step, the full set of governing equations is stepped forward using these trajectories in combination with a flux-form semi-Lagrangian formulation of the equations. The methodology is designed to improve consistency and scalability on massively parallel systems, although here it has only been verified that the technique produces realistic results in a shallow water model and a 2D model based on the full Euler equations.

  5. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software package used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, which measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution from using a constant shaping time over the entire energy range is small when the ballistic deficit correction method is used.
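The trapezoidal shaping idea can be sketched as a difference of two moving averages (rise time k samples, flat top m samples); the ballistic-deficit correction and the pole-zero handling of a real HPGe chain are omitted here:

```python
import numpy as np

def trapezoidal_shaper(x, k, m):
    """Trapezoidal shaping: a length-k moving average minus the same average
    delayed by k + m samples. A step input becomes a trapezoid with rise
    time k and flat top m, whose peak equals the step height."""
    c = np.cumsum(np.concatenate([[0.0], x]))
    ma = (c[k:] - c[:-k]) / k      # running mean of the last k samples
    d = k + m
    return ma[d:] - ma[:-d]

# ideal noiseless step of height 3 (stand-in for a preamplifier edge)
x = np.zeros(200)
x[50:] = 3.0
y = trapezoidal_shaper(x, k=10, m=5)
# the trapezoid peak recovers the step height, i.e. the deposited energy
```

In practice the flat top is what gives tolerance to charge-collection-time variations; the ballistic-deficit correction above addresses the residual loss when collection is slow.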

  6. Optimization of FIR Digital Filters Using a Real Parameter Parallel Genetic Algorithm and Implementations.

    NASA Astrophysics Data System (ADS)

    Xu, Dexiang

    This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.

  7. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

    This paper uses stochastic dominance principles to construct upper and lower sample-path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower- and upper-bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say, R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
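For reference, the exact O(X^2) HMM filter recursion that the low-rank construction bounds looks as follows (a toy two-state chain; the nuclear-norm machinery itself is not shown):

```python
import numpy as np

def hmm_filter(pi0, P, B, obs):
    """Standard HMM forward filter: predict with transition matrix P,
    correct with observation likelihoods B[:, y], then normalize."""
    p = np.asarray(pi0, dtype=float)
    for y in obs:
        p = (p @ P) * B[:, y]   # predict, then correct
        p /= p.sum()            # normalize to a distribution
    return p

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # transition matrix
B = np.array([[0.8, 0.2],       # state 0 mostly emits symbol 0
              [0.3, 0.7]])      # state 1 mostly emits symbol 1
post = hmm_filter([0.5, 0.5], P, B, obs=[0, 0, 0])
```

The `p @ P` step is the O(X^2) bottleneck; replacing P with a rank-R factorization is what reduces it to O(XR) in the paper's bounds.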

  8. Algorithmic and architectural optimizations for computationally efficient particle filtering.

    PubMed

    Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama

    2008-05-01

    In this paper, we analyze the computational challenges in implementing particle filtering, especially for video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378

  9. Comparison of older adults' steps per day using NL-1000 pedometer and two GT3X+ accelerometer filters.

    PubMed

    Barreira, Tiago V; Brouillette, Robert M; Foil, Heather C; Keller, Jeffrey N; Tudor-Locke, Catrine

    2013-10-01

    The purpose of this study was to compare the steps/d derived from the ActiGraph GT3X+ using the manufacturer's default filter (DF) and low-frequency-extension filter (LFX) with those from the NL-1000 pedometer in an older adult sample. Fifteen older adults (61-82 yr) wore a GT3X+ (24 hr/day) and an NL-1000 (waking hours) for 7 d. Day was the unit of analysis (n = 86 valid days) comparing (a) GT3X+ DF and NL-1000 steps/d and (b) GT3X+ LFX and NL-1000 steps/d. DF was highly correlated with NL-1000 (r = .80), but there was a significant mean difference (-769 steps/d). LFX and NL-1000 were highly correlated (r = .90), but there also was a significant mean difference (8,140 steps/d). Percent difference and absolute percent difference between DF and NL-1000 were -7.4% and 16.0%, respectively, and for LFX and NL-1000 both were 121.9%. Regardless of filter used, GT3X+ did not provide comparable pedometer estimates of steps/d in this older adult sample. PMID:23170752

  10. Implementation and optimization of an improved morphological filtering algorithm for speckle removal based on DSPs

    NASA Astrophysics Data System (ADS)

    Liu, Qitao; Li, Yingchun; Sun, Huayan; Zhao, Yanzhong

    2008-03-01

    Laser active imaging systems offer high resolution, anti-jamming capability, and three-dimensional (3-D) imaging, and have been widely used. Their imagery, however, is usually affected by speckle noise, which makes the grayscale of pixels change violently, hides subtle details, and greatly degrades the imaging resolution. Removing speckle noise is one of the most difficult problems encountered in such systems because of the poor statistical properties of speckle. Based on an analysis of the statistical characteristics of speckle and of morphological filtering, an improved multistage morphological filtering algorithm is studied in this paper and implemented on a TMS320C6416 DSP. The algorithm applies morphological open-close and close-open transformations using two different linear structuring elements, and then takes a weighted average of the transformation results, with the weighting coefficients decided by the statistical characteristics of the speckle. The algorithm was implemented on the TMS320C6416 DSP after simulation on a computer, and the software design procedure is fully presented. Methods for realizing and optimizing the algorithm are illustrated with reference to the structural characteristics of the TMS320C6416 DSP. To fully benefit from such devices and increase the performance of the whole system, it is necessary to take a series of steps to optimize the DSP programs. This paper introduces some effective methods for TMS320C6x C-language optimization, including refining code structure, eliminating memory dependence, and optimizing assembly code via linear assembly, and then offers the results of their application in a real-time implementation. The results of processing images blurred by speckle noise show that the algorithm can not only effectively suppress speckle noise but also preserve the geometrical features of images. Results of the optimized code running on the DSP platform are also presented.
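The open-close / close-open averaging scheme can be sketched with SciPy's grey-morphology routines (the 0.5/0.5 weights and the horizontal/vertical 3-pixel elements are illustrative stand-ins for the paper's speckle-derived weights and structuring elements):

```python
import numpy as np
from scipy import ndimage

def open_close(img, se):
    return ndimage.grey_closing(ndimage.grey_opening(img, footprint=se),
                                footprint=se)

def close_open(img, se):
    return ndimage.grey_opening(ndimage.grey_closing(img, footprint=se),
                                footprint=se)

def despeckle(img, w=0.5):
    """Weighted average of open-close and close-open results, each taken
    with two linear structuring elements."""
    se_h = np.ones((1, 3), bool)   # horizontal linear structuring element
    se_v = np.ones((3, 1), bool)   # vertical linear structuring element
    oc = 0.5 * (open_close(img, se_h) + open_close(img, se_v))
    co = 0.5 * (close_open(img, se_h) + close_open(img, se_v))
    return w * oc + (1 - w) * co

img = np.full((16, 16), 100.0)
img[8, 8] = 255.0                  # isolated bright speckle spike
out = despeckle(img)               # spike is removed, background preserved
```

Open-close suppresses bright impulses and close-open suppresses dark ones; averaging the two reduces the bias each introduces on its own.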

  11. Bio-desulfurization of biogas using acidic biotrickling filter with dissolved oxygen in step feed recirculation.

    PubMed

    Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu

    2015-03-01

    Triple-stage and single-stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention times (EBRT, 100-180 s) and liquid recirculation velocities (q, 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q, but the highest q of 7.1 m/h forced a large amount of liquid through the media, reducing bed porosity in the S-BTF and lowering H2S removal. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 gH2S/m(3)/h. In the subsequent continuous operation test, T-BTF maintained higher H2S elimination capacity and removal efficiency (175.6±41.6 gH2S/m(3)/h and 89.0±6.8%) than S-BTF (159.9±42.8 gH2S/m(3)/h and 80.1±10.2%). Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031

  12. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect it while later passes bypass it. The filter position may be adjusted offline in order to tune the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  13. Optimized filtering of regional and teleseismic seismograms: results of maximizing SNR measurements from the wavelet transform and filter banks

    SciTech Connect

    Leach, R.R.; Schultz, C.; Dowla, F.

    1997-07-15

Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas which have not been well studied or may have few available recordings. Development and testing of detection and discrimination algorithms requires a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratio (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands vary widely as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content differs from that of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals, or scales. Each scale represents a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR are used to indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required. The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking and bearing estimation.
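The band-selection step lends itself to a short sketch. Assuming the constant-Q scales have already been computed (the wavelet decomposition itself is not shown), this hypothetical Python helper keeps the bands whose signal-window RMS power exceeds the pre-event noise RMS by an SNR threshold and returns the pass-band limits of the optimized filter. The function names and the 6 dB threshold are illustrative, not from the paper:

```python
import math

def rms(samples):
    """Root-mean-square power of a window of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def select_bands(scales, noise_slice, signal_slice, snr_db_min=6.0):
    """scales: dict mapping (f_low, f_high) -> band-passed samples.

    Returns (f_low, f_high) limits spanned by the high-SNR scales,
    or None if no scale clears the threshold.
    """
    kept = []
    for band, samples in scales.items():
        noise = rms(samples[noise_slice])
        signal = rms(samples[signal_slice])
        snr_db = 20.0 * math.log10(signal / noise) if noise > 0 else float("inf")
        if snr_db >= snr_db_min:
            kept.append(band)
    if not kept:
        return None
    # Pass-band limits of the optimized filter span the retained scales.
    return min(f for f, _ in kept), max(f for _, f in kept)
```

The retained band limits could then drive an ordinary band-pass filter applied to the raw seismogram.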

  14. An optimal numerical filter for wide-field-of-view measurements of earth-emitted radiation

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; House, F. B.

    1981-01-01

    A technique is described in which all data points along an arc of the orbit may be used in an optimal numerical filter for wide-field-of-view measurements of earth emitted radiation. The statistical filter design is derived whereby the filter is required to give a minimum variance estimate of the radiative exitance at discrete points along the ground track of the satellite. An equation for the optimal numerical filter is given by minimizing the estimate error variance equation with respect to the filter weights, resulting in a discrete form of the Wiener-Hopf equation. Finally, variances of the errors in the radiant exitance can be computed along the ground track and in the cross track directions.
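The minimization described above reduces to solving the discrete Wiener-Hopf normal equations R w = p for the filter weights, where R is the measurement autocovariance matrix and p the cross-covariance between the measurements and the quantity being estimated. A minimal sketch under those assumptions (plain Gaussian elimination, adequate for small systems; names are illustrative):

```python
def solve_wiener(R, p):
    """Solve R w = p for the minimum-variance filter weights."""
    n = len(p)
    # Augmented matrix; Gaussian elimination with partial pivoting.
    A = [row[:] + [p[i]] for i, row in enumerate(R)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    # Back substitution.
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w
```

For example, with R = [[2, 1], [1, 2]] and p = [1, 1], the optimal weights come out equal by symmetry.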

  15. Particle filter with one-step randomly delayed measurements and unknown latency probability

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Huang, Yulong; Li, Ning; Zhao, Lin

    2016-01-01

In this paper, a new particle filter is proposed to solve the nonlinear and non-Gaussian filtering problem when measurements are randomly delayed by one sampling time and the latency probability of the delay is unknown. In the proposed method, particles and their weights are updated in the Bayesian filtering framework by considering the randomly delayed measurement model, and the latency probability is identified by the maximum likelihood criterion. The superior performance of the proposed particle filter as compared with existing methods, and the effectiveness of the proposed identification method for the latency probability, are both illustrated in two numerical examples concerning a univariate non-stationary growth model and bearings-only tracking.
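One way to picture the delayed-measurement update is as a mixture likelihood: with latency probability p, each particle is scored against both its current and its previous state hypothesis. The sketch below illustrates that idea only; it is not the authors' exact algorithm, and the Gaussian measurement model and function names are assumptions:

```python
import math

def gauss_pdf(z, mean, var):
    """Scalar Gaussian likelihood of measurement z."""
    return math.exp(-(z - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update_weights(particles_now, particles_prev, weights, z, p_delay, meas_var):
    """Weight update mixing current and one-step-delayed hypotheses."""
    new_w = []
    for x_k, x_km1, w in zip(particles_now, particles_prev, weights):
        lik = (1 - p_delay) * gauss_pdf(z, x_k, meas_var) \
              + p_delay * gauss_pdf(z, x_km1, meas_var)
        new_w.append(w * lik)
    total = sum(new_w)
    return [w / total for w in new_w]
```

With p_delay = 0 this collapses to the standard bootstrap-filter weight update; the paper's contribution is estimating p_delay itself by maximum likelihood.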

  16. Optimization of continuous tube motion and step-and-shoot motion in digital breast tomosynthesis systems with patient motion

    NASA Astrophysics Data System (ADS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2012-03-01

In digital breast tomosynthesis (DBT), a reconstruction of the breast is generated from projections acquired over a limited range of x-ray tube angles. There are two principal schemes for acquiring projections, continuous tube motion and step-and-shoot motion. Although continuous tube motion has the benefit of reducing patient motion by lowering scan time, it has the drawback of introducing blurring artifacts due to focal spot motion. The purpose of this work is to determine the optimal scan time which best balances this trade-off. To this end, the filtered backprojection reconstruction of a sinusoidal input is calculated. At various frequencies, the optimal scan time is determined by the value which maximizes the modulation of the reconstruction. Although prior authors have studied the dependency of the modulation on focal spot motion, this work is unique in also modeling patient motion. It is shown that because continuous tube motion and patient motion have competing influences on whether scan time should be long or short, the modulation is maximized by an intermediate scan time. This optimal scan time decreases with object velocity and increases with exposure time. To optimize step-and-shoot motion, we calculate the scan time for which the modulation attains the maximum value achievable in a comparable system with continuous tube motion. This scan time provides a threshold below which the benefits of step-and-shoot motion are justified. In conclusion, this work optimizes scan time in DBT systems with patient motion and either continuous tube motion or step-and-shoot motion by maximizing the modulation of the reconstruction.
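The competing-influences argument can be illustrated with a toy model (not the authors' physics): a modulation gain from slower tube motion, (1 - exp(-T/tau)), multiplied by a loss from patient motion, exp(-v*T). The product peaks at an intermediate scan time T, mirroring the paper's conclusion; tau and v are purely illustrative parameters:

```python
import math

def modulation(T, tau=1.0, v=0.5):
    """Toy modulation: focal-spot term rises with T, patient-motion term decays."""
    return (1.0 - math.exp(-T / tau)) * math.exp(-v * T)

def optimal_scan_time(t_grid, tau=1.0, v=0.5):
    """Pick the grid point maximizing the toy modulation."""
    return max(t_grid, key=lambda T: modulation(T, tau, v))
```

For tau = 1 and v = 0.5 the analytic optimum is T = ln 3 ≈ 1.10, an interior point rather than either extreme.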

  17. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    SciTech Connect

    Sun, W Y

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  18. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but the filters designed and fabricated to date have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here an optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have a bandwidth narrower than that of a filter designed by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filter-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

  19. Design of an optimal-weighted MACE filter realizable with arbitrary SLM constraints

    NASA Astrophysics Data System (ADS)

    Ge, Jin; Rajan, P. Karivaratha

    1996-03-01

A realizable optimal weighted minimum average correlation energy (MACE) filter with arbitrary spatial light modulator (SLM) constraints is presented. The MACE filter can be considered as the cascade of two separate stages. The first stage is the prewhitener, which essentially converts colored noise to white noise. The second stage is the conventional synthetic discriminant function (SDF), which is optimal for white noise but uses training vectors subjected to the prewhitening transformation. The energy spectrum matrix is therefore very important for filter design. The new weight function we introduce adjusts the correlation energy to improve the performance of the MACE filter on current SLMs. The action of the weight function is to emphasize the signal energy at some frequencies and de-emphasize it at others so as to improve the correlation-plane structure. The choice of weight function, which is used to enhance noise tolerance and reduce sidelobes, is related to a priori pattern recognition knowledge. An algorithm which combines an iterative optimization technique with Juday's minimum Euclidean distance (MED) method is developed for the design of the realizable optimal weighted MACE filter. The performance of the designed filter is evaluated with numerical experiments.

  20. On the application of optimal wavelet filter banks for ECG signal classification

    NASA Astrophysics Data System (ADS)

    Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.

    2014-03-01

This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.

  1. Biologic efficacy optimization-a step towards personalized medicine.

    PubMed

    Kiely, Patrick D W

    2016-05-01

The following is a review of the factors that influence the outcomes of biologic agents in the treatment of adult RA and that, when synthesized into the clinical decision-making process, enhance optimization. Adiposity can exacerbate inflammatory diseases; patients with high BMI have worse outcomes from RA treatments, including TNF inhibitors (TNFis), whereas the efficacy of abatacept and tocilizumab is unaffected. Smoking adversely affects TNFi outcomes but has less or no effect on the efficacy of rituximab and tocilizumab, and the effect on abatacept is unknown. Patients who are positive for ACPA and RF have better efficacy with rituximab and abatacept than those who are seronegative, whereas the influence of serotype is less significant for tocilizumab and more complex for TNFis. All biologics seem to do better when co-prescribed with MTX, whereas in monotherapy, tocilizumab is superior to adalimumab and prescription of a non-MTX DMARD has advantages over no DMARD for rituximab and adalimumab. Monitoring of TNFi drug levels is an exciting new field, correlating closely with efficacy in RA and PsA, and is influenced by BMI, adherence, co-prescribed DMARDs and anti-drug antibodies. The measurement of trough levels provides a potential tool for patients who are not doing well to determine early whether to switch within the TNFi class (if levels are low) or to a biologic with an alternative mode of action (if levels are normal or high). Conversely, the finding of supratherapeutic levels has the potential to enable individual patient selection for dose reduction without the risk of flare. PMID:26424837

  2. Empirical Determination of Optimal Parameters for Sodium Double-Edge Magneto-Optic Filters

    NASA Astrophysics Data System (ADS)

    Barry, Ian F.; Huang, Wentao; Smith, John A.; Chu, Xinzhao

    2016-06-01

    A method is proposed for determining the optimal temperature and magnetic field strength used to condition a sodium vapor cell for use in a sodium Double-Edge Magneto-Optic Filter (Na-DEMOF). The desirable characteristics of these filters are first defined and then analyzed over a range of temperatures and magnetic field strengths, using an IDL Faraday filter simulation adapted for the Na-DEMOF. This simulation is then compared to real behavior of a Na-DEMOF constructed for use with the Chu Research Group's STAR Na Doppler resonance-fluorescence lidar for lower atmospheric observations.

  3. Optimization of primer specific filter metrics for the assessment of mitochondrial DNA sequence data

    PubMed Central

    CURTIS, PAMELA C.; THOMAS, JENNIFER L.; PHILLIPS, NICOLE R.; ROBY, RHONDA K.

    2011-01-01

    Filter metrics are used as a quick assessment of sequence trace files in order to sort data into different categories, i.e. High Quality, Review, and Low Quality, without human intervention. The filter metrics consist of two numerical parameters for sequence quality assessment: trace score (TS) and contiguous read length (CRL). Primer specific settings for the TS and CRL were established using a calibration dataset of 2817 traces and validated using a concordance dataset of 5617 traces. Prior to optimization, 57% of the traces required manual review before import into a sequence analysis program, whereas after optimization only 28% of the traces required manual review. After optimization of primer specific filter metrics for mitochondrial DNA sequence data, an overall reduction of review of trace files translates into increased throughput of data analysis and decreased time required for manual review. PMID:21171863
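The triage logic described above can be sketched as a simple threshold rule on the two metrics. The threshold values below are illustrative placeholders, not the primer-specific settings established in the study:

```python
def triage(ts, crl, ts_hi=35, crl_hi=400, ts_lo=20, crl_lo=100):
    """Sort a sequence trace into the three review categories by
    trace score (TS) and contiguous read length (CRL).

    Thresholds are hypothetical; the paper calibrates them per primer."""
    if ts >= ts_hi and crl >= crl_hi:
        return "High Quality"
    if ts < ts_lo or crl < crl_lo:
        return "Low Quality"
    return "Review"
```

Traces landing in "High Quality" can be imported without manual review, which is the throughput gain the paper reports.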

  4. Multiple Model Adaptive Two-Step Filter and Motion Tracking Sliding-Mode Guidance for Missiles with Time Lag in Acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Di; Zhang, Yong-An; Duan, Guang-Ren

    The two-step filter has been combined with a modified Sage-Husa time-varying measurement noise statistical estimator, which is able to estimate the covariance of measurement noise on line, to generate an adaptive two-step filter. In many practical applications such as the bearings-only guidance, some model parameters and the process noise covariance are also unknown a priori. Based on the adaptive two-step filter, we utilize multiple models in the first-step filtering as well as in the time update of the second-step filtering to handle the uncertainties of model parameters and process noise covariance. In each timestep of the multiple model filtering, probabilistic weights punishing the estimates of first-step state from different models, and their associated covariance matrices are acquired according to Bayes’ rule. The weighted sum of the estimates of first-step state and that of the associated covariance matrices are extracted as the ultimate estimate and covariance of the first-step state, and are used as measurement information for the measurement update of the second-step state. Thus there is still only one iteration process and no apparent enhancement of computation burden. A motion tracking sliding-mode guidance law is presented for missiles with non-negligible delays in actual acceleration. This guidance law guarantees guidance accuracy and is able to enhance observability in bearings-only tracking. In bearings-only cases, the multiple model adaptive two-step filter is applied to the motion tracking sliding-mode guidance law, supplying relative range, relative velocity, and target acceleration information. In simulation experiments satisfactory filtering and guidance results are obtained, even if the filter runs into unknown target maneuvers and unknown time-varying measurement noise covariance, and the guidance law has to deal with a large time lag in acceleration.

  5. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
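For orientation, here is a compact cuckoo-search skeleton in the general form (Lévy-flight moves plus abandonment of the worst nests). It minimizes a stand-in cost on a toy domain; in the paper the cost would be the WLS error between the designed and ideal FD-IIR responses. All parameter values here are generic defaults, not the paper's settings:

```python
import math, random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, iters=200, pa=0.25, lo=-1.0, hi=1.0):
    """Minimize cost over [lo, hi]^dim with a basic cuckoo search."""
    random.seed(1)  # reproducible demo run
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(iters):
        # Generate a cuckoo by a Levy flight relative to the current best.
        i = random.randrange(n_nests)
        new = [x + 0.01 * levy_step() * (x - b) for x, b in zip(nests[i], best)]
        if cost(new) < cost(nests[i]):
            nests[i] = new
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        nests.sort(key=cost)
        for j in range(int((1 - pa) * n_nests), n_nests):
            nests[j] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=cost)
    return best
```

Swapping the toy cost for a frequency-response error function is what specializes this skeleton to filter design.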

  6. Two-stage hybrid optimization of fiber Bragg gratings for design of linear phase filters.

    PubMed

    Zheng, Rui Tao; Ngo, Nam Quoc; Le Binh, Nguyen; Tjin, Swee Chuan

    2004-12-01

We present a new hybrid optimization method for the synthesis of fiber Bragg gratings (FBGs) with complex characteristics. The hybrid optimization method is a two-tier search that employs a global optimization algorithm [i.e., the tabu search (TS) algorithm] and a local optimization method (i.e., the quasi-Newton method). First the TS global optimization algorithm is used to find a "promising" FBG structure that has a spectral response as close as possible to the targeted spectral response. Then the quasi-Newton local optimization method is applied to further optimize the FBG structure obtained from the TS algorithm to arrive at a targeted spectral response. A dynamic mechanism for weighting of different requirements of the spectral response is employed to enhance the optimization efficiency. To demonstrate the effectiveness of the method, the synthesis of three linear-phase optical filters based on FBGs with different grating lengths is described. PMID:15603077

  7. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems, and can be a useful tool in earthquake engineering.

  8. Optimal filter parameters for low SNR seismograms as a function of station and event location

    NASA Astrophysics Data System (ADS)

    Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.

    1999-06-01

Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few usable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated over a pre-event noise window and a signal window. The band-pass signals with high SNR are used to indicate the cutoff limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.

  9. A three-step test of phosphate sorption efficiency of potential agricultural drainage filter materials.

    PubMed

    Lyngsie, G; Borggaard, O K; Hansen, H C B

    2014-03-15

Phosphorus (P) eutrophication of lakes and streams, coming from drained farmlands, is a serious problem in areas with intensive agriculture. Installation of P-sorbing filters at drain outlets may be a solution. Efficient sorbents to be used for such filters must possess high P bonding affinity to retain ortho-phosphate (Pi) at low concentrations. In addition, high P sorption capacity, fast bonding, and low desorption are necessary. In this study five potential filter materials (Filtralite-P(®), limestone, calcinated diatomaceous earth, shell-sand and iron-oxide based CFH) in four particle size intervals were investigated under field-relevant P concentrations (0-161 μM) and retention times of 0-24 min. Of the five materials examined, the results from P sorption and desorption studies clearly demonstrate that the iron-based CFH is superior as a filter material compared to calcium-based materials when tested against criteria for sorption affinity, capacity and stability. The finest CFH and Filtralite-P(®) fractions (0.05-0.5 mm) were best, with P retention of ≥90% of Pi from an initial concentration of 161 μM, corresponding to 14.5 mmol/kg sorbed within 24 min. They were further capable of retaining ≥90% of Pi from an initially 16 μM solution within 1½ min. However, only the finest CFH fraction was also able to retain ≥90% of the sorbed Pi from the 16 μM solution across four desorption sequences with 6 mM KNO3. Among the materials investigated, the finest CFH fraction is therefore the only suitable filter material when very fast and strong bonding of high Pi concentrations is needed, e.g. in drains under P-rich soils during extreme weather conditions. PMID:24275107

  10. Improved design and optimization of subsurface flow constructed wetlands and sand filters

    NASA Astrophysics Data System (ADS)

    Brovelli, A.; Carranza-Díaz, O.; Rossi, L.; Barry, D. A.

    2010-05-01

Subsurface flow constructed wetlands and sand filters are engineered systems capable of eliminating a wide range of pollutants from wastewater. These devices are easy to operate, flexible and have low maintenance costs. For these reasons, they are particularly suitable for small settlements and isolated farms, and their use has substantially increased in the last 15 years. Furthermore, they are also increasingly used as a tertiary (polishing) step in traditional treatment plants. Recent work observed, however, that research is still necessary to better understand the biogeochemical processes occurring in the porous substrate, their mutual interactions and feedbacks, and ultimately to identify the optimal conditions to degrade or remove from the wastewater both traditional and anthropogenic recalcitrant pollutants, such as hydrocarbons, pharmaceuticals, and personal care products. Optimal pollutant elimination is achieved if the contact time between microbial biomass and the contaminated water is sufficiently long. The contact time depends on the hydraulic residence time distribution (HRTD) and is controlled by the hydrodynamic properties of the system. Previous reports noted that poor hydrodynamic behaviour is frequent, with water flowing mainly through preferential paths, resulting in a broad HRTD. In such systems the flow rate must be decreased to allow a sufficient proportion of the wastewater to experience the minimum residence time. The pollutant removal efficiency can therefore be significantly reduced, potentially leading to the failure of the system. The aim of this work was to analyse the effect of the heterogeneous distribution of the hydraulic properties of the porous substrate on the HRTD and treatment efficiency, and to develop an improved design methodology to reduce the risk of system failure and to optimize existing systems showing poor hydrodynamics.
Numerical modelling was used to evaluate the effect of substrate heterogeneity on the breakthrough curves of

  11. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

Telban and Cardullo have developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.

  12. Optimal matched filter design for ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2016-02-01

Coarse grain materials are widely used in a variety of key industrial sectors like energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructures, which is time-invariant and demonstrates similar spectral characteristics as flaw signals. As a result, the reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters in the filter response with an objective to maximise the output signal-to-noise ratio (SNR). Experiments with a 128-element 5 MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB if the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real-time with low extra computational cost.
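The matched-filtering step itself is just a cross-correlation of each A-scan with a template. A minimal sketch of that step (the template here is given; in the paper its parameters are what PSO tunes to maximize output SNR, and all names are illustrative):

```python
def matched_filter(signal, template):
    """Cross-correlate signal with template and locate the peak lag.

    Returns (correlation output, index of the peak)."""
    n, m = len(signal), len(template)
    out = [sum(signal[i + j] * template[j] for j in range(m))
           for i in range(n - m + 1)]
    peak = max(range(len(out)), key=lambda i: out[i])
    return out, peak
```

A defect echo resembling the template produces a sharp correlation peak at its arrival index, while uncorrelated grain noise is suppressed.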

  13. Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin

    2012-06-01

This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. Comparisons of Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product are provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results of a genetic-algorithm-based optimization of pipeline registers and non-output fundamental coefficients are shown. FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.

  14. Preparation and optimization of the laser thin film filter

    NASA Astrophysics Data System (ADS)

    Su, Jun-hong; Wang, Wei; Xu, Jun-qi; Cheng, Yao-jin; Wang, Tao

    2014-08-01

A dual-band thin film device for a laser-induced damage threshold test system is presented in this paper, enabling the laser-induced damage threshold tester to operate in the 532 nm and 1064 nm bands. Using TFC simulation software, a film system with high reflection, high transmittance and resistance to laser damage is designed and optimized. The film is deposited by thermal evaporation, the optical properties of the coating and its laser-induced damage performance are tested, and the reflectance, transmittance and damage threshold are measured. The results show that the measured parameters, reflectance R >= 98%@532nm, transmittance T >= 98%@1064nm, and laser-induced damage threshold LIDT >= 4.5 J/cm2, meet the design requirements, which lays the foundation for a multifunctional laser-induced damage threshold tester.

  15. Performance optimization of total momentum filtering double-resonance energy selective electron heat pump

    NASA Astrophysics Data System (ADS)

    Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui

    2016-04-01

A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas its coefficient of performance (COP) is lower than that of an ESE heat pump with a single-resonance filter. The performance characteristics of the ESE heat pumps in the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of the ESE heat pumps considering a conventional filtering mechanism are vastly different from those of a device with total momentum filtering, which is induced by extra electron momentum in addition to the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.

  16. An optimal target-filter system for electron beam generated x-ray spectra

    SciTech Connect

    Hsu, Hsiao-Hua; Vasilik, D.G.; Chen, J.

    1994-04-01

    An electron beam generated x-ray spectrum consists of characteristic x rays of the target and continuous bremsstrahlung. The percentage of characteristic x rays over the entire energy spectrum depends on the beam energy and the filter thickness. To determine the optimal electron beam energy and filter thickness, one can either conduct many experimental measurements, or perform a series of Monte Carlo simulations. Monte Carlo simulations are shown to be an efficient tool for determining the optimal target-filter system for electron beam generated x-ray spectra. Three of the most commonly used low-energy x-ray metal targets (Cu, Zn and Mo) are chosen for this study to illustrate the power of Monte Carlo simulations.

  17. Plate/shell topological optimization subjected to linear buckling constraints by adopting composite exponential filtering function

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang

    2016-08-01

    In this paper, a model of topology optimization with linear buckling constraints is established based on an independent and continuous mapping method to minimize the plate/shell structure weight. A composite exponential function (CEF) is selected as the filtering function for the element weight, the element stiffness matrix and the element geometric stiffness matrix; it maps the design variables and implements their transformation from "discrete" to "continuous" and back to "discrete". The buckling constraints are approximated as explicit formulations based on the Taylor expansion and the filtering function. The optimization model is transformed into a dual program and solved by the dual sequence quadratic programming algorithm. Finally, three numerical examples with the power function and the CEF as filter functions are analyzed and discussed to demonstrate the feasibility and efficiency of the proposed method.
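    A minimal sketch of such a filter function, assuming a common composite-exponential form and illustrative γ parameters (the paper's calibrated values are not given in the abstract):

    ```python
    import math

    def cef(t, gamma=5.0):
        """Composite exponential filter function mapping a continuous
        topology variable t in [0, 1] to a physical scaling factor.
        The exact functional form here is an assumption for illustration;
        it satisfies cef(0) = 0 and cef(1) = 1 and is monotone."""
        return (math.exp(t / gamma) - 1.0) / (math.exp(1.0 / gamma) - 1.0)

    def filtered_properties(t, gamma_w=5.0, gamma_k=3.0, gamma_g=3.0):
        """Separate filter exponents for element weight, stiffness and
        geometric stiffness, as in ICM-style formulations."""
        return cef(t, gamma_w), cef(t, gamma_k), cef(t, gamma_g)
    ```

    Driving t toward 0 or 1 during optimization is what realizes the "discrete → continuous → discrete" transformation of the design variables.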

  18. Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings

    SciTech Connect

    Toroker, Zeev; Horowitz, Moshe

    2008-03-15

    We present an optimized split-step method for solving nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time duration without significantly affecting the result accuracy. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
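    The split-step idea can be sketched on the scalar nonlinear Schrödinger equation (a stand-in for the paper's nonlinear coupled-mode equations; the sign conventions and step sizes here are illustrative):

    ```python
    import numpy as np

    def split_step(u, dz, dt, beta2=-1.0, gamma=1.0):
        """One symmetric split-step Fourier step: linear half step in the
        frequency domain, nonlinear full step in the time domain, linear
        half step again.  Both sub-operators are pure phase factors, so
        the step conserves pulse power exactly (up to round-off)."""
        w = 2 * np.pi * np.fft.fftfreq(u.size, d=dt)
        half_linear = np.exp(-0.5j * beta2 * w**2 * (dz / 2))
        u = np.fft.ifft(half_linear * np.fft.fft(u))     # linear half step
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)   # nonlinear full step
        u = np.fft.ifft(half_linear * np.fft.fft(u))     # linear half step
        return u
    ```

    The optimization described in the abstract amounts to choosing the spatial step dz and the temporal resolution dt independently so that accuracy is preserved at minimum run time.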

  19. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filters are examined for their effectiveness as gain matrices, using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
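    The distinction between the KB filter and OI can be seen in a scalar sketch: the Kalman gain K is recomputed from the evolving forecast error covariance, whereas OI freezes a prescribed gain (all noise parameters below are illustrative):

    ```python
    def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
        """One predict/update cycle of a scalar Kalman filter.  Optimal
        interpolation corresponds to replacing the recomputed gain K with
        a fixed, pre-specified value."""
        x_f = F * x                        # forecast (prediction)
        P_f = F * P * F + Q                # forecast error covariance
        K = P_f * H / (H * P_f * H + R)    # optimal (Kalman) gain
        x_a = x_f + K * (z - H * x_f)      # analysis update
        P_a = (1 - K * H) * P_f            # analysis error covariance
        return x_a, P_a, K
    ```

    The analysis covariance is always smaller than the forecast covariance, which is what makes the optimal gain "optimal" in the minimum-variance sense.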

  20. Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design

    NASA Astrophysics Data System (ADS)

    Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa

    2013-09-01

    This article presents a control strategy for a three-phase grid-interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics injected into the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. The proposed control strategy also limits the startup transients; in addition, the optimal LCL filter exhibits lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the Total Harmonic Distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment, and the results are analyzed.
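    A textbook LCL starting-point calculation (not the article's actual design procedure; the ripple and reactive-power limits and all ratings below are illustrative assumptions) shows how the components and the resonance check interact:

    ```python
    import math

    def design_lcl(P=5e3, V_grid=230.0, f_grid=50.0, f_sw=10e3, V_dc=700.0,
                   ripple=0.1, x_c=0.05):
        """Common LCL sizing rules: inverter-side L1 from a current-ripple
        limit, C from a reactive-power limit, grid-side L2 as a fraction
        of L1, followed by the standard resonance-frequency check."""
        I_rated = P / (3 * V_grid)
        L1 = V_dc / (16 * f_sw * ripple * I_rated * math.sqrt(2))
        Zb = 3 * V_grid**2 / P                    # per-phase base impedance
        C = x_c / (2 * math.pi * f_grid * Zb)     # <= 5 % reactive power
        L2 = 0.3 * L1                             # assumed grid-side ratio
        f_res = 1 / (2 * math.pi * math.sqrt((L1 + L2) / (L1 * L2 * C)))
        return L1, C, L2, f_res
    ```

    The usual constraint is 10·f_grid < f_res < f_sw/2, which keeps the resonance away from both the control bandwidth and the switching harmonics; the loss/THD trade-off in the article then refines these starting values.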

  1. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
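    The continuation idea for the discontinuous thrust profile can be sketched with a smoothed throttle (the tanh smoothing below is one common choice; the paper's exact smooth representation may differ):

    ```python
    import math

    def smoothed_throttle(S, eps):
        """Smooth stand-in for the discontinuous bang-bang throttle
        u = 1 if S < 0 else 0, where S is the primer-vector switching
        function.  As the continuation parameter eps -> 0 the smooth
        function recovers the discontinuous optimal control."""
        return 0.5 * (1.0 - math.tanh(S / eps))
    ```

    Solving the boundary value problem for a large eps and then shrinking it step by step is what lets the method converge without a priori knowledge of the thrust-arc structure.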

  2. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    SciTech Connect

    Singer, M A; Wang, S L; Diachin, D P

    2009-12-03

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  3. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
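    The kind of saving the abstract describes can be sketched with a block-wise covariance propagation that operates only on the blocks of a block-diagonal transition matrix (a simplified stand-in for the paper's offline-derived block-matrix procedure):

    ```python
    import numpy as np

    def predict_cov_blockwise(P, F_blocks):
        """Covariance propagation P <- F P F^T for a block-diagonal F,
        done block-by-block so the all-zero off-diagonal parts of F are
        never multiplied, illustrating how structural sparsity removes
        invalid operations from the Kalman prediction step."""
        sizes = [b.shape[0] for b in F_blocks]
        idx = np.cumsum([0] + sizes)
        out = np.empty_like(P)
        for i, Fi in enumerate(F_blocks):
            for j, Fj in enumerate(F_blocks):
                r, c = slice(idx[i], idx[i + 1]), slice(idx[j], idx[j + 1])
                out[r, c] = Fi @ P[r, c] @ Fj.T
        return out
    ```

    In the paper this bookkeeping is derived offline once, so the online filter executes only the non-trivial block products; the result is numerically identical to the dense computation.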

  4. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  5. An optimal linear filter for the reduction of noise superimposed to the EEG signal.

    PubMed

    Bartoli, F; Cerutti, S

    1983-10-01

    In the present paper a procedure for the reduction of noise superimposed on EEG tracings is described, which makes use of linear digital filtering and identification methods. In particular, an optimal filter (a Kalman filter) has been developed which is intended to capture the disturbances of electromyographic noise on the basis of an a priori model that treats the noise generating mechanism as a series of impulses whose temporal occurrence follows a Poisson distribution. The experimental results refer to EEG tracings recorded from 20 patients in normal resting conditions: the procedure consists of a preprocessing phase (which also uses a low-pass FIR digital filter), followed by the implementation of the identification and the Kalman filter. The performance of the filters is satisfactory also from the clinical standpoint: a marked reduction of noise is obtained without distorting the useful information contained in the signal. Furthermore, with the introduced method the EEG signal generating mechanism is parametrized as AR/ARMA models, thus obtaining an extremely sensitive feature extraction with interesting and not yet completely studied pathophysiological meanings. The procedure may find general application in noise reduction and the enhancement of information contained in the wide set of biological signals. PMID:6632838

  6. Optimal design of 2D digital filters based on neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-hua; He, Yi-gang; Zheng, Zhe-zhao; Zhang, Xu-hong

    2005-02-01

    Two-dimensional (2-D) digital filters are widely used in image processing and other 2-D digital signal processing fields, but designing 2-D filters is much more difficult than designing one-dimensional (1-D) ones. In this paper, a new design approach for linear-phase 2-D digital filters is described, which is based on a new neural networks algorithm (NNA). By using the symmetry of the given 2-D magnitude specification, a compact expression for the magnitude response of a linear-phase 2-D finite impulse response (FIR) filter is derived. Consequently, the problem of optimally designing linear-phase 2-D FIR digital filters is reduced to approximating the desired 2-D magnitude response with this compact expression. To solve the problem, a new NNA is presented based on minimizing the mean-squared error, and a convergence theorem is presented and proved to ensure that the designed 2-D filter is stable. Three design examples are given to illustrate the effectiveness of the NNA-based design approach.
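    A minimal sketch of the approach: fit a quadrantally symmetric magnitude expression by iterative minimization of the mean-squared error (plain gradient descent here stands in for the paper's NNA; the basis size and learning rate are illustrative):

    ```python
    import numpy as np

    def design_2d_fir(desired, w1, w2, taps=5, iters=500, lr=0.05):
        """Fit the magnitude response of a linear-phase 2-D FIR filter with
        quadrantal symmetry, H(w1, w2) = sum a[m, n] cos(m w1) cos(n w2),
        by gradient descent on the mean-squared error over a frequency
        grid.  Returns the coefficients and the final MSE."""
        a = np.zeros((taps, taps))
        C1 = np.cos(np.outer(w1, np.arange(taps)))   # cosine basis in w1
        C2 = np.cos(np.outer(w2, np.arange(taps)))   # cosine basis in w2
        for _ in range(iters):
            err = C1 @ a @ C2.T - desired
            a -= lr * (C1.T @ err @ C2) / desired.size   # MSE gradient step
        return a, np.mean((C1 @ a @ C2.T - desired) ** 2)
    ```

    Because the magnitude is a linear function of the coefficients, the objective is quadratic and the descent converges for a small enough step; the paper's convergence theorem plays the analogous role for its NNA.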

  7. Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization.

    PubMed

    Mirzaalian, Hengameh; Lee, Tim K; Hamarneh, Ghassan

    2014-12-01

    Hair occlusion is one of the main challenges facing automatic lesion segmentation and feature extraction for skin cancer applications. We propose a novel method for simultaneously enhancing both light and dark hairs with variable widths, from dermoscopic images, without the prior knowledge of the hair color. We measure hair tubularness using a quaternion color curvature filter. We extract optimal hair features (tubularness, scale, and orientation) using Markov random field theory and multilabel optimization. We also develop a novel dual-channel matched filter to enhance hair pixels in the dermoscopic images while suppressing irrelevant skin pixels. We evaluate the hair enhancement capabilities of our method on hair-occluded images generated via our new hair simulation algorithm. Since hair enhancement is an intermediate step in a computer-aided diagnosis system for analyzing dermoscopic images, we validate our method and compare it to other methods by studying its effect on: 1) hair segmentation accuracy; 2) image inpainting quality; and 3) image classification accuracy. The validation results on 40 real clinical dermoscopic images and 94 synthetic data demonstrate that our approach outperforms competing hair enhancement methods. PMID:25312927

  8. Global localization of 3D anatomical structures by pre-filtered Hough forests and discrete optimization.

    PubMed

    Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg

    2013-12-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450

  9. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method.

    PubMed

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-12-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8-2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series exhibit that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75-150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  10. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method

    PubMed Central

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-01-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8–2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series exhibit that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75–150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  11. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
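    The thread-decomposition view, in which each output sample is an independent finite convolution, can be sketched for a decimating FIR (a software sketch of the concept, not the SMAP FPGA implementation):

    ```python
    import numpy as np

    def decimating_fir_threads(x, h, M):
        """Decimate-by-M FIR computed as independent per-output 'threads':
        each output sample is its own finite convolution over the input
        window that feeds it, so only the needed outputs are ever
        computed, mirroring the goal of polyphase decomposition."""
        n_out = (len(x) - len(h)) // M + 1
        y = np.empty(n_out)
        for k in range(n_out):                 # one thread per output sample
            start = k * M
            y[k] = np.dot(x[start:start + len(h)], h[::-1])
        return y
    ```

    Because the threads share no state, they can be scheduled concurrently on FPGA resources; the allocation of taps per thread is what the paper's strategy optimizes.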

  12. Two-step fringe pattern analysis with a Gabor filter bank

    NASA Astrophysics Data System (ADS)

    Rivera, Mariano; Dalmau, Oscar; Gonzalez, Adonai; Hernandez-Lopez, Francisco

    2016-10-01

    We propose a two-shot fringe analysis method for Fringe Patterns (FPs) with random phase-shift and changes in illumination components. These conditions reduce the acquisition time and simplify the experimental setup. Our method builds upon a Gabor Filter (GF) bank that eliminates noise and estimates the phase from the FPs. The GF bank allows us to obtain two phase maps with a sign ambiguity between them. Due to the fact that the random sign map is common to both computed phases, we can correct the sign ambiguity. We estimate a local phase-shift from the absolute wrapped residual between the estimated phases. Next, we robustly compute the global phase-shift. In order to unwrap the phase, we propose a robust procedure that interpolates unreliable phase regions obtained after applying the GF bank. We present numerical experiments that demonstrate the performance of our method.
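    The phase-from-Gabor step can be sketched in 1-D: convolving a fringe signal with a complex Gabor kernel tuned near the fringe frequency yields the wrapped local phase (the paper operates on 2-D FPs with a full filter bank; the parameters here are illustrative):

    ```python
    import numpy as np

    def gabor_phase(signal, f0, sigma, dt=1.0):
        """Estimate the local phase of a fringe signal by convolving with a
        complex Gabor kernel (Gaussian window times complex exponential)
        tuned to frequency f0, then taking the argument of the response."""
        t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
        kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)
        response = np.convolve(signal, kernel, mode='same')
        return np.angle(response)
    ```

    Applying a bank of such kernels over scales and orientations, as the paper does in 2-D, also suppresses noise, since off-band components get negligible response.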

  13. A two-step crushed lava rock filter unit for grey water treatment at household level in an urban slum.

    PubMed

    Katukiza, A Y; Ronteltap, M; Niwagaba, C B; Kansiime, F; Lens, P N L

    2014-01-15

    Decentralised grey water treatment in urban slums using low-cost and robust technologies offers opportunities to minimise public health risks and to reduce environmental pollution caused by the highly polluted grey water, i.e. with COD and N concentrations of 3000-6000 mg L⁻¹ and 30-40 mg L⁻¹, respectively. However, there has been very limited action research to reduce the pollution load from uncontrolled grey water discharge by households in urban slums. This study was therefore carried out to investigate the potential of a two-step filtration process to reduce the grey water pollution load in an urban slum using a crushed lava rock filter, to determine the main filter design and operation parameters and the effect of intermittent flow on the grey water effluent quality. A two-step crushed lava rock filter unit was designed and implemented for use by a household in the Bwaise III slum in Kampala city (Uganda). It was monitored at a varying hydraulic loading rate (HLR) of 0.5-1.1 m d⁻¹ as well as at a constant HLR of 0.39 m d⁻¹. The removal efficiencies of COD, TP and TKN were, respectively, 85.9%, 58% and 65.5% under the varying HLR and 90.5%, 59.5% and 69% when operating at the constant HLR regime. In addition, the log removal of Escherichia coli, Salmonella spp. and total coliforms was, respectively, 3.8, 3.2 and 3.9 under the varying HLR and 3.9, 3.5 and 3.9 at the constant HLR. The results show that the use of a two-step filtration process as well as a lower constant HLR increased the pollutant removal efficiencies. Further research is needed to investigate the feasibility of adding a tertiary treatment step to increase the nutrients and microorganisms removal from grey water. PMID:24388927
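    The two performance metrics quoted above are computed as follows (a trivial helper; the concentrations in the test are illustrative, not the study's data):

    ```python
    import math

    def removal_metrics(c_in, c_out):
        """Percent removal (used for COD, TP, TKN) and log10 removal
        (used for the microbial counts) across a treatment step."""
        percent = 100.0 * (c_in - c_out) / c_in
        log_removal = math.log10(c_in / c_out)
        return percent, log_removal
    ```

    For example, a 3.9-log removal of total coliforms corresponds to the effluent count being about 10⁻³·⁹ (roughly 1/8000) of the influent count.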

  14. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy.

    PubMed

    Cai, Jiandong; Wang, Michael Yu; Zhang, Li

    2015-12-01

    In multifrequency atomic force microscopy (AFM), a probe whose higher resonance frequencies fall at integer harmonics of the fundamental offers a remarkable improvement of detection sensitivity at those harmonic components. The selection criterion for the harmonic order is the sensitivity of its amplitude to material properties, e.g., elasticity. Previous harmonic probe designs have been unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a stepped cross section, in which the top and bottom steps have variable width while the middle step is kept constant. Higher order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using the focused ion beam milling technique, and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement. PMID:26724066

  15. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy

    SciTech Connect

    Cai, Jiandong; Zhang, Li; Wang, Michael Yu

    2015-12-15

    In multifrequency atomic force microscopy (AFM), a probe whose higher resonance frequencies fall at integer harmonics of the fundamental offers a remarkable improvement of detection sensitivity at those harmonic components. The selection criterion for the harmonic order is the sensitivity of its amplitude to material properties, e.g., elasticity. Previous harmonic probe designs have been unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a stepped cross section, in which the top and bottom steps have variable width while the middle step is kept constant. Higher order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using the focused ion beam milling technique, and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement.

  16. [Optimization of one-step pelletization technology of Jiuwei Xifeng granules by response surface methodology].

    PubMed

    Wang, Xiu-hai; Yang, Xu-fang; Fan, Ye-wen; Zhang, Yan-jun; Xu, Zhong-kun; Yang, Lin-yong; Wang, Zhen-zhong; Xiao, Wei

    2014-12-01

    Using the qualified rate of particles as the evaluation index, the impact factors of the one-step pelletization technology of Jiuwei Xifeng granules were screened from six factors by the Plackett-Burman experimental design, and the levels of the non-significant factors were identified. Following the Plackett-Burman screening, and choosing the qualified rate of particles and the angle of repose as evaluation indexes, three levels of the three significant factors were selected for a Box-Behnken central composite design to optimize the experiment. The best conditions were as follows: the fluid extract was sprayed at a frequency of 29 r·min⁻¹, the inlet air temperature was 90 °C, and the fan frequency was 34 Hz. Under the scheme optimized by response surface methodology, the average experimental results were similar to the predicted values, showing that response surface methodology can be used in the optimization of one-step pelletization for Chinese materia medica. PMID:25898578
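    The Box-Behnken design used in such studies can be generated in coded units; for three factors it consists of the twelve edge-midpoint runs plus replicated center points (the number of center replicates below is an assumption):

    ```python
    from itertools import combinations, product

    def box_behnken(k=3, center_runs=3):
        """Generate a Box-Behnken design in coded units (-1, 0, +1) for k
        factors: all +/-1 combinations over each pair of factors with the
        remaining factors at 0, plus replicated center points."""
        runs = []
        for pair in combinations(range(k), 2):
            for levels in product((-1, 1), repeat=2):
                row = [0] * k
                row[pair[0]], row[pair[1]] = levels
                runs.append(row)
        runs += [[0] * k] * center_runs
        return runs
    ```

    Each coded level is then mapped to a physical setting (e.g. -1/0/+1 spray frequency) before fitting the quadratic response surface.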

  17. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit

    NASA Astrophysics Data System (ADS)

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-01

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.

  18. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit.

    PubMed

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-29

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss. PMID:19516623

  19. Design and optimization of stepped austempered ductile iron using characterization techniques

    SciTech Connect

    Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.

    2013-09-15

    Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. The austenitization and conventional austempering time was selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and the high-carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering, in contrast to conventional austempering, in which evidence of carbides was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during the stepped treatment.

  20. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model of an AFM tip scanning a step structure was developed. The tip was assumed to have a hemispherical cone shape. The spectra of profiles simulated with tips of different radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548
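The dilation geometry described in this abstract can be sketched numerically. The minimal simulation below (the step height, grid spacing, and periodic boundary are illustrative assumptions, not the authors' parameters) scans an ideal step with hemispherical tips of different radii and compares a low-order FFT harmonic of the recorded profiles:

```python
import numpy as np

def scan_step(radius, height=20.0, n=1024, dx=1.0):
    """Morphological-dilation model of an AFM tip scanning an ideal step.

    The recorded height at x is max_u [surface(x + u) - tip(u)], with the
    hemispherical tip profile t(u) = R - sqrt(R^2 - u^2).
    """
    x = np.arange(n) * dx
    surface = np.where(x < n * dx / 2.0, 0.0, height)   # ideal step
    m = max(int(radius / dx), 1)
    u = np.arange(-m, m + 1) * dx
    tip = radius - np.sqrt(np.maximum(radius**2 - u**2, 0.0))
    scanned = np.full(n, -np.inf)
    for j, uj in enumerate(u):
        shift = int(round(uj / dx))
        scanned = np.maximum(scanned, np.roll(surface, -shift) - tip[j])
    return scanned - scanned.min()

# A blunter tip dilates the step, which changes the low-order FFT
# harmonic amplitudes -- the monotonic trend the radius estimate uses.
amps = {r: np.abs(np.fft.rfft(scan_step(r)))[1] for r in (5.0, 20.0, 50.0)}
```

The step height is preserved by the dilation; only the transition shape (and hence the harmonic content) changes with tip radius.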

  1. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  2. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  3. Digital restoration of indium-111 and iodine-123 SPECT images with optimized Metz filters

    SciTech Connect

    King, M.A.; Schwinger, R.B.; Penney, B.C.; Doherty, P.W.; Bianco, J.A.

    1986-08-01

    A number of radiopharmaceuticals of great current clinical interest for imaging are labeled with radionuclides that emit medium- to high-energy photons either as their primary radiation, or in low abundance in addition to their primary radiation. The imaging characteristics of these radionuclides result in gamma camera image quality that is inferior to that of ⁹⁹ᵐTc images. Thus, in this investigation ¹¹¹In and ¹²³I contaminated with approximately 4% ¹²⁴I were chosen to test the hypothesis that a dramatic improvement in planar and SPECT images may be obtainable with digital image restoration. The count-dependent Metz filter is shown to be able to deconvolve the rapid drop at low spatial frequencies in the imaging system modulation transfer function (MTF) resulting from the acceptance of septal penetration and scatter in the camera window. Use of the Metz filter was found to result in improved spatial resolution as measured by both the full width at half maximum and full width at tenth maximum for both planar and SPECT studies. Two-dimensional, prereconstruction filtering with optimized Metz filters was also determined to improve image contrast, while decreasing the noise level for SPECT studies. A dramatic improvement in image quality was observed with the clinical application of this filter to SPECT imaging.
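The Metz filter referred to here has a standard frequency-domain form, M(ν) = [1 − (1 − MTF(ν)²)^P] / MTF(ν), where the order P is chosen from the image count level. A minimal sketch, with a Gaussian MTF as an illustrative assumption:

```python
import numpy as np

def metz(mtf, power):
    """Metz filter M = (1 - (1 - MTF^2)^P) / MTF: approximates the
    inverse filter 1/MTF where the MTF is high and rolls off smoothly
    toward zero where it is low; P sets the crossover (chosen
    count-dependently in practice)."""
    return (1.0 - (1.0 - mtf**2) ** power) / mtf

freq = np.linspace(0.0, 0.5, 256)           # spatial frequency, cycles/pixel
mtf = np.exp(-((freq / 0.15) ** 2))         # illustrative Gaussian system MTF
restore_gain = metz(mtf, power=20)          # boosts mid, suppresses high freqs
```

At DC (MTF = 1) the gain is exactly 1; at mid frequencies it rises above 1 (restoration); where the MTF is small it falls below 1 (noise suppression), which is the behavior the abstract exploits for prereconstruction SPECT filtering.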

  4. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
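A BPOF is commonly obtained by binarizing the sign of the real part of the reference's conjugate spectrum; adding a spectral-magnitude threshold gives one simple member of the BPAF family this abstract spans. A hypothetical sketch (the thresholding rule is an illustrative choice, not the paper's analyzer-angle design procedure):

```python
import numpy as np

def bpaf(reference, amp_threshold=0.0):
    """Binary phase-and-amplitude filter for a reference image.

    Phase is binarized to {+1, -1} from the sign of the real part of the
    conjugate spectrum (a cosine-type BPOF); spectral samples whose
    magnitude falls below `amp_threshold` are blocked (binary amplitude).
    amp_threshold = 0 reduces to a plain BPOF.
    """
    F = np.conj(np.fft.fft2(reference))
    phase = np.where(F.real >= 0.0, 1.0, -1.0)
    passed = (np.abs(F) >= amp_threshold).astype(float)
    return phase * passed

def correlate(scene, filt):
    """Frequency-plane correlation of a scene with the encoded filter."""
    return np.fft.ifft2(np.fft.fft2(scene) * filt).real

rng = np.random.default_rng(1)
target = rng.normal(size=(32, 32))
c = correlate(target, bpaf(target))   # in-class correlation plane
```

For an in-class input the correlation peak sits at zero shift; metrics such as PCE and SNR would then be computed from this correlation plane as the threshold (standing in for the analyzer angle) is varied.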

  5. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.

  6. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy.

    PubMed

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration. PMID:25950644

  7. Optimal color filter array design: quantitative conditions and an efficient search procedure

    NASA Astrophysics Data System (ADS)

    Lu, Yue M.; Vetterli, Martin

    2009-01-01

    Most digital cameras employ a spatial subsampling process, implemented as a color filter array (CFA), to capture color images. The choice of CFA patterns has a great impact on the performance of subsequent reconstruction (demosaicking) algorithms. In this work, we propose a quantitative theory for optimal CFA design. We view the CFA sampling process as an encoding (low-dimensional approximation) operation and, correspondingly, demosaicking as the best decoding (reconstruction) operation. Finding the optimal CFA is thus equivalent to finding the optimal approximation scheme for the original signals with minimum information loss. We present several quantitative conditions for optimal CFA design, and propose an efficient computational procedure to search for the best CFAs that satisfy these conditions. Numerical experiments show that the optimal CFA patterns designed from the proposed procedure can effectively retain the information of the original full-color images. In particular, with the designed CFA patterns, high quality demosaicking can be achieved by using simple and efficient linear filtering operations in the polyphase domain. The visual qualities of the reconstructed images are competitive to those obtained by the state-of-the-art adaptive demosaicking algorithms based on the Bayer pattern.
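For the fixed Bayer pattern used as the baseline above, "demosaicking by simple linear filtering" can be made concrete. The kernels below are the textbook bilinear choice, not the paper's optimized polyphase filters:

```python
import numpy as np

def conv2_wrap(img, k):
    """2-D filtering with periodic boundary (kernels here are symmetric,
    so correlation and convolution coincide)."""
    out = np.zeros_like(img)
    kh, kw = k.shape
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * np.roll(np.roll(img, kh // 2 - i, 0), kw // 2 - j, 1)
    return out

def bayer_masks(h, w):
    """RGGB Bayer sampling masks."""
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = 1
    g[0::2, 1::2] = 1; g[1::2, 0::2] = 1
    b[1::2, 1::2] = 1
    return r, g, b

def demosaic_bilinear(raw, masks):
    """Reconstruct the three color planes by linear filtering alone."""
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    r, g, b = masks
    return (conv2_wrap(raw * r, k_rb),
            conv2_wrap(raw * g, k_g),
            conv2_wrap(raw * b, k_rb))
```

A quick sanity check: a constant gray capture is reconstructed exactly, since the kernels interpolate the missing samples from their sampled neighbors.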

  8. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters.

    PubMed

    Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T

    2016-06-01

    Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters. PMID:27247483

  9. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies from the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root-zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome the problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost-function, and the coefficient of determination (R2) and root-mean-square error (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that
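The EnKF analysis step underlying such a scheme can be written compactly for parameter estimation. This is a generic stochastic-EnKF sketch; the toy linear model and all numbers are illustrative assumptions, not the BEPS setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(params, predictions, obs, obs_var):
    """One stochastic-EnKF analysis step for parameter estimation.

    params      : (n_ens, n_par) parameter ensemble
    predictions : (n_ens, n_obs) model output for each member
    obs         : (n_obs,) observations (e.g. daily GPP and LE)
    obs_var     : scalar observation-error variance
    """
    pa = params - params.mean(axis=0)                  # parameter anomalies
    ya = predictions - predictions.mean(axis=0)        # output anomalies
    n_ens = params.shape[0]
    c_py = pa.T @ ya / (n_ens - 1)                     # param/output covariance
    c_yy = ya.T @ ya / (n_ens - 1) + obs_var * np.eye(obs.size)
    gain = c_py @ np.linalg.inv(c_yy)                  # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (n_ens, obs.size))
    return params + (perturbed - predictions) @ gain.T

# Toy check: a direct, noisy observation of a Vcmax-like scalar pulls
# the ensemble from its prior mean (40) toward the observed value (60).
prior = rng.normal(40.0, 10.0, size=(200, 1))
post = enkf_update(prior, prior.copy(), np.array([60.0]), obs_var=4.0)
```

The posterior mean lands between the prior mean and the observation, weighted by the ensemble and observation-error variances; applied daily versus over longer windows, this is the mechanism behind the three-step scheme above.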

  10. Implicit application of polynomial filters in a k-step Arnoldi method

    NASA Technical Reports Server (NTRS)

    Sorensen, D. C.

    1990-01-01

    The Arnoldi process is a well known technique for approximating a few eigenvalues and corresponding eigenvectors of a general square matrix. Numerical difficulties such as loss of orthogonality and assessment of the numerical quality of the approximations as well as a potential for unbounded growth in storage have limited the applicability of the method. These issues are addressed by fixing the number of steps in the Arnoldi process at a prescribed value k and then treating the residual vector as a function of the initial Arnoldi vector. This starting vector is then updated through an iterative scheme that is designed to force convergence of the residual to zero. The iterative scheme is shown to be a truncation of the standard implicitly shifted QR-iteration for dense problems and it avoids the need to explicitly restart the Arnoldi sequence. The main emphasis of this paper is on the derivation and analysis of this scheme. However, there are obvious ways to exploit parallelism through the matrix-vector operations that comprise the majority of the work in the algorithm. Preliminary computational results are given for a few problems on some parallel and vector computers.
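The fixed-k factorization at the heart of the method is A V_k = V_k H_k + f_k e_k^T, with the residual f_k treated as a function of the starting vector. A minimal dense sketch of that factorization (plain Gram-Schmidt for brevity; the implicitly shifted QR restart itself is omitted):

```python
import numpy as np

def arnoldi(A, v0, k):
    """k-step Arnoldi factorization  A V = V H + f e_k^T.

    V has orthonormal columns spanning the Krylov subspace, H is upper
    Hessenberg, and the residual f (a function of the starting vector
    v0) is what the implicitly restarted scheme drives to zero.
    """
    n = A.shape[0]
    V = np.zeros((n, k))
    H = np.zeros((k, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        h = V[:, : j + 1].T @ w           # Gram-Schmidt coefficients
        w = w - V[:, : j + 1] @ h         # orthogonalize against V
        H[: j + 1, j] = h
        if j + 1 < k:
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
    return V, H, w                        # w is the residual f

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
V, H, f = arnoldi(A, rng.normal(size=50), 10)
# Eigenvalues of H (Ritz values) approximate eigenvalues of A.
```

The factorization identity holds exactly by construction, which is what makes updating the starting vector (rather than growing the basis) possible.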

  11. An optimized item-based collaborative filtering recommendation algorithm based on item genre prediction

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jia

    2009-07-01

    With the fast development of the Internet, many systems have emerged in e-commerce applications to support product recommendation. Collaborative filtering is one of the most promising techniques in recommender systems, providing personalized recommendations to users based on their previously expressed preferences in the form of ratings and those of other similar users. In practice, as user and item scales grow, user-item ratings become extremely sparse, and recommender systems utilizing traditional collaborative filtering face serious challenges. To address the issue, this paper presents an approach that computes item genre similarity by mapping each item to a corresponding descriptive genre and computing the similarity between genres as the item similarity, then makes basic predictions according to those similarities to lower the sparsity of the user-item ratings. After that, item-based collaborative filtering steps are taken to generate predictions. Compared with previous methods, the presented collaborative filtering employing item genre similarity can alleviate the sparsity issue in recommender systems and can improve the accuracy of recommendation.
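The fallback from rating similarity to genre similarity can be sketched as follows. The cosine similarity and weighted-average prediction are generic item-based CF choices, illustrative assumptions rather than the paper's exact formulas:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, 0 if either vector is empty/zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return a @ b / (na * nb) if na and nb else 0.0

def predict(R, genres, user, item):
    """Item-based CF prediction; genre similarity backs up rating
    similarity when two items share no raters (the sparsity fallback).

    R      : (n_users, n_items) rating matrix, 0 = unrated
    genres : (n_items, n_genres) binary genre descriptors
    """
    num = den = 0.0
    for j in range(R.shape[1]):
        if j == item or R[user, j] == 0:
            continue
        co = (R[:, item] > 0) & (R[:, j] > 0)   # users who rated both
        sim = cosine(R[co, item], R[co, j]) if co.any() \
              else cosine(genres[item], genres[j])
        if sim > 0:
            num += sim * R[user, j]
            den += sim
    return num / den if den else 0.0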

  12. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. 2-Channel QMF is also designed with particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in frequency domain as sum of L2 norm of error in passband, stopband and transition band at quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best result in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
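The frequency-domain metrics can be made concrete: with H the prototype lowpass and the mirror highpass G(ω) = H(π − ω), band errors and reconstruction flatness are evaluated on a frequency grid. Definitions of the individual error terms vary between papers; the ones below are one common convention, not necessarily this article's:

```python
import numpy as np

def qmf_metrics(h, wp=0.4 * np.pi, ws=0.6 * np.pi, n=2048):
    """Band errors and reconstruction flatness for a QMF prototype h.

    phi_p : L2 passband error of |H| from its DC value
    phi_s : L2 stopband energy of |H|
    pre   : peak deviation of |H(w)|^2 + |H(pi - w)|^2 from flatness
    """
    w = np.linspace(0.0, np.pi, n)
    k = np.arange(len(h))
    H = np.abs(np.exp(-1j * np.outer(w, k)) @ h)   # |H(e^{jw})|
    T = H**2 + H[::-1] ** 2                        # distortion function
    dw = w[1] - w[0]
    phi_p = np.sum((H[w <= wp] - H[0]) ** 2) * dw
    phi_s = np.sum(H[w >= ws] ** 2) * dw
    pre = T.max() - T.min()
    return phi_p, phi_s, pre

# The 2-tap Haar prototype reconstructs perfectly: pre is ~0.
haar = np.array([1.0, 1.0]) / np.sqrt(2.0)
```

An optimizer (gradient-based or nature-inspired) would minimize a weighted sum of such terms over the prototype coefficients h.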

  13. Optimization of single-step tapering amplitude and energy detuning for high-gain FELs

    NASA Astrophysics Data System (ADS)

    Li, He-Ting; Jia, Qi-Ka

    2015-01-01

    We put forward a method to optimize the single-step tapering amplitude of undulator strength and the initial energy detuning of the electron beam to maximize the saturation power of high gain free-electron lasers (FELs), based on the physics of the longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the Linac Coherent Light Source and the TESLA Test Facility.

  14. Combining segment generation with direct step-and-shoot optimization in intensity-modulated radiation therapy

    SciTech Connect

    Carlsson, Fredrik

    2008-09-15

    A method for generating a sequence of intensity-modulated radiation therapy step-and-shoot plans with increasing number of segments is presented. The objectives are to generate high-quality plans with few, large and regular segments, and to make the planning process more intuitive. The proposed method combines segment generation with direct step-and-shoot optimization, where leaf positions and segment weights are optimized simultaneously. The segment generation is based on a column generation approach. The method is evaluated on a test suite consisting of five head-and-neck cases and five prostate cases, planned for delivery with an Elekta SLi accelerator. The adjustment of segment shapes by direct step-and-shoot optimization improves the plan quality compared to using fixed segment shapes. The improvement in plan quality when adding segments is larger for plans with few segments. Eventually, adding more segments contributes very little to the plan quality, but increases the plan complexity. Thus, the method provides a tool for controlling the number of segments and, indirectly, the delivery time. This can support the planner in finding a sound trade-off between plan quality and treatment complexity.

  15. Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario

    NASA Astrophysics Data System (ADS)

    Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.

    2009-12-01

    Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.

  16. Novel tools for stepping source brachytherapy treatment planning: Enhanced geometrical optimization and interactive inverse planning

    SciTech Connect

    Dinkla, Anna M.; Laarse, Rob van der; Koedooder, Kees; Kok, H. Petra; Wieringen, Niek van; Pieters, Bradley R.; Bel, Arjan

    2015-01-15

    Purpose: Dose optimization for stepping source brachytherapy can nowadays be performed using automated inverse algorithms. Although much quicker than graphical optimization, an experienced treatment planner is required for both methods. With automated inverse algorithms, the procedure to achieve the desired dose distribution is often based on trial-and-error. Methods: A new approach for stepping source prostate brachytherapy treatment planning was developed as a quick and user-friendly alternative. This approach consists of the combined use of two novel tools: Enhanced geometrical optimization (EGO) and interactive inverse planning (IIP). EGO is an extended version of the common geometrical optimization method and is applied to create a dose distribution as homogeneous as possible. With the second tool, IIP, this dose distribution is tailored to a specific patient anatomy by interactively changing the highest and lowest dose on the contours. Results: The combined use of EGO–IIP was evaluated on 24 prostate cancer patients, by having an inexperienced user create treatment plans, compliant to clinical dose objectives. This user was able to create dose plans of 24 patients in an average time of 4.4 min/patient. An experienced treatment planner without extensive training in EGO–IIP also created 24 plans. The resulting dose-volume histogram parameters were comparable to the clinical plans and showed high conformance to clinical standards. Conclusions: Even for an inexperienced user, treatment planning with EGO–IIP for stepping source prostate brachytherapy is feasible as an alternative to current optimization algorithms, offering speed, simplicity for the user, and local control of the dose levels.

  17. Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation

    NASA Astrophysics Data System (ADS)

    Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao

    2015-12-01

    Spatial join among Web Feature Services (WFS) is time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. By recursive partitioning, the data skew of sub-areas is exploited to reduce data transmission using spatial semi-joins. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
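The profitability test on the filtering rate can be sketched with bounding boxes; the cost model and threshold here are illustrative assumptions, not the paper's:

```python
def intersects(a, b):
    """Axis-aligned bounding boxes as (minx, miny, maxx, maxy)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def filtering_rate(boxes_a, boxes_b):
    """Fraction of B's features pruned by a semi-join against A's boxes:
    a feature whose box meets no box of A cannot join, so it need not be
    encoded (e.g. as GML) and transferred from the remote WFS."""
    pruned = sum(1 for b in boxes_b
                 if not any(intersects(a, b) for a in boxes_a))
    return pruned / len(boxes_b)

def semi_join_profitable(boxes_a, boxes_b, threshold=0.3):
    """Run the semi-join for a sub-area only when enough objects would
    be filtered to pay for the extra round trip."""
    return filtering_rate(boxes_a, boxes_b) > threshold
```

Recursive partitioning would apply this decision per sub-area, so skewed regions with high filtering rates get the semi-join while dense candidate regions skip it.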

  18. Transdermal film-loaded finasteride microplates to enhance drug skin permeation: Two-step optimization study.

    PubMed

    Ahmed, Tarek A; El-Say, Khalid M

    2016-06-10

    The goal was to develop an optimized transdermal finasteride (FNS) film loaded with drug microplates (MIC), utilizing two-step optimization, to decrease the dosing schedule and the inconsistency in gastrointestinal absorption. First, a 3-level factorial design was implemented to prepare optimized FNS-MIC of minimum particle size. Second, a Box-Behnken design matrix was used to develop the optimized transdermal FNS-MIC film. Interaction among MIC components was studied using physicochemical characterization tools. Film components, namely hydroxypropyl methyl cellulose (X1), dimethyl sulfoxide (X2) and propylene glycol (X3), were optimized for their effects on the film thickness (Y1) and elongation percent (Y2), and on FNS steady state flux (Y3), permeability coefficient (Y4), and diffusion coefficient (Y5) following ex-vivo permeation through the rat skin. Morphology of the optimized MIC and transdermal film was also investigated. Results revealed that stabilizer concentration and anti-solvent percent significantly affected MIC formulation. Optimized FNS-MIC of particle size 0.93 μm was successfully prepared, with no interaction observed among its components. An enhancement in the aqueous solubility of FNS-MIC by more than 23% was achieved. All the studied variables and most of their interaction and quadratic effects significantly affected the responses (Y1-Y5). Morphological observation illustrated non-spherical, short rods and flake-like small plates that were homogeneously distributed in the optimized transdermal film. The ex-vivo study showed enhanced FNS permeation from the MIC-loaded film when compared to that containing pure drug. MIC is thus a successful technique to enhance the aqueous solubility and skin permeation of poorly water-soluble drugs, especially when loaded into transdermal films. PMID:26993962

  19. Optimal discrete-time H∞/γ0 filtering and control under unknown covariances

    NASA Astrophysics Data System (ADS)

    Kogan, Mark M.

    2016-04-01

    New stochastic γ0 and mixed H∞/γ0 filtering and control problems for discrete-time systems under completely unknown covariances are introduced and solved. The performance measure γ0 is the worst-case steady-state averaged variance of the error signal in response to the stationary Gaussian white zero-mean disturbance with unknown covariance and identity variance. The performance measure H∞/γ0 is the worst-case power norm of the error signal in response to two input disturbances in different channels, one of which is the deterministic signal with a bounded energy and the other is the stationary Gaussian white zero-mean signal with a bounded variance, provided the weighting sum of disturbance powers equals one. In this framework, it is possible to consider both deterministic and stochastic disturbances at the same time, highlighting their mutual effects. Our main results provide complete characterisations of the above performance measures in terms of linear matrix inequalities, and therefore both the γ0 and H∞/γ0 optimal filters and controllers can be computed by convex programming. The H∞/γ0-optimal solution is shown to be a trade-off between the optimal solutions to the H∞ and γ0 problems for the corresponding channels.

  20. Optimized model of oriented-line-target detection using vertical and horizontal filters

    NASA Astrophysics Data System (ADS)

    Westland, Stephen; Foster, David H.

    1995-08-01

    A line-element target differing sufficiently in orientation from a background of line elements can be detected visually, easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from (1) two classes of anisotropic filters, (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required to provide an accurate description of psychophysical data on orientation increment thresholds.
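
The three-stage model structure (oriented filtering, a pointwise nonlinearity, and a signal term comparing target-present and target-absent images) can be sketched as follows; the filter kernels, image size and the squaring nonlinearity are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np
from scipy.ndimage import convolve

def line_image(angle_deg, size=32):
    """Image of a single oriented line element through the centre."""
    img = np.zeros((size, size))
    c = size // 2
    t = np.linspace(-10, 10, 200)
    a = np.deg2rad(angle_deg)
    rows = np.clip(np.round(c + t * np.sin(a)).astype(int), 0, size - 1)
    cols = np.clip(np.round(c + t * np.cos(a)).astype(int), 0, size - 1)
    img[rows, cols] = 1.0
    return img

# (1) two classes of anisotropic filters, tuned to vertical and horizontal lines
f_vertical = np.array([[-1.0, 2.0, -1.0]] * 3)
f_horizontal = f_vertical.T

def model_response(img):
    # (2) nonlinear point transformation: here, squaring the filter outputs
    rv = convolve(img, f_vertical) ** 2
    rh = convolve(img, f_horizontal) ** 2
    return rv.sum() + rh.sum()

# (3) signal term: response difference between images with and without a target
background = line_image(90)   # all-vertical background element
target = line_image(70)       # element rotated 20 degrees away from vertical
signal = model_response(background) - model_response(target)
print(signal != 0)
```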

  1. Facile, green and clean one-step synthesis of carbon dots from wool: Application as a sensor for glyphosate detection based on the inner filter effect.

    PubMed

    Wang, Long; Bi, Yidan; Hou, Juan; Li, Huiyu; Xu, Yuan; Wang, Bo; Ding, Hong; Ding, Lan

    2016-11-01

    In this work, we report a green route for the fabrication of fluorescent carbon dots (CDs). Wool, a nontoxic natural raw material, was chosen as the precursor to prepare CDs via a one-step microwave-assisted pyrolysis process. Compared with previously reported methods for preparing CDs from biomass materials, this method is simple, facile and free of any additives, such as acids, bases or salts, which avoids the complicated post-treatment needed to purify the CDs. The CDs have a high quantum yield (16.3%) and their fluorescence can be quenched by silver nanoparticles (AgNPs) through the inner filter effect (IFE). The presence of glyphosate induces the aggregation of AgNPs and thus restores the fluorescence of the quenched CDs. Based on this phenomenon, we constructed a fluorescence system (CDs/AgNPs) for the determination of glyphosate. Under the optimized conditions, the fluorescence intensity of the CDs/AgNPs system was proportional to the concentration of glyphosate in the range of 0.025-2.5 μg/mL, with a detection limit of 12 ng/mL. Furthermore, the established method has been successfully used for glyphosate detection in cereal samples with satisfactory results. PMID:27591613

  2. Energetic optimization of ion conduction rate by the K+ selectivity filter

    NASA Astrophysics Data System (ADS)

    Morais-Cabral, João H.; Zhou, Yufeng; MacKinnon, Roderick

    2001-11-01

    The K+ selectivity filter catalyses the dehydration, transfer and rehydration of a K+ ion in about ten nanoseconds. This physical process is central to the production of electrical signals in biology. Here we show how nearly diffusion-limited rates are achieved, by analysing ion conduction and the corresponding crystallographic ion distribution in the selectivity filter of the KcsA K+ channel. Measurements with K+ and its slightly larger analogue, Rb+, lead us to conclude that the selectivity filter usually contains two K+ ions separated by one water molecule. The two ions move in a concerted fashion between two configurations, K+-water-K+-water (1,3 configuration) and water-K+-water-K+ (2,4 configuration), until a third ion enters, displacing the ion on the opposite side of the queue. For K+, the energy difference between the 1,3 and 2,4 configurations is close to zero, the condition of maximum conduction rate. The energetic balance between these configurations is a clear example of evolutionary optimization of protein function.

  3. Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2016-04-01

    This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise condition. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed based on spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate noise covariance. Third, using the coherence magnitudes as weights, the mean of each cluster is estimated with a weighted mean to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method is on par with or better than the non-local interferometric SAR (NL-InSAR) method.
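
The LMMSE (Wiener) patch estimate underlying PLOW has a closed form; a sketch under a known patch mean and covariance (which PLOW itself estimates non-locally from similar patches) and additive Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# LMMSE (Wiener) patch estimate for y = x + noise, noise ~ N(0, s2 I):
#   x_hat = mu + C (C + s2 I)^{-1} (y - mu)
# In PLOW, mu and C come from clusters of similar patches; here they are given.
d = 4
mu = np.zeros(d)
L = rng.normal(size=(d, d))
C = L @ L.T                                     # a valid (PSD) patch covariance
s2 = 0.5                                        # noise variance

x = rng.multivariate_normal(mu, C)              # clean patch
y = x + rng.normal(scale=np.sqrt(s2), size=d)   # noisy observation

W = C @ np.linalg.inv(C + s2 * np.eye(d))
x_hat = mu + W @ (y - mu)

# W is a contraction (eigenvalues lam/(lam + s2) < 1), so the estimate
# shrinks the noisy patch towards the prior mean
print(np.linalg.norm(x_hat - mu) <= np.linalg.norm(y - mu))
```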

  4. Optimization of Signal Decomposition Matched Filtering (SDMF) for Improved Detection of Copy-Number Variations.

    PubMed

    Stamoulis, Catherine; Betensky, Rebecca A

    2016-01-01

    We aim to improve the performance of the previously proposed signal decomposition matched filtering (SDMF) method [26] for the detection of copy-number variations (CNV) in the human genome. Through simulations, we show that the modified SDMF is robust even at high noise levels and outperforms the original SDMF method, which indirectly depends on CNV frequency. Simulations are also used to develop a systematic approach for selecting relevant parameter thresholds in order to optimize sensitivity, specificity and computational efficiency. We apply the modified method to array CGH data from normal samples in The Cancer Genome Atlas (TCGA) and compare detected CNVs to those estimated using circular binary segmentation (CBS) [19], a hidden Markov model (HMM)-based approach [11], and a subset of CNVs in the Database of Genomic Variants. We show that a substantial number of previously identified CNVs are detected by the optimized SDMF, which also outperforms the other two methods. PMID:27295643
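
The core of matched filtering for CNV detection is correlating the (demeaned) probe-level signal with a rectangular template of the expected event length; a toy sketch with synthetic data (signal, template length and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic probe-level signal with one copy-number gain buried in noise
n, start, length, amp = 300, 120, 20, 1.0
signal = rng.normal(0.0, 0.5, n)
signal[start:start + length] += amp

# Matched filter: correlate the demeaned signal with a rectangular template
# whose width matches the expected event length
sig0 = signal - signal.mean()
score = np.correlate(sig0, np.ones(length), mode="valid")
detected = int(np.argmax(score))   # index near `start`
print(detected)
```

In practice SDMF adds the decomposition and threshold-selection machinery described above; thresholding `score` against a noise model is what sensitivity/specificity tuning operates on.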

  5. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol that allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually detected in previous studies, but also new, previously "unknown" reaction pathways that involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920

  6. A simple procedure eliminating multiple optimization steps required in developing multiplex PCR reactions

    SciTech Connect

    Grondin, V.; Roskey, M.; Klinger, K.; Shuber, T.

    1994-09-01

    The PCR technique is one of the most powerful tools in modern molecular genetics and has achieved widespread use in the analysis of genetic diseases. Typically, a region of interest is amplified from genomic DNA or cDNA and examined by various methods of analysis for mutations or polymorphisms. For small genes and transcripts, amplification of single, small regions of DNA is sufficient for analysis. However, when analyzing large genes and transcripts, multiple PCRs may be required to identify the specific mutation or polymorphism of interest. Ever since it was shown that PCR could simultaneously amplify multiple loci in the human dystrophin gene, multiplex PCR has been established as a general technique. The properties of multiplex PCR make it a useful tool, preferable to simultaneous uniplex PCRs in many instances. However, the steps for developing a multiplex PCR can be laborious, with significant difficulty in achieving equimolar amounts of several different amplicons. We have developed a simple method of primer design that has enabled us to eliminate a number of the standard optimization steps required in developing a multiplex PCR. Sequence-specific oligonucleotide pairs were synthesized for the simultaneous amplification of multiple exons within the CFTR gene. A common non-complementary 20-nucleotide sequence was attached to each primer, creating a mixture of primer pairs all containing a universal primer sequence. Multiplex PCR reactions were carried out containing target DNA, a mixture of several chimeric primer pairs, and primers complementary to only the universal portion of the chimeric primers. Following optimization of conditions for the universal primer, only limited optimization was needed for successful multiplex PCR. In contrast, significant optimization of the PCR conditions was needed when pairs of sequence-specific primers were used together without the universal sequence.
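
The primer-design idea can be sketched in a few lines: each locus-specific primer gets the same non-complementary 20-nucleotide tail, so after the first cycles every amplicon carries universal ends and a single universal primer pair, optimized once, drives the whole multiplex. All sequences below are hypothetical placeholders, not the primers used in the study:

```python
# Hypothetical 20-nt non-complementary universal tail
UNIVERSAL = "GTTTCCCAGTCACGACGTTG"

def make_chimeric(primer: str) -> str:
    """Attach the universal tail to a locus-specific primer."""
    return UNIVERSAL + primer

# Hypothetical locus-specific primers (placeholders, not real CFTR primers)
locus_primers = {
    "exonA_F": "ATGACTTCTAATGATGATTATG",
    "exonA_R": "CTCTTCTAGTTGGCATGCTTTG",
    "exonB_F": "GGATTATGCCTGGCACCATTAA",
    "exonB_R": "TTCACCAGATTTCGTAGTCTTG",
}
chimeric = {name: make_chimeric(seq) for name, seq in locus_primers.items()}

# After the first cycles every amplicon carries the universal ends, so one
# universal primer (optimized once) drives the whole multiplex reaction.
print(all(p.startswith(UNIVERSAL) for p in chimeric.values()))
```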

  7. Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing

    NASA Astrophysics Data System (ADS)

    Cox, Mitchell A.

    2015-10-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data, which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by using several ARM Systems-on-Chip in a cluster configuration, allowing aggregated processing performance and data throughput while keeping software design simple for the end user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high-throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.

  8. First laboratory demonstration of closed-loop Kalman based optimal control for vibration filtering and simplified MCAO

    NASA Astrophysics Data System (ADS)

    Petit, C.; Conan, J.-M.; Kulcsár, C.; Raynaud, H.-F.; Fusco, T.; Montri, J.; Rabaud, D.

    2006-06-01

    Classic Adaptive Optics (AO) is now successfully implemented on a growing number of ground-based imaging systems. Nevertheless, some limitations remain. First, the standard AO control laws are unable to easily handle vibrations. In the particular case of eXtreme AO (XAO), which requires a highly efficient AO, these vibrations can be especially penalizing. We have previously shown that a Kalman-based control law can provide both an efficient correction of the turbulence and strong vibration filtering. Second, anisoplanatism effects lead to a small corrected field of view. Multi-Conjugate AO (MCAO) is a promising concept that should significantly increase this field of view. We have shown numerically that MCAO correction can be greatly improved by optimal control based on a Kalman filter. This article presents the first laboratory demonstration of these two concepts. We use a classic AO bench available at Onera with a deformable mirror (DM) in the pupil and a Shack-Hartmann wavefront sensor (WFS) pointing at an on-axis guide star. The turbulence is produced by a rotating phase screen in altitude. First, this AO configuration is used to validate the ability of our control approach to filter out system vibrations and improve the overall performance of the AO closed loop compared to classic controllers. The consequences for the RTC design of an XAO system are discussed. Then, we optimize the correction for an off-axis star while the WFS still points at the on-axis star. This Off-Axis AO (OAAO) can be seen as a first step towards MCAO or Multi-Object AO in a simplified configuration. It proves the ability of our control law to estimate the turbulence in altitude and correct in the direction of interest. We describe the off-axis correction tests performed in a dynamic mode (closed loop) using our Kalman-based control. We present the evolution of the off-axis correction according to the angular separation between the stars. A highly significant

  9. Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.

    PubMed

    Girling, Alan J; Hemming, Karla

    2016-06-15

    In stepped cluster designs the intervention is introduced into some (or all) clusters at different times and persists until the end of the study. Instances include traditional parallel cluster designs and the more recent stepped-wedge designs. We consider the precision offered by such designs under mixed-effects models with fixed time and random subject and cluster effects (including interactions with time), and explore the optimal choice of uptake times. The results apply both to cross-sectional studies where new subjects are observed at each time-point, and longitudinal studies with repeat observations on the same subjects. The efficiency of the design is expressed in terms of a 'cluster-mean correlation' which carries information about the dependency-structure of the data, and two design coefficients which reflect the pattern of uptake-times. In cross-sectional studies the cluster-mean correlation combines information about the cluster-size and the intra-cluster correlation coefficient. A formula is given for the 'design effect' in both cross-sectional and longitudinal studies. An algorithm for optimising the choice of uptake times is described and specific results obtained for the best balanced stepped designs. In large studies we show that the best design is a hybrid mixture of parallel and stepped-wedge components, with the proportion of stepped wedge clusters equal to the cluster-mean correlation. The impact of prior uncertainty in the cluster-mean correlation is considered by simulation. Some specific hybrid designs are proposed for consideration when the cluster-mean correlation cannot be reliably estimated, using a minimax principle to ensure acceptable performance across the whole range of unknown values. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:26748662
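
A sketch of the hybrid-design rule stated above. The form used here for the cluster-mean correlation in a cross-sectional design, R = mρ/(1 + (m − 1)ρ) with cluster size m and intra-cluster correlation ρ, is an assumption of this sketch rather than a formula quoted from the paper:

```python
def cluster_mean_correlation(m: int, icc: float) -> float:
    """Share of a cluster mean's variance attributable to the cluster effect
    (assumed form for a cross-sectional design; a sketch, not a quote)."""
    return m * icc / (1.0 + (m - 1) * icc)

def stepped_wedge_fraction(m: int, icc: float) -> float:
    """Per the abstract: in large studies the best hybrid design assigns a
    proportion of clusters to the stepped-wedge component equal to the
    cluster-mean correlation (the remainder forming a parallel arm)."""
    return cluster_mean_correlation(m, icc)

# Example: clusters of 50 subjects with ICC 0.05
frac = stepped_wedge_fraction(50, 0.05)
print(round(frac, 3))  # 0.725
```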

  10. Determination Method for Optimal Installation of Active Filters in Distribution Network with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Kawasaki, Shoji; Hayashi, Yasuhiro; Matsuki, Junya; Kikuya, Hirotaka; Hojo, Masahide

    Recently, harmonic problems in distribution networks have become a concern against the background of the increasing connection of distributed generation (DG) and the spread of power-electronics equipment. As one countermeasure, controlling the harmonic voltage by installing active filters (AFs) has been studied. In this paper, the authors propose a computation method to determine the optimal allocations, gains and number of AFs so as to minimize the maximum value of voltage total harmonic distortion (THD) in a distribution network with DGs. The developed method is based on particle swarm optimization (PSO), a nonlinear optimization method. In particular, the paper considers the case where harmonic voltage and current arise in the distribution network because many DGs are connected through inverters, and proposes a method for determining the allocation and gain of AFs that suppress harmonics throughout the whole network. The authors also propose a method for determining the minimum number of AFs required, covering the case where the target level of harmonic suppression cannot be reached by a single AF. To verify the validity and effectiveness of the proposed method, numerical simulations are carried out using an analytical model of a distribution network with DGs.
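
A minimal particle swarm optimization loop of the kind the method is built on; the objective here is a stand-in for the real one (maximum voltage THD over the network, evaluated by a harmonic power flow), and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Stand-in for "maximum voltage THD over all nodes" as a function of the
    # AF gains x; the real objective would come from a harmonic power flow.
    return np.max(np.abs(x - np.array([0.3, -0.2])))

# Minimal PSO: inertia w, cognitive c1 and social c2 coefficients
n_particles, n_iter, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(objective(gbest))   # converges towards 0
```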

  11. Rod-filter-field optimization of the J-PARC RF-driven H- ion source

    NASA Astrophysics Data System (ADS)

    Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-01

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and is one of the parameters that most strongly governs beam performance for an RF-driven H- ion source with an internal antenna, no procedure to optimize it had been established. In order to optimize the RFF and establish such a procedure, the beam performance of the J-PARC source was measured with various types of rod-filter magnets (RFMs). By changing the RFMs' gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFMs' cross-section (magnetomotive force) was indispensable for easy operation with the temperature of the plasma electrode (PE), TPE, lower than 70°C, which minimizes the transverse emittances. A 5% reduction of the RFMs' cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation of the PE from several tens of minutes to several minutes for TPE around 60°C.

  12. Real-time defect detection of steel wire rods using wavelet filters optimized by univariate dynamic encoding algorithm for searches.

    PubMed

    Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo

    2012-05-01

    We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and flexibility of the wavelet design, we propose the use of the undecimated discrete wavelet transform and separate design of column and row wavelet filters with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches for the minimum of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939

  13. Ultra-Compact Broadband High-Spurious Suppression Bandpass Filter Using Double Split-end Stepped Impedance Resonators

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Ed; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an ultra-compact, single-layer, spurious-suppression band-pass filter design with the following benefits: 1) the effective coupling area can be increased with no fabrication limitation and no effect on the spurious response; 2) two fundamental poles are introduced to suppress spurs; 3) the filter can be designed with up to 30% bandwidth; 4) the filter length is reduced by at least a factor of two compared to the conventional filter; 5) spurious modes are suppressed up to seven times the fundamental frequency; and 6) it uses only one layer of metallization, which minimizes the fabrication cost.

  14. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is then to select the model tuning parameters so as to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
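
The selection criterion can be sketched numerically: for each candidate tuner subset, form the reduced model and compare the trace of the steady-state error covariance from the filtering Riccati equation. The model below is a hypothetical toy, and unlike the paper's method it simply ignores the un-estimated parameters rather than accounting for their contribution to the error:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 3-state health-parameter model with only 2 sensors: estimating
# all three parameters is under-determined, so compare 2-parameter "tuner"
# subsets by their theoretical steady-state estimation error.
A = np.diag([0.95, 0.9, 0.85])
H_full = np.array([[1.0, 0.5, 0.2],
                   [0.3, 1.0, 0.7]])
Q = 0.01 * np.eye(3)
R = 0.1 * np.eye(2)

def steady_state_mse(tuners):
    """Trace of the steady-state error covariance when only `tuners` are
    estimated (the remaining parameter is simply dropped in this sketch)."""
    idx = list(tuners)
    a = A[np.ix_(idx, idx)]
    h = H_full[:, idx]
    q = Q[np.ix_(idx, idx)]
    # Filtering Riccati equation, via duality with the control DARE
    P = solve_discrete_are(a.T, h.T, q, R)
    return float(np.trace(P))

candidates = [(0, 1), (0, 2), (1, 2)]
best = min(candidates, key=steady_state_mse)
print(best)
```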

  15. Graphics-processor-unit-based parallelization of optimized baseline wander filtering algorithms for long-term electrocardiography.

    PubMed

    Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf

    2015-06-01

    Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processing unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite and infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed on real, autoregressive-modeled baseline wander. A Monte Carlo simulation showed that for low input SBR the moving-median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in the case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 megasamples that is filtered as an entire unit, wavelet filtering of a seven-day high-resolution ECG is computed within less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, with which the computational burden can be strongly reduced. PMID:25675449
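
A moving-median baseline filter of the kind benchmarked above can be sketched in a few lines; the synthetic "ECG", sampling rate and window length are illustrative assumptions:

```python
import numpy as np
from scipy.signal import medfilt

fs = 250                                    # hypothetical sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Crude stand-ins: sparse QRS-like bursts plus slow sinusoidal baseline wander
ecg = np.sin(2 * np.pi * 8 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
wander = 0.8 * np.sin(2 * np.pi * 0.2 * t)
noisy = ecg + wander

# Moving-median baseline estimate: window longer than any ECG wave but shorter
# than the wander period (medfilt requires an odd kernel size)
win = int(0.6 * fs) | 1
baseline = medfilt(noisy, kernel_size=win)
filtered = noisy - baseline

def rms(s):
    return float(np.sqrt(np.mean(s ** 2)))

print(rms(filtered - ecg) < rms(wander))   # residual baseline error shrinks
```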

  16. Optimized design of high-order series coupler Yb3+/Er3+ codoped phosphate glass microring resonator filters

    NASA Astrophysics Data System (ADS)

    Galatus, Ramona; Valles, Juan

    2016-04-01

    An optimized geometry based on high-order active microring resonators (MRRs) is proposed. The solution provides both filtering and amplification of the signal at around 1534 nm (pump: 976 nm). The structure under analysis is a cross-grid resonator with laterally, series-coupled triple microrings of 15.35 μm radius, in a co-propagation topology between signal and pump (commonly termed an add-drop filter).

  17. Optimization of spectral filtering parameters of acousto-optic pure rotational Raman lidar for atmospheric temperature profiling

    NASA Astrophysics Data System (ADS)

    Zhu, Jianhua; Wan, Lei; Nie, Guosheng; Guo, Xiaowei

    2003-12-01

    In this paper, to the best of our knowledge for the first time, a novel acousto-optic pure rotational Raman lidar based on an acousto-optic tunable filter (AOTF) is put forward for atmospheric temperature measurement. The AOTF is employed in the novel lidar system as a narrow band-pass filter and high-speed single-channel wavelength scanner. This new acousto-optic filtering technique can solve the problems of conventional pure rotational Raman lidar, e.g., low temperature-detection sensitivity, untunability of the filtering parameters, and signal interference between different detection channels. This paper focuses on the calculation of the pure rotational Raman spectrum (PRRS) physical model and simulation-based optimization of system parameters such as the central wavelengths and bandwidths of the filtering operation and the required sensitivity. The theoretical calculation and optimization of the AOTF spectral filtering parameters are conducted to achieve high temperature dependence and sensitivity, high signal intensities, high transmission of the filtered spectral passbands, and adequate blocking of the elastic Mie and Rayleigh scattering signals. The simulation results provide a suitable proposal and theoretical evaluation ahead of the integration of a practical Raman lidar system.

  18. Reliably Detecting Clinically Important Variants Requires Both Combined Variant Calls and Optimized Filtering Strategies.

    PubMed

    Field, Matthew A; Cho, Vicky; Andrews, T Daniel; Goodnow, Chris C

    2015-01-01

    A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study first using the high quality 'genome in a bottle' reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, variant caller filter optimization, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
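
The combination rule is simple set algebra over normalized variant calls; a sketch with hypothetical calls from three callers:

```python
# Calls from three hypothetical variant callers on the same sample,
# normalized as (chrom, pos, ref, alt) tuples.
caller_a = {("chr1", 100, "A", "T"), ("chr1", 250, "G", "C"), ("chr2", 40, "C", "G")}
caller_b = {("chr1", 100, "A", "T"), ("chr2", 40, "C", "G"), ("chr3", 77, "T", "A")}
caller_c = {("chr1", 100, "A", "T"), ("chr3", 77, "T", "A")}

# Intersection: highest specificity (every caller must agree)
high_specificity = caller_a & caller_b & caller_c

# Union: highest sensitivity (any caller suffices), minimizing false negatives
high_sensitivity = caller_a | caller_b | caller_c

print(len(high_specificity), len(high_sensitivity))  # 1 4
```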

  19. Reliably Detecting Clinically Important Variants Requires Both Combined Variant Calls and Optimized Filtering Strategies

    PubMed Central

    Field, Matthew A.; Cho, Vicky

    2015-01-01

    A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study first using the high quality ‘genome in a bottle’ reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, variant caller filter optimization, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436

  20. A general sequential Monte Carlo method based optimal wavelet filter: A Bayesian approach for extracting bearing fault features

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Sun, Shilong; Tse, Peter W.

    2015-02-01

    A general sequential Monte Carlo method, particularly a general particle filter, attracts much attention in prognostics recently because it is able to on-line estimate posterior probability density functions of the state functions used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and be used to perform an optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extract bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics and be applied more widely to any other situation in which the optimal wavelet filtering is required.
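
A rough sketch of the idea of weighting wavelet-parameter particles by how well the corresponding band-pass filter exposes impulsive (fault-like) content. Here a Gaussian band-pass and a kurtosis weight stand in for the paper's wavelet and metric, on a synthetic signal; a full particle filter would also resample and iterate these particles:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "bearing" signal: repeated decaying impacts exciting a 2 kHz resonance,
# buried in broadband noise
fs, n = 20_000, 4096
x = rng.normal(0.0, 1.0, n)
k_idx = np.arange(200)
burst = 5 * np.exp(-k_idx / 30.0) * np.sin(2 * np.pi * 2000 * k_idx / fs)
for k in range(0, n - 200, 400):
    x[k:k + 200] += burst

freqs = np.fft.rfftfreq(n, 1 / fs)
X = np.fft.rfft(x)

def kurtosis(y):
    y = y - y.mean()
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2

def filtered_kurtosis(fc, bw):
    """Kurtosis of the signal after a Gaussian band-pass around fc."""
    H = np.exp(-0.5 * ((freqs - fc) / bw) ** 2)
    return kurtosis(np.fft.irfft(X * H, n))

# Particles over (centre frequency, bandwidth), weighted by the metric
fc_p = rng.uniform(500, 8000, 300)
bw_p = rng.uniform(100, 1000, 300)
w = np.array([filtered_kurtosis(fc, bw) for fc, bw in zip(fc_p, bw_p)])
w = w / w.sum()

# The highest-weight particle indicates a promising filter setting
print(float(fc_p[np.argmax(w)]))
```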

  1. Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging

    NASA Astrophysics Data System (ADS)

    Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.

    2012-04-01

    Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for applying DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employs a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation work and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained using the algorithm, results that are promising for our ongoing applications to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.

  2. An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning

    2015-08-01

    An optimized strain demodulation method based on a piezoelectric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and the driving voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the compensated test values and the theoretical values is less than 1 pm/με.

  3. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has increased in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
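
A minimal sketch of the kind of Kalman filtering used for forecast post-processing, assuming a scalar random-walk model of the forecast bias; the noise levels are illustrative and not taken from the paper:

```python
# Toy sketch (assumed noise levels): a scalar Kalman filter tracking the
# systematic bias of a forecast model from its observed errors, the basic
# mechanism behind Kalman-filter-based elimination of model bias.
import random

random.seed(1)

q, r = 1e-4, 0.25        # process / observation noise variances (assumed)
b_hat, p = 0.0, 1.0      # initial bias estimate and its error variance

true_bias = 2.0          # systematic model error we want to learn and remove
for _ in range(200):
    error = true_bias + random.gauss(0.0, 0.5)   # observed forecast error
    p += q                                       # predict: random-walk bias
    k = p / (p + r)                              # Kalman gain
    b_hat += k * (error - b_hat)                 # update bias estimate
    p *= 1.0 - k                                 # update error variance
```

Subtracting `b_hat` from subsequent forecasts removes the learned systematic error while the filter keeps adapting as conditions change.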

  4. Theoretical optimal modulation frequencies for scattering parameter estimation and ballistic photon filtering in diffusing media.

    PubMed

    Panigrahi, Swapnesh; Fade, Julien; Ramachandran, Hema; Alouini, Mehdi

    2016-07-11

    The efficiency of using intensity modulated light for the estimation of scattering properties of a turbid medium and for ballistic photon discrimination is theoretically quantified in this article. Using the diffusion model for modulated photon transport and considering a noisy quadrature demodulation scheme, the minimum-variance bounds on estimation of parameters of interest are analytically derived and analyzed. The existence of a variance-minimizing optimal modulation frequency is shown and its evolution with the properties of the intervening medium is derived and studied. Furthermore, a metric is defined to quantify the efficiency of ballistic photon filtering which may be sought when imaging through turbid media. The analytical derivation of this metric shows that the minimum modulation frequency required to attain significant ballistic discrimination depends only on the reduced scattering coefficient of the medium in a linear fashion for a highly scattering medium. PMID:27410875

  5. Convex optimization-based windowed Fourier filtering with multiple windows for wrapped-phase denoising.

    PubMed

    Yatabe, Kohei; Oikawa, Yasuhiro

    2016-06-10

    The windowed Fourier filtering (WFF), defined as a thresholding operation in the windowed Fourier transform (WFT) domain, is a successful method for denoising a phase map and analyzing a fringe pattern. However, it has some shortcomings, such as extremely high redundancy, which results in high computational cost, and difficulty in selecting an appropriate window size. In this paper, an extension of WFF for denoising a wrapped-phase map is proposed. It is formulated as a convex optimization problem using Gabor frames instead of WFT. Two Gabor frames with differently sized windows are used simultaneously so that the above-mentioned issues are resolved. In addition, a differential operator is combined with a Gabor frame in order to preserve discontinuity of the underlying phase map better. Some numerical experiments demonstrate that the proposed method is able to reconstruct a wrapped-phase map, even for a severely contaminated situation. PMID:27409020

  6. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  7. Optimal synthesis of double-phase computer generated holograms using a phase-only spatial light modulator with grating filter.

    PubMed

    Song, Hoon; Sung, Geeyoung; Choi, Sujin; Won, Kanghee; Lee, Hong-Seok; Kim, Hwi

    2012-12-31

    We propose an optical system for synthesizing double-phase complex computer-generated holograms using a phase-only spatial light modulator and a phase grating filter. Two separated areas of the phase-only spatial light modulator are optically superposed by 4-f configuration with an optimally designed grating filter to synthesize arbitrary complex optical field distributions. The tolerances related to misalignment factors are analyzed, and the optimal synthesis method of double-phase computer-generated holograms is described. PMID:23388811
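
The double-phase idea rests on a standard identity: any complex sample A·exp(iφ) with A ≤ 1 is the average of two phase-only terms with phases φ ± arccos(A). A quick numerical check of that identity (not the authors' optical implementation):

```python
# Double-phase decomposition (standard identity, not the authors' optical
# setup): A*exp(i*phi) with 0 <= A <= 1 equals the mean of the two phase-only
# terms exp(i*(phi + d)) and exp(i*(phi - d)), where d = arccos(A).
import cmath
import math

def double_phase(amplitude, phase):
    d = math.acos(amplitude)           # requires 0 <= amplitude <= 1
    return phase + d, phase - d

amplitude, phase = 0.6, 0.8
t1, t2 = double_phase(amplitude, phase)
recon = 0.5 * (cmath.exp(1j * t1) + cmath.exp(1j * t2))
target = amplitude * cmath.exp(1j * phase)
```

This is why two superposed areas of a phase-only modulator suffice to synthesize an arbitrary complex field, provided the optical superposition (here, the 4-f system with the grating filter) realizes the averaging.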

  8. Filter-feeding and cruising swimming speeds of basking sharks compared with optimal models: they filter-feed slower than predicted for their size.

    PubMed

    Sims

    2000-06-01

    Movements of six basking sharks (4.0-6.5 m total body length, L(T)) swimming at the surface were tracked and horizontal velocities determined. Sharks were tracked for between 1.8 and 55 min, with between 4 and 21 mean speed determinations per shark track. The mean filter-feeding swimming speed was 0.85 m s(-1) (+/-0.05 S.E., n=49 determinations) compared to the non-feeding (cruising) mean speed of 1.08 m s(-1) (+/-0.03 S.E., n=21 determinations). Both absolute (m s(-1)) and specific (L s(-1)) swimming speeds during filter-feeding were significantly lower than when cruise swimming with the mouth closed, indicating that basking sharks select speeds approximately 24% lower when engaged in filter-feeding. This reduction in speed during filter-feeding could be a behavioural response to avoid the increased drag-induced energy costs associated with feeding at higher speeds. Non-feeding basking sharks (4 m L(T)) cruised at speeds close to, but slightly faster (approximately 18%) than, the optimum speed predicted by the Weihs (1977) [Weihs, D., 1977. Effects of size on the sustained swimming speeds of aquatic organisms. In: Pedley, T.J. (Ed.), Scale Effects in Animal Locomotion. Academic Press, London, pp. 333-338.] optimal cruising speed model. In contrast, filter-feeding basking sharks swam between 29 and 39% slower than the speed predicted by the Weihs and Webb (1983) [Weihs, D., Webb, P.W., 1983. Optimization of locomotion. In: Webb, P.W., Weihs, D. (Eds.), Fish Biomechanics. Praeger, New York, pp. 339-371.] optimal filter-feeding model. This significant underestimation of observed feeding speed relative to model predictions was most likely accounted for by surface drag effects reducing the optimum speeds of tracked sharks, together with inaccurate parameter estimates used in the general model to predict optimal speeds of basking sharks from body-size extrapolations. PMID:10817828

  9. Optimization of a one-step heat-inducible in vivo mini DNA vector production system.

    PubMed

    Nafissi, Nafiseh; Sum, Chi Hong; Wettig, Shawn; Slavcev, Roderick A

    2014-01-01

    While safer than their viral counterparts, conventional circular covalently closed (CCC) plasmid DNA vectors offer a limited safety profile. They often result in the transfer of unwanted prokaryotic sequences, antibiotic resistance genes, and bacterial origins of replication that may lead to unwanted immunostimulatory responses. Furthermore, such vectors may impart the potential for chromosomal integration, thus potentiating oncogenesis. Linear covalently closed (LCC), bacterial-sequence-free DNA vectors have shown promising clinical improvements in vitro and in vivo. However, the generation of such minivectors has been limited by in vitro enzymatic reactions, hindering their downstream application in clinical trials. We previously characterized an in vivo temperature-inducible expression system, governed by the phage λ pL promoter and regulated by the thermolabile λ CI[Ts]857 repressor, to produce recombinant protelomerase enzymes in E. coli. In this expression system, induction of recombinant protelomerase was achieved by increasing the culture temperature above the 37°C threshold. Overexpression of protelomerase led to enzymatic reactions acting on genetically engineered multi-target sites called "Super Sequences" that serve to convert conventional CCC plasmid DNA into LCC DNA minivectors. Temperature up-shift, however, can result in intracellular stress responses and may alter plasmid replication rates, both of which may be detrimental to LCC minivector production. We sought to optimize our one-step in vivo DNA minivector production system under various induction schedules in combination with genetic modifications influencing plasmid replication, processing rates, and cellular heat stress responses. We assessed different culture growth techniques, growth media compositions, heat induction scheduling and temperature, induction duration, post-induction temperature, and E. coli genetic background to improve the productivity and scalability of our system.

  10. The first on-site evaluation of a new filter optimized for TARC and developer

    NASA Astrophysics Data System (ADS)

    Umeda, Toru; Ishibashi, Takeo; Nakamura, Atsushi; Ide, Junichi; Nagano, Masaru; Omura, Koichi; Tsuzuki, Shuichi; Numaguchi, Toru

    2008-11-01

    In previous studies, we identified filter properties that have a strong effect on microbubble formation on the downstream side of the filter membrane. A new Highly Asymmetric Polyarylsulfone (HAPAS) filter was developed based on these findings. In the current study, we evaluated the newly developed HAPAS filter with an environmentally preferred non-PFOS TARC in a laboratory setting. Test results confirmed that microbubble counts downstream of the filter were lower than those of a conventional HDPE filter. Further testing in a manufacturing environment confirmed that HAPAS filtration of TARC at the point of use reduced microbubble-caused defectivity on both unpatterned and patterned wafers, compared with an HDPE filter.

  11. Fast Automatic Step Size Estimation for Gradient Descent Optimization of Image Registration.

    PubMed

    Qiao, Yuchuan; van Lew, Baldur; Lelieveldt, Boudewijn P F; Staring, Marius

    2016-02-01

    Fast automatic image registration is an important prerequisite for image-guided clinical procedures. However, due to the large number of voxels in an image and the complexity of registration algorithms, this process is often very slow. Stochastic gradient descent is a powerful method to iteratively solve the registration problem, but relies for convergence on a proper selection of the optimization step size. This selection is difficult to perform manually, since it depends on the input data, similarity measure and transformation model. The Adaptive Stochastic Gradient Descent (ASGD) method is an automatic approach, but it comes at a high computational cost. In this paper, we propose a new computationally efficient method (fast ASGD) to automatically determine the step size for gradient descent methods, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is derived. While ASGD has quadratic complexity with respect to the transformation parameters, fast ASGD has only linear complexity. Extensive validation has been performed on different datasets with different modalities, inter/intra subjects, different similarity measures and transformation models. For all experiments, we obtained similar accuracy to ASGD. Moreover, the estimation time of fast ASGD is reduced to a very small value, from 40 s to less than 1 s when the number of parameters is 10^5, almost 40 times faster. Depending on the registration settings, the total registration time is reduced by a factor of 2.5-7× for the experiments in this paper. PMID:26353367
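
A toy sketch of the underlying idea, choosing each step so the largest per-iteration parameter displacement equals a user-chosen bound delta. This is a deliberate simplification for illustration, not the actual fast-ASGD estimator, and the test problem is assumed:

```python
# Displacement-bounded step-size selection on a badly scaled quadratic
# (illustrative only): cap the update so no parameter moves more than delta
# per iteration, whatever the local gradient magnitudes are.

def grad(x):                         # gradient of f(x) = sum(c_i * x_i^2) / 2
    c = [1.0, 4.0, 25.0]             # assumed, poorly conditioned problem
    return [ci * xi for ci, xi in zip(c, x)]

delta = 0.05                         # maximum allowed displacement per iteration
x = [1.0, 1.0, 1.0]
for _ in range(400):
    g = grad(x)
    gmax = max(abs(gi) for gi in g)
    if gmax == 0.0:
        break
    step = delta / gmax              # displacement-bounded step size
    x = [xi - step * gi for xi, gi in zip(x, g)]

residual = max(abs(xi) for xi in x)  # ends up oscillating at the delta scale
```

The step adapts automatically to the gradient scale of the problem, which is the property that makes manual tuning unnecessary; fast ASGD derives the bound from the observed voxel-displacement distribution instead of a fixed delta.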

  12. Selecting the optimal anti-aliasing filter for multichannel biosignal acquisition intended for inter-signal phase shift analysis.

    PubMed

    Keresnyei, Róbert; Megyeri, Péter; Zidarics, Zoltán; Hejjel, László

    2015-01-01

    The availability of microcomputer-based portable devices facilitates high-volume multichannel biosignal acquisition and the analysis of instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers, or artificially synthesised series, was passed through 100-80-60-40-20 Hz 2-4-6-8th order Bessel and Butterworth filters digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical model of the impulse and step responses of the filters. The investigated filters show signal delays as diverse as 2-46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. The delay's magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. In conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low-pass filtering are recommended for biosignal acquisition intended for inter-signal phase shift analysis. PMID:25514627
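
A back-of-envelope illustration of why mismatched filters matter, using a first-order low-pass stage rather than the 2nd-8th order Bessel/Butterworth designs tested in the paper: the phase delay at a given signal frequency depends strongly on the corner frequency, so channels filtered differently acquire an artificial relative delay of several milliseconds.

```python
# Phase delay of a first-order low-pass filter at a 10 Hz signal component,
# for two different corner frequencies (values assumed for illustration).
import math

def phase_delay_ms(f_signal_hz, f_corner_hz):
    w = 2.0 * math.pi * f_signal_hz
    lag = math.atan(w / (2.0 * math.pi * f_corner_hz))  # phase lag in radians
    return 1000.0 * lag / w                             # delay in milliseconds

d_20 = phase_delay_ms(10.0, 20.0)    # ~7.4 ms with a 20 Hz corner
d_100 = phase_delay_ms(10.0, 100.0)  # ~1.6 ms with a 100 Hz corner
```

The roughly 6 ms difference between the two channels is on the order of the physiological phase shifts of interest, which is why the paper recommends identical filters with higher corner frequencies across all channels.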

  13. Optimal spatial filtering for design of a conformal velocity sonar array

    NASA Astrophysics Data System (ADS)

    Traweek, Charles M.

    In stark contrast to the ubiquitous optimization problem posed in the array processing literature, tactical hull sonar arrays have traditionally been designed using extrapolations of low spatial resolution empirical self-noise data, dominated by hull noise at moderate speeds, in conjunction with assumptions regarding achievable conventional beamformer sidelobe levels by so-called Taylor shading for a time domain, delay-and-sum beamformer. That ad hoc process defaults to an extremely conservative (expensive and heavy) design for an array baffle as a means to assure environmental-noise-limited sonar performance. As an alternative, this dissertation formulates, implements, and demonstrates an objective function that results from the expression of the log likelihood ratio of the optimal Bayesian detector as a comparison to a threshold. Its purpose is to maximize the deflection coefficient of a square-law energy detector over an arbitrarily specified frequency band by appropriate selection of array shading weights for the generalized conformal velocity sonar array under the assumption that it will employ the traditional time domain delay-and-sum beamformer. The restrictive assumptions that must be met in order to appropriately use the deflection coefficient as a performance metric are carefully delineated. A series of conformal velocity sonar array spatial filter optimization problems was defined using a data set characterized by spatially complex structural noise from a large aperture conformal velocity sonar array experiment. The detection performance of an 80-element cylindrical array was optimized over a reasonably broad range of frequencies (from k0a = 12.95 to k0a = 15.56) for the cases of broadside and off-broadside signal incidence. In each case, performance of the array using optimal real-valued time domain delay-and-sum beamformer weights was much better than that achieved for either uniform shading or for Taylor shading. The result is an analytical engine

  14. Filtering for networked control systems with single/multiple measurement packets subject to multiple-step measurement delays and multiple packet dropouts

    NASA Astrophysics Data System (ADS)

    Moayedi, Maryam; Foo, Yung Kuan; Chai Soh, Yeng

    2011-03-01

    The minimum-variance filtering problem in networked control systems, where both random measurement transmission delays and packet dropouts may occur, is investigated in this article. Instead of following the many existing results that solve the problem by using probabilistic approaches based on the probabilities of the uncertainties occurring between the sensor and the filter, we propose a non-probabilistic approach by time-stamping the measurement packets. Both single-measurement and multiple measurement packets are studied. We also consider the case of burst arrivals, where more than one packet may arrive between the receiver's previous and current sampling times; the scenario where the control input is non-zero and subject to delays and packet dropouts is examined as well. It is shown that, in such a situation, the optimal state estimate would generally be dependent on the possible control input. Simulations are presented to demonstrate the performance of the various proposed filters.
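
The time-stamping idea can be sketched with a tiny reconciliation buffer (the interface below is hypothetical): the estimator keeps the latest packet per sampling instant, so reordered, duplicated, or dropped packets are handled without any probabilistic model of the network.

```python
# Minimal sketch of time-stamped measurement reconciliation: no delay or
# dropout probabilities are needed, only the timestamps carried by packets.

def reconcile(packets, horizon):
    """packets: iterable of (timestamp, value). Returns the measurement for
    each sampling instant 0..horizon-1, or None where every packet was lost."""
    latest = {}
    for t, v in packets:
        latest[t] = v            # a duplicated or re-sent packet just overwrites
    return [latest.get(t) for t in range(horizon)]

# Packets arrive out of order; t=2 is lost entirely; t=1 arrives twice.
stream = [(0, 1.0), (3, 4.2), (1, 2.1), (1, 2.1), (4, 5.0)]
history = reconcile(stream, 5)   # [1.0, 2.1, None, 4.2, 5.0]
```

A filter fed from such a buffer can run its measurement update only at instants where a value is present and propagate the prediction through the `None` gaps, which is the deterministic behavior the non-probabilistic approach exploits.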

  15. A Compact Symmetric Microstrip Filter Based on a Rectangular Meandered-Line Stepped Impedance Resonator with a Triple-Band Bandstop Response

    PubMed Central

    Kim, Nam-Young

    2013-01-01

    This paper presents a symmetric-type microstrip triple-band bandstop filter incorporating a tri-section meandered-line stepped impedance resonator (SIR). The length of each section of the meandered line is 0.16, 0.15, and 0.83 times the guided wavelength (λg), so that the filter features three stop bands at 2.59 GHz, 6.88 GHz, and 10.67 GHz, respectively. Two symmetric SIRs are employed with a microstrip transmission line to obtain wide bandwidths of 1.12, 1.34, and 0.89 GHz at the corresponding stop bands. Furthermore, an equivalent circuit model of the proposed filter is developed, and the model matches the electromagnetic simulations well. The return losses of the fabricated filter are measured to be −29.90 dB, −28.29 dB, and −26.66 dB while the insertion losses are 0.40 dB, 0.90 dB, and 1.10 dB at the respective stop bands. A drastic reduction in the size of the filter was achieved by using a simplified architecture based on a meandered-line SIR. PMID:24319367

  16. Compressive Bilateral Filtering.

    PubMed

    Sugimoto, Kenjiro; Kamata, Sei-Ichiro

    2015-11-01

    This paper presents an efficient constant-time bilateral filter, called a compressive bilateral filter (CBLF), that produces a near-optimal tradeoff between approximation accuracy and computational complexity without any complicated parameter adjustment. Constant-time means that the computational complexity is independent of the filter window size. Although many constant-time bilateral filters have been proposed, step by step, in pursuit of a more efficient performance tradeoff, they have focused less on the optimal tradeoff for their own frameworks. This question is important to discuss because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet reached the optimal tradeoff. The CBLF achieves a near-optimal performance tradeoff through two key ideas: 1) an approximate Gaussian range kernel derived through Fourier analysis and 2) a period-length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximation accuracy, computational complexity, and usability. PMID:26068315
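
For reference, here is the naive bilateral filter that constant-time schemes approximate, shown in 1-D with illustrative parameters: its cost grows with the window radius, which is exactly the dependence a constant-time formulation removes.

```python
# Naive 1-D bilateral filter: O(radius) work per sample. The spatial kernel
# smooths, while the range kernel suppresses averaging across large intensity
# jumps, preserving edges.
import math

def bilateral_1d(x, radius, sigma_s, sigma_r):
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = (math.exp(-0.5 * ((i - j) / sigma_s) ** 2)           # spatial
                 * math.exp(-0.5 * ((x[i] - x[j]) / sigma_r) ** 2))  # range
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

# Noise around a 0 -> 10 step edge: the noise is smoothed while the
# discontinuity stays sharp.
signal = [0.0, 0.2, -0.1, 0.1, 10.0, 9.9, 10.2, 10.0]
smoothed = bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=1.0)
```

The CBLF's contribution is to approximate the Gaussian range kernel with a short Fourier expansion so the same result is computed without the per-sample window loop.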

  17. SVD-based optimal filtering for noise reduction in dual microphone hearing aids: a real time implementation and perceptual evaluation.

    PubMed

    Maj, Jean-Baptiste; Royackers, Liesbeth; Moonen, Marc; Wouters, Jan

    2005-09-01

    In this paper, the first real-time implementation and perceptual evaluation of a singular value decomposition (SVD)-based optimal filtering technique for noise reduction in a dual-microphone behind-the-ear (BTE) hearing aid is presented. This evaluation was carried out for speech-weighted noise and multitalker babble, for single and multiple jammer sound source scenarios. Two basic microphone configurations in the hearing aid were used. The SVD-based optimal filtering technique was compared against an adaptive beamformer, which is known to give significant improvements in speech intelligibility in noisy environments. The optimal filtering technique works without assumptions about speaker position, unlike the two-stage adaptive beamformer; however, this strategy needs a robust voice activity detector (VAD). A method to improve the performance of the VAD was presented and evaluated physically. By connecting the VAD to the output of the noise reduction algorithms, a good discrimination between the speech-and-noise periods and the noise-only periods of the signals was obtained. The perceptual experiments demonstrated that the SVD-based optimal filtering technique could perform as well as the adaptive beamformer in a single noise source scenario, i.e., the ideal scenario for the latter technique, and could outperform the adaptive beamformer in multiple noise source scenarios. PMID:16189969
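
The core of SVD-based optimal filtering is a low-rank projection that keeps the dominant signal subspace and discards the rest. A hedged toy version of that idea (not the hearing-aid algorithm itself): rank-1 truncation of a noisy rank-1 data matrix, with the leading singular pair found by simple power iteration; all sizes and noise levels are assumed.

```python
# SVD-style denoising as rank-1 truncation, using power iteration on X^T X
# to find the dominant right singular vector (pure stdlib, illustrative only).
import math
import random

random.seed(2)
m, n, noise = 40, 30, 0.1
u = [math.sin(0.3 * i) for i in range(m)]          # "signal" row profile
v = [1.0 + 0.1 * j for j in range(n)]              # "signal" column profile
X = [[u[i] * v[j] + random.gauss(0.0, noise) for j in range(n)] for i in range(m)]

w = [1.0] * n                                      # power iteration for the top
for _ in range(50):                                # right singular vector of X
    t = [sum(X[i][j] * w[j] for j in range(n)) for i in range(m)]   # X w
    w = [sum(X[i][j] * t[i] for i in range(m)) for j in range(n)]   # X^T (X w)
    nrm = math.sqrt(sum(wi * wi for wi in w))
    w = [wi / nrm for wi in w]

s = [sum(X[i][j] * w[j] for j in range(n)) for i in range(m)]  # X w = sigma*u1
X1 = [[s[i] * w[j] for j in range(n)] for i in range(m)]       # rank-1 estimate

def err(A):  # Frobenius distance to the clean rank-1 signal
    return math.sqrt(sum((A[i][j] - u[i] * v[j]) ** 2
                         for i in range(m) for j in range(n)))

err_noisy, err_denoised = err(X), err(X1)          # truncation reduces the error
```

In the hearing-aid setting the data matrix is built from multichannel microphone frames, and the retained subspace is chosen using VAD-labeled speech and noise statistics rather than a fixed rank.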

  18. Optimization of leaf margins for lung stereotactic body radiotherapy using a flattening filter-free beam

    SciTech Connect

    Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi

    2015-05-15

    Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for a 7 MV FFF and a 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). Forty grays per four fractions were prescribed as a PTV D95. For PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased (“V” shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared

  19. Geometric optimization of a step bearing for a hydrodynamically levitated centrifugal blood pump for the reduction of hemolysis.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2013-09-01

    A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis. PMID:23834855
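
The classical 1-D Rayleigh step bearing admits a closed-form load capacity, which makes the existence of an interior optimal step length easy to see. This textbook model, with assumed gap heights, viscosity, and speed, is a simplification of the pump's actual geometry, but for the chosen gap ratio its optimum also falls in the 70-75% range, consistent with the abstract's finding:

```python
# Textbook 1-D Rayleigh step bearing (assumed parameter values): load capacity
# as a function of the step-length fraction exhibits an interior optimum,
# i.e. neither a vanishing nor a full-length step maximizes the force.

def step_bearing_load(frac, B=1.0, h1=2e-5, h2=1e-5, mu=3.5e-3, U=5.0):
    """Load per unit width; frac = deep (step) region length / total length."""
    B1, B2 = frac * B, (1.0 - frac) * B
    if B1 == 0.0 or B2 == 0.0:
        return 0.0                       # no step -> no hydrodynamic pressure
    # Peak pressure at the step, from matching Reynolds-equation flow rates
    # in the two constant-gap regions.
    p_step = 6.0 * mu * U * (h1 - h2) * B1 * B2 / (B2 * h1 ** 3 + B1 * h2 ** 3)
    return 0.5 * p_step * (B1 + B2)      # triangular pressure profile

fracs = [i / 100.0 for i in range(101)]
loads = [step_bearing_load(f) for f in fracs]
best = fracs[loads.index(max(loads))]    # interior optimum, near 0.74 here
```

The pump study reaches its ~70% figure via a full numerical lubrication analysis of the actual impeller geometry; the closed-form model above only illustrates why such an interior optimum should exist.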

  20. Numerical simulation of an industrial microwave assisted filter dryer: criticality assessment and optimization.

    PubMed

    Leonelli, Cristina; Veronesi, Paolo; Grisoni, Fabio

    2007-01-01

    Industrial-scale filter dryers, equipped with one or more microwave input ports, have been modelled with the aim of detecting existing criticalities, proposing possible solutions and optimizing the overall system efficiency and treatment homogeneity. Three different loading conditions have been simulated, namely the empty applicator, the applicator partially loaded by both a high-loss and low loss load whose dielectric properties correspond to the one measured on real products. Modeling results allowed for the implementation of improvements to the original design such as the insertion of a wave guide transition and a properly designed pressure window, modification of the microwave inlet's position and orientation, alteration of the nozzles' geometry and distribution, and changing of the cleaning metallic torus dimensions and position. Experimental testing on representative loads, as well as in production sites, allowed for the confirmation of the validity of the implemented improvements, thus showing how numerical simulation can assist the designer in removing critical features and improving equipment performances when moving from conventional heating to hybrid microwave-assisted processing. PMID:18350999

  1. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  3. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  4. Removing spurious signals from glow curves using an optimal Wiener filter.

    PubMed

    van Dijk, J W E; Stadtmann, H; Grimbergen, T W M

    2011-03-01

    During readout, the signal of the TLD is occasionally polluted with spurious signals. These most often take the shape of a spike on the glow curve. Often these spikes are only a few milliseconds wide but can have a height that significantly influences the outcome of the dose evaluation. The detection of spikes generally relies on comparing the raw glow curve with a smoothed version of it. A spike is detected when the height of the glow curve exceeds that of the smoothed curve, using criteria based on the absolute and relative differences. The proposed procedure is based on smoothing by an optimal Wiener filter, which, in turn, is based on Fourier analysis, for which numerically very efficient methods are available. Apart from having easy-to-understand tuning parameters, an attractive bonus is that, with only little additional computational effort, estimates of the position of peak maxima are found from second and third derivatives: a useful feature for glow curve quality control. PMID:21450703
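
The smoothing-plus-thresholding idea can be sketched in a few lines of numpy. The Wiener gain, the noise-floor estimate, and the spike thresholds below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

# Synthetic glow curve (a single broad peak) polluted by one narrow spike.
t = np.linspace(0.0, 1.0, 512)
glow = np.exp(-((t - 0.5) / 0.1) ** 2)
raw = glow.copy()
raw[300] += 0.8                                   # spurious spike

# Wiener-style smoothing in the Fourier domain: gain S/(S+N), with the
# noise floor N estimated from the high-frequency tail of the spectrum.
F = np.fft.rfft(raw)
P = np.abs(F) ** 2
N = np.median(P[3 * len(P) // 4:])
gain = np.maximum(0.0, (P - N) / np.maximum(P, 1e-30))
smoothed = np.fft.irfft(gain * F, n=len(raw))

# Spike detection: raw exceeds smoothed by absolute and relative margins.
spikes = np.where((raw - smoothed > 0.2) & (raw > 1.2 * smoothed))[0]
```

With these illustrative thresholds, the inserted spike at sample 300 is flagged while the genuine glow peak, which the smoothed curve follows closely, is left untouched.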

  5. Selection of plants for optimization of vegetative filter strips treating runoff from turfgrass.

    PubMed

    Smith, Katy E; Putnam, Raymond A; Phaneuf, Clifford; Lanza, Guy R; Dhankher, Om P; Clark, John M

    2008-01-01

    Runoff from turf environments, such as golf courses, is of increasing concern due to the associated chemical contamination of lakes, reservoirs, rivers, and ground water. Pesticide runoff due to fungicides, herbicides, and insecticides used to maintain golf courses in acceptable playing condition is a particular concern. One possible approach to mitigate such contamination is through the implementation of effective vegetative filter strips (VFS) on golf courses and other recreational turf environments. The objective of the current study was to screen ten aesthetically acceptable plant species for their ability to remove four commonly-used and degradable pesticides: chlorpyrifos (CP), chlorothalonil (CT), pendimethalin (PE), and propiconazole (PR) from soil in a greenhouse setting, thus providing invaluable information as to the species composition that would be most efficacious for use in VFS surrounding turf environments. Our results revealed that blue flag iris (Iris versicolor) (76% CP, 94% CT, 48% PE, and 33% PR were lost from soil after 3 mo of plant growth), eastern gama grass (Tripsacum dactyloides) (47% CP, 95% CT, 17% PE, and 22% PR were lost from soil after 3 mo of plant growth), and big blue stem (Andropogon gerardii) (52% CP, 91% CT, 19% PE, and 30% PR were lost from soil after 3 mo of plant growth) were excellent candidates for the optimization of VFS as buffer zones abutting turf environments. Blue flag iris was most effective at removing selected pesticides from soil and had the highest aesthetic value of the plants tested. PMID:18689747

  6. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, either for a single-stage mismatched filter or for a two-stage approach, i.e. a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the greater the logic resources used in the FPGAs; this often becomes a design challenge for system-on-chip (SoC) requirements. The number of multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using cluster centroids as tap weights greatly reduces the logic used in the FPGA for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs, producing different clusterings of the weights; occasionally a smaller number of multipliers and a shorter filter can even provide a better PSR.
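
The tap-weight clustering idea reads, in a minimal numpy sketch, as follows. Random weights stand in for a real mismatched-filter design, and plain Lloyd's algorithm plays the role of k-means:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mismatched-filter tap weights (random values for illustration).
w = rng.normal(size=64)

# Lloyd's algorithm on the scalar weights: after clustering, every tap is
# replaced by its cluster centroid, so the FIR filter needs at most k
# distinct multipliers instead of 64.
k = 8
centroids = np.sort(rng.choice(w, size=k, replace=False))
for _ in range(50):
    labels = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = w[labels == j].mean()

w_q = centroids[labels]                        # quantized tap weights
```

In hardware, the filter then accumulates partial sums per cluster and applies only `k` multiplications, at the cost of the quantization error `w - w_q`, which is what degrades the PSR by a few dB.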

  7. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  8. Filtering of Defects in Semipolar (11−22) GaN Using 2-Steps Lateral Epitaxial Overgrowth

    PubMed Central

    2010-01-01

    Good-quality (11−22) semipolar GaN sample was obtained using epitaxial lateral overgrowth. The growth conditions were chosen to enhance the growth rate along the [0001] inclined direction. Thus, the coalescence boundaries stop the propagation of basal stacking faults. The faults filtering and the improvement of the crystalline quality were attested by transmission electron microscopy and low temperature photoluminescence. The temperature dependence of the luminescence polarization under normal incidence was also studied. PMID:21170140

  9. Optimization of synthesis and peptization steps to obtain iron oxide nanoparticles with high energy dissipation rates

    NASA Astrophysics Data System (ADS)

    Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos

    2015-11-01

    Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, preferable due to their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP=2.3 nH m2 kg-1) were obtained, which represent one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP=1.6 nH m2 kg-1) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.
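
The ILP figures quoted above follow directly from normalizing the specific absorption rate by field amplitude squared and frequency, ILP = SAR/(H²f). A quick numerical check of the reported values:

```python
# Field conditions quoted in the abstract.
H = 36.5e3                 # field amplitude, A/m
f = 341e3                  # frequency, Hz

# SAR values reported (W/g_Fe), converted to W/kg_Fe before normalizing.
for sar_w_per_g in (1048.0, 719.0):
    sar = sar_w_per_g * 1e3              # W/g_Fe -> W/kg_Fe
    ilp = sar / (H**2 * f)               # H m^2 / kg
    print(f"SAR {sar_w_per_g:.0f} W/g  ->  ILP {ilp * 1e9:.1f} nH m^2/kg")
```

This reproduces the ILP values of 2.3 and 1.6 nH m² kg⁻¹ stated in the abstract for the liquid and solid-matrix cases, respectively.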

  10. Toward an Optimal Position for IVC Filters: Computational Modeling of the Impact of Renal Vein Inflow

    SciTech Connect

    Wang, S L; Singer, M A

    2009-07-13

    The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

  11. Optimizing the anode-filter combination in the sense of image quality and average glandular dose in digital mammography

    NASA Astrophysics Data System (ADS)

    Varjonen, Mari; Strömmer, Pekka

    2008-03-01

    This paper presents the optimized image quality and average glandular dose in digital mammography, and provides recommendations concerning anode-filter combinations in digital mammography, which is based on amorphous selenium (a-Se) detector technology. The full field digital mammography (FFDM) system based on a-Se technology, which is also a platform of tomosynthesis prototype, was used in this study. X-ray tube anode-filter combinations, which we studied, were tungsten (W) - rhodium (Rh) and tungsten (W) - silver (Ag). Anatomically adaptable fully automatic exposure control (AAEC) was used. The average glandular doses (AGD) were calculated using a specific program developed by Planmed, which automates the method described by Dance et al. Image quality was evaluated in two different ways: a subjective image quality evaluation, and contrast and noise analysis. By using W-Rh and W-Ag anode-filter combinations can be achieved a significantly lower average glandular dose compared with molybdenum (Mo) - molybdenum (Mo) or Mo-Rh. The average glandular dose reduction was achieved from 25 % to 60 %. In the future, the evaluation will concentrate to study more filter combinations and the effect of higher kV (>35 kV) values, which seems be useful while optimizing the dose in digital mammography.

  12. Nature-inspired optimization of quasicrystalline arrays and all-dielectric optical filters and metamaterials

    NASA Astrophysics Data System (ADS)

    Namin, Frank Farhad A.

    (photonic resonance) and the plasmonic response of the spheres (plasmonic resonance). In particular the couplings between the photonic and plasmonic modes are studied. In periodic arrays this coupling leads to the formation of a so called photonic-plasmonic hybrid mode. The formation of hybrid modes is studied in quasicrystalline arrays. Quasicrystalline structures in essence possess several periodicities which in some cases can lead to the formation of multiple hybrid modes with wider bandwidths. It is also demonstrated that the performance of these arrays can be further enhanced by employing a perturbation method. The second property considered is local field enhancements in quasicrystalline arrays of gold nanospheres. It will be shown that despite a considerably smaller filling factor quasicrystalline arrays generate larger local field enhancements which can be even further enhanced by optimally placing perturbing spheres within the prototiles that comprise the aperiodic arrays. The second thrust of research in this dissertation focuses on designing all-dielectric filters and metamaterial coatings for the optical range. In higher frequencies metals tend to have a high loss and thus they are not suitable for many applications. Hence dielectrics are used for applications in optical frequencies. In particular we focus on designing two types of structures. First a near-perfect optical mirror is designed. The design is based on optimizing a subwavelength periodic dielectric grating to obtain appropriate effective parameters that will satisfy the desired perfect mirror condition. Second, a broadband anti-reflective all-dielectric grating with wide field of view is designed. The second design is based on a new computationally efficient genetic algorithm (GA) optimization method which shapes the sidewalls of the grating based on optimizing the roots of polynomial functions.

  13. Near-Diffraction-Limited Operation of Step-Index Large-Mode-Area Fiber Lasers Via Gain Filtering

    SciTech Connect

    Marciante, J.R.; Roides, R.G.; Shkunov, V.V.; Rockwell, D.A.

    2010-06-04

    We present, for the first time to our knowledge, an explicit experimental comparison of beam quality in conventional and confined-gain multimode fiber lasers. In the conventional fiber laser, beam quality degrades with increasing output power. In the confined-gain fiber laser, the beam quality is good and does not degrade with output power. Gain filtering of higher-order modes in 28 μm diameter core fiber lasers is demonstrated with a beam quality of M^2 = 1.3 at all pumping levels. Theoretical modeling is shown to agree well with experimentally observed trends.

  14. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity--in this case, mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm{sup 2} iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components - acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  15. Focusing time harmonic scalar fields in non-homogenous lossy media: Inverse filter vs. constrained power focusing optimization

    NASA Astrophysics Data System (ADS)

    Iero, D. A. M.; Isernia, T.; Crocco, L.

    2013-08-01

    Two strategies to focus time harmonic scalar fields in known inhomogeneous lossy media are compared. The first one is the Inverse Filter (IF) method, which faces the focusing task as the synthesis of a nominal field. The second one is the Constrained Power Focusing Optimization (CPFO) method, which tackles the problem as a mask-constrained power optimization. Numerical examples representative of focusing in noninvasive microwave hyperthermia are provided to show that CPFO is able to outperform IF, thanks to the additional degrees of freedom arising from the adopted power synthesis formulation.

  16. Optimal Scaling of Filtered GRACE dS/dt Anomalies over Sacramento and San Joaquin River Basins, California

    NASA Astrophysics Data System (ADS)

    Ukasha, M.; Ramirez, J. A.

    2014-12-01

    Signals from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission mapping the time-variant Earth's gravity field are degraded by measurement and leakage errors. Dampening these errors using different filters results in a modification of the true geophysical signals; therefore, the use of a scale factor is suggested to recover the modified signals. For basin-averaged dS/dt anomalies computed from data available at the University of Colorado GRACE data analysis website - http://geoid.colorado.edu/grace/, optimal time-invariant and time-variant scale factors for the Sacramento and San Joaquin river basins, California, are derived using observed precipitation (P), runoff (Q) and evapotranspiration (ET). Using the derived optimal scaling factor for GRACE data filtered with a 300-km-wide Gaussian filter resulted in scaled GRACE dS/dt anomalies that match the observed dS/dt anomalies (P-ET-Q) better than the GRACE dS/dt anomalies computed from the scaled GRACE product at the University of Colorado GRACE data analysis website. This paper will present the procedure, the optimal values, and the statistical analysis of the results.
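
The time-invariant scale factor is simply the least-squares scalar fitting the filtered GRACE series to the water-balance estimate P-ET-Q. A toy numpy sketch with synthetic series (not real GRACE data; the damping factor 0.6 mimics signal loss from filtering):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative monthly series: "observed" dS/dt from the water balance
# P - ET - Q, and a filtered GRACE series that is damped and noisy.
dSdt_obs = rng.normal(size=120)                       # P - ET - Q
dSdt_grace = 0.6 * dSdt_obs + 0.1 * rng.normal(size=120)

# Time-invariant optimal scale factor: minimize || obs - s * grace ||^2,
# whose closed-form solution is the projection coefficient below.
s_opt = np.dot(dSdt_obs, dSdt_grace) / np.dot(dSdt_grace, dSdt_grace)
```

With the damping factor of 0.6 used here, the recovered scale factor comes out near 1/0.6 ≈ 1.67, i.e. it restores the amplitude the filter removed.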

  17. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90 % of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition. PMID:26681183
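
Step (4) of such a design can be caricatured as a greedy loop of scalar Kalman measurement updates: repeatedly add the candidate well whose update most reduces the total estimate-error variance. The covariance model and all numbers below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)
npts = 30                                      # grid points = candidate wells

# Hypothetical prior covariance of one water-quality parameter over the
# aquifer: squared-exponential in distance (illustrative geostatistical model).
pts = rng.uniform(0.0, 10.0, size=(npts, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
P = np.exp(-((d / 3.0) ** 2)) + 1e-6 * np.eye(npts)
sigma2 = 0.1                                   # measurement-noise variance

trace0 = np.trace(P)
chosen = []
for _ in range(5):                             # pick 5 monitoring wells greedily
    best_j, best_tr = None, np.inf
    for j in range(npts):
        if j in chosen:
            continue
        # Kalman update for a scalar measurement at candidate point j.
        g = P[:, j] / (P[j, j] + sigma2)
        tr = np.trace(P - np.outer(g, P[j, :]))
        if tr < best_tr:
            best_j, best_tr = j, tr
    g = P[:, best_j] / (P[best_j, best_j] + sigma2)
    P = P - np.outer(g, P[best_j, :])
    chosen.append(best_j)
```

Priority zones would enter by weighting the trace; the paper's actual heuristic optimizer and multi-parameter weighting are more elaborate than this greedy sketch.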

  18. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography

    NASA Astrophysics Data System (ADS)

    Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filter performance on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebyshev Type II filters with zero-phase and stable impulse response; the most commonly used band-pass filters were implemented by combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals were evaluated. The study comprised 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter setting might be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz.
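
A zero-phase Chebyshev Type II band-pass of the kind described can be sketched with scipy (assumed available; the sampling rate, filter order and stopband attenuation below are illustrative choices, not the study's):

```python
import numpy as np
from scipy import signal

fs = 1000.0                                    # sampling rate, Hz (assumption)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic trace: a 10 Hz in-band tone plus 0.3 Hz baseline drift and
# 150 Hz out-of-band interference.
x = (np.sin(2 * np.pi * 10 * t)
     + 2.0 * np.sin(2 * np.pi * 0.3 * t)
     + 0.5 * np.sin(2 * np.pi * 150 * t))

# 4th-order Chebyshev Type II band-pass, 40 dB stopband attenuation, with
# stopband edges at 1 and 100 Hz; sosfiltfilt runs the filter forward and
# backward, giving the zero-phase response the abstract calls for.
sos = signal.cheby2(4, 40, [1.0, 100.0], btype="bandpass", output="sos", fs=fs)
y = signal.sosfiltfilt(sos, x)
```

Forward-backward filtering doubles the effective attenuation and cancels the phase response, so the QRS morphology is not skewed, which is the reason zero-phase designs matter for fMCG.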

  19. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  20. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  1. Optimization of the filter parameters in (99m)Tc myocardial perfusion SPECT studies: the formulation of flowchart.

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Yamada, Tomoki; Kamida, Hiroki; Kunishita, Kohei; Hayashi, Yuuki; Nakajima, Tadashi; Kinuya, Seigo

    2016-06-01

    Myocardial perfusion single photon emission computed tomography (SPECT) is typically subject to variation in image quality due to the use of different acquisition protocols, image reconstruction parameters and image display settings at each institution. One of the principal image reconstruction parameters is the Butterworth filter cut-off frequency, a parameter strongly affecting the quality of myocardial images. The objective of this study was to formulate a flowchart for the determination of the optimal parameters of the Butterworth filter for the filtered back projection (FBP), ordered subset expectation maximization (OSEM) and collimator-detector response compensation OSEM (CDR-OSEM) methods, using a phantom-based evaluation system for myocardial images. SPECT studies were acquired for seven simulated defects in which the average counts of the normal myocardial components of 45° left anterior oblique projections were approximately 10-120 counts/pixel. These SPECT images were then reconstructed by the FBP, OSEM and CDR-OSEM methods. Visual and quantitative assessments of short axis images were performed for the defect and normal parts. Finally, we formulated a flowchart indicating the optimal image processing procedure for SPECT images. The correlation between normal myocardial counts and the optimal cut-off frequency could be represented as a regression expression with a high or medium coefficient of determination. We formulated the flowchart to optimize the image reconstruction parameters based on a comprehensive assessment, which enabled us to perform processing objectively. Furthermore, the usefulness of image reconstruction using the flowchart was demonstrated by a clinical case. PMID:27052439
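
The reconstruction step being tuned can be sketched as a frequency-domain Butterworth low-pass whose cut-off is picked from the count level via a fitted regression. Both the gain formula and the (entirely made-up) regression coefficients below are illustrative:

```python
import numpy as np

def butterworth_lowpass(profile, cutoff, order=8, dx=1.0):
    """Frequency-domain Butterworth gain 1 / (1 + (f/fc)^(2n)),
    of the kind typically applied to SPECT projection data."""
    F = np.fft.rfft(profile)
    f = np.fft.rfftfreq(len(profile), d=dx)
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.fft.irfft(H * F, n=len(profile))

def optimal_cutoff(counts_per_pixel):
    # Hypothetical linear regression from normal-myocardium counts to the
    # cut-off (cycles/pixel); placeholder coefficients, not the fitted ones.
    return 0.25 + 0.002 * counts_per_pixel

rng = np.random.default_rng(3)
noisy = 1.0 + 0.3 * rng.normal(size=128)       # flat profile + Poisson-like noise
smooth = butterworth_lowpass(noisy, optimal_cutoff(60.0))
```

The flowchart idea is exactly this dependence: lower-count (noisier) studies get a lower cut-off, trading resolution for noise suppression, while high-count studies keep more high-frequency detail.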

  2. Single-channel noise reduction using unified joint diagonalization and optimal filtering

    NASA Astrophysics Data System (ADS)

    Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-12-01

    In this paper, the important problem of single-channel noise reduction is treated from a new perspective. The problem is posed as a filtering problem based on joint diagonalization of the covariance matrices of the desired and noise signals. More specifically, the eigenvectors from the joint diagonalization corresponding to the least significant eigenvalues are used to form a filter, which effectively estimates the noise when applied to the observed signal. This estimate is then subtracted from the observed signal to form an estimate of the desired signal, i.e., the speech signal. In doing this, we consider two cases, where, respectively, no distortion and distortion are incurred on the desired signal. The former can be achieved when the covariance matrix of the desired signal is rank deficient, which is the case, for example, for voiced speech. In the latter case, the covariance matrix of the desired signal is full rank, as is the case, for example, in unvoiced speech. Here, the amount of distortion incurred is controlled via a simple, integer parameter, and the more distortion allowed, the higher the output signal-to-noise ratio (SNR). Simulations demonstrate the properties of the two solutions. In the distortionless case, the proposed filter achieves only a slightly worse output SNR, compared to the Wiener filter, along with no signal distortion. Moreover, when distortion is allowed, it is possible to achieve higher output SNRs compared to the Wiener filter. Alternatively, when a lower output SNR is accepted, a filter with less signal distortion than the Wiener filter can be constructed.
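
    The joint-diagonalization filter described above can be sketched with a generalized eigendecomposition. This is a minimal numerical illustration of the idea in the distortionless, rank-deficient case, not the paper's exact filter; `scipy.linalg.eigh` solves the generalized eigenproblem that jointly diagonalizes the two covariance matrices:

```python
import numpy as np
from scipy.linalg import eigh

def joint_diag_denoise(Y, Rd, Rn, q):
    # eigh(Rd, Rn) solves Rd b = lam * Rn b; the eigenvector matrix B
    # jointly diagonalizes both covariances: B.T @ Rn @ B = I and
    # B.T @ Rd @ B = diag(lam), with lam sorted ascending.
    lam, B = eigh(Rd, Rn)
    Binv = np.linalg.inv(B)
    # The q least-significant eigenvectors span the noise-dominated
    # subspace; projecting the observed frames onto it estimates the noise.
    noise_est = (Y @ B[:, :q]) @ Binv[:q, :]
    return Y - noise_est             # desired-signal estimate

# Toy "voiced speech": frames of a sinusoid, whose covariance is rank 2,
# buried in white noise (the distortionless case from the abstract).
rng = np.random.default_rng(1)
M, n, sigma = 8, 4000, 0.5
s = np.sin(2 * np.pi * 0.1 * np.arange(n + M))
S = np.lib.stride_tricks.sliding_window_view(s, M)[:n]
Y = S + rng.normal(0.0, sigma, S.shape)
Rd = S.T @ S / n                     # rank-deficient desired covariance
Rn = sigma**2 * np.eye(M)
X = joint_diag_denoise(Y, Rd, Rn, q=M - 2)
```

    Because the desired covariance has rank 2, subtracting the noise estimate removes the noise in the other 6 directions while leaving the sinusoid untouched.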

  3. Fiber Bragg grating based notch filter for bit-rate-transparent NRZ to PRZ format conversion with two-degree-of-freedom optimization

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Shu, Xuewen; Atai, Javid; Zuo, Jun; Xiong, Bangyun; Shen, Fangcheng; Liu, xin; Cheng, Jianqun

    2015-02-01

    We propose a novel notch-filtering scheme for bit-rate transparent all-optical NRZ-to-PRZ format conversion. The scheme is based on a two-degree-of-freedom optimally designed fiber Bragg grating. It is shown that a notch filter optimized for any specific operating bit rate can be used to realize high-Q-factor format conversion over a wide bit rate range without requiring any tuning.

  4. Research on improved mechanism for particle filter

    NASA Astrophysics Data System (ADS)

    Yu, Jinxia; Xu, Jingmin; Tang, Yongli; Zhao, Qian

    2013-03-01

    Based on an analysis of the particle filter algorithm, two improvement mechanisms are studied so as to improve the performance of particle filters. Firstly, a hybrid proposal distribution with an annealing parameter is studied in order to use the information in the latest observed measurement to optimize the particle filter. Then, the resampling step in the particle filter is improved by two methods based on partial stratified resampling (PSR). One uses the optimality idea to improve the weights after implementing PSR; the other uses it to improve the weights before implementing PSR and applies an adaptive mutation operation to all particles so as to preserve the diversity of the particle set after PSR. Finally, simulations based on single-object tracking are implemented, and the performance of the improved mechanisms for the particle filter is evaluated.
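
    The abstract builds on partial stratified resampling (PSR). Plain stratified resampling, its standard building block, can be sketched as follows (a generic textbook version, not the paper's PSR variant):

```python
import numpy as np

def stratified_resample(weights, rng):
    # Draw one uniform position inside each of the N strata [i/N, (i+1)/N)
    # and invert the CDF of the weights; returns N ancestor indices.
    N = len(weights)
    positions = (rng.random(N) + np.arange(N)) / N
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                 # guard against floating-point round-off
    return np.searchsorted(cdf, positions)
```

    After resampling, all weights are reset to 1/N; PSR applies the same idea to partitions of the particle set, which is where the abstract's weight-improvement steps attach.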

  5. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    SciTech Connect

    Gill, K; Aldoohan, S; Collier, J

    2014-06-01

    Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images with the ACR-approved CT phantom, while the radiation dose CT phantom was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects on image quality and radiation dose of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel and copper filters) to the existing filter. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filters shape the energy spectrum, which reduces the dose by 27% with no noticeable change in low-contrast detectability. Conclusion: The effective dose is strongly dependent on the CTDIvol, which in turn is strongly dependent on the beam-hardening filter. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that a significant radiation dose reduction can be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of the image quality.

  6. Design and optimization of fundamental mode filters based on long-period fiber gratings

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Yang; Wei, Jin; Sheng, Yong; Ren, Nai-Fei

    2016-07-01

    A segment of long-period fiber grating (LPFG) that can selectively filter the fundamental mode in a few-mode optical fiber is proposed. By applying an appropriately chosen surrounding material and an apodized LPFG configuration, high fundamental-mode loss and low higher-order core-mode loss can be achieved simultaneously. In addition, we propose a method of cascading LPFGs with different periods to expand the bandwidth of the mode filter. Numerical simulation shows that the operating bandwidth of the cascaded structure can be as large as 23 nm even if the refractive index of the surrounding liquid varies with the environmental temperature.

  7. Numerical experiment optimization to obtain the characteristics of the centrifugal pump steps package

    NASA Astrophysics Data System (ADS)

    Boldyrev, S. V.; Boldyrev, A. V.

    2014-12-01

    A numerical method for simulating turbulent flow in the flow passage of a centrifugal pump working stage using periodicity conditions has been formulated. The proposed method allows the characteristics of a single pump stage to be calculated at a lower computational cost. The calculated pump characteristics have been compared with experimental data.

  8. Optimization of excitation-emission band-pass filter for visualization of viable bacteria distribution on the surface of pork meat.

    PubMed

    Nishino, Ken; Nakamura, Kazuaki; Tsuta, Mizuki; Yoshimura, Masatoshi; Sugiyama, Junichi; Nakauchi, Shigeki

    2013-05-20

    A novel method of optically reducing the dimensionality of an excitation-emission matrix (EEM) by optimizing the excitation and emission band-pass filters was proposed and applied to the visualization of viable bacteria on pork. Filters were designed theoretically using an EEM data set for evaluating colony-forming units on pork samples assuming signal-to-noise ratios of 100, 316, or 1000. These filters were evaluated using newly measured EEM images. The filters designed for S/N = 100 performed the best and allowed the visualization of viable bacteria distributions. The proposed method is expected to be a breakthrough in the application of EEM imaging. PMID:23736477

  9. Optimizing planar lipid bilayer single-channel recordings for high resolution with rapid voltage steps.

    PubMed Central

    Wonderlin, W F; Finkel, A; French, R J

    1990-01-01

    We describe two enhancements of the planar bilayer recording method which enable low-noise recordings of single-channel currents activated by voltage steps in planar bilayers formed on apertures in partitions separating two open chambers. First, we have refined a simple and effective procedure for making small bilayer apertures (25-80 microns diam) in plastic cups. These apertures combine the favorable properties of very thin edges, good mechanical strength, and low stray capacitance. In addition to enabling formation of small, low-capacitance bilayers, this aperture design also minimizes the access resistance to the bilayer, thereby improving the low-noise performance. Second, we have used a patch-clamp headstage modified to provide logic-controlled switching between a high-gain (50 GΩ) feedback resistor for high-resolution recording and a low-gain (50 MΩ) feedback resistor for rapid charging of the bilayer capacitance. The gain is switched from high to low before a voltage step and then back to high gain 25 microseconds after the step. With digital subtraction of the residual currents produced by the gain switching and electrostrictive changes in bilayer capacitance, we can achieve a steady current baseline within 1 ms after the voltage step. These enhancements broaden the range of experimental applications for the planar bilayer method by combining the high resolution previously attained only with small bilayers formed on pipette tips with the flexibility of experimental design possible with planar bilayers in open chambers. We illustrate application of these methods with recordings of the voltage-step activation of a voltage-gated potassium channel. PMID:1698470

  10. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratio in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters, each of length N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads.
Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first
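
    The polyphase view of decimation described above is easy to verify numerically. A sketch of generic polyphase decimation (not the TD-MRFIR implementation itself): the filter h is split into M subfilters h_k[q] = h[qM + k], each fed the matching input phase at the low rate, and the naive filter-then-downsample result is recovered exactly:

```python
import numpy as np

def naive_decimate(x, h, M):
    # Full FIR followed by downsampling: (M-1)/M of the work is thrown away.
    return np.convolve(x, h)[::M]

def polyphase_decimate(x, h, M):
    # Split h into M subfilters h_k[q] = h[q*M + k] and feed each the
    # matching input phase at the low rate, then sum the partial outputs.
    n_out = -(-(len(x) + len(h) - 1) // M)      # ceil division
    z = np.zeros(n_out)
    for k in range(M):
        p_k = h[k::M]
        # Phase k of the input is x[r*M - k]; for k > 0 the r = 0 sample
        # falls before the start of x, hence the leading zero.
        u_k = x[0::M] if k == 0 else np.concatenate(([0.0], x[M - k::M]))
        y_k = np.convolve(u_k, p_k)[:n_out]
        z[:len(y_k)] += y_k
    return z
```

    Both routines produce identical outputs, but the polyphase version runs every multiply at the low rate, which is the efficiency the text refers to.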

  11. Optimization of plasma parameters with magnetic filter field and pressure to maximize H- ion density in a negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young; Chung, Kyoung-Jae; Hwang, Y. S.

    2016-02-01

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H- populations for various filter field strengths and pressures. Enhanced H- population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H- sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.

  12. Optimization of plasma parameters with magnetic filter field and pressure to maximize H⁻ ion density in a negative hydrogen ion source.

    PubMed

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young; Chung, Kyoung-Jae; Hwang, Y S

    2016-02-01

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H(-) populations for various filter field strengths and pressures. Enhanced H(-) population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H(-) sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region. PMID:26932018

  13. Optimization of a femtosecond Ti:sapphire amplifier using an acousto-optic programmable dispersive filter and a genetic algorithm.

    SciTech Connect

    Korovyanko, O. J.; Rey-de-Castro, R.; Elles, C. G.; Crowell, R. A.; Li, Y.

    2006-01-01

    The temporal output of a Ti:sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that a severe mismatch between the compressor and stretcher can be compensated for in a short period of time.

  14. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, caused by one of the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the Dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. The problem consists in finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All the optimal policies consist in applying insecticides just at the beginning of the season and then keeping the mosquitoes at an acceptable level by releasing a small number of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in achieving the same effect as variations in the cost parameters. Using the proposed methodology, it is thus possible to find, in a single run, for a given decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the trade-off between using more insecticide with fewer transmission mosquitoes or more sterile males with more transmission mosquitoes. PMID:26362231

  15. Optimization of 3D laser scanning speed by use of combined variable step

    NASA Astrophysics Data System (ADS)

    Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.

    2014-03-01

    The presented research addresses the slow operation of a 3D technical vision system (TVS) caused by the use of a constant small scanning step; the solution is the application of a combined scanning step for the fast search of n obstacles in unknown surroundings. Such a problem is of keynote importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. The related technical task of scanning with variable speed and precise digital mapping only for selected spatial sectors is therefore considered. A wide range of simulations in MATLAB 7.12.0 of several variants of hypothetical scenes with a variable number n of obstacles in each scene (including variations of shapes and sizes), scanned with incremented angle values (0.6° up to 15°), is provided. The aim of the simulations was to detect which angular intervals still permit obtaining maximal information about obstacles without undesired time losses. Three such local maxima were obtained in simulations and then refined by application of a neural network formalism (Levenberg-Marquardt algorithm). The obtained results were in turn applied to a MET (Micro-Electro-mechanical Transmission) design for the practical realization of variable combined-step scanning on an experimental prototype of our previously reported laser scanner.

  16. A COMPARISON OF MODEL BASED AND DIRECT OPTIMIZATION BASED FILTERING ALGORITHMS FOR SHEARWAVE VELOCITY RECONSTRUCTION FOR ELECTRODE VIBRATION ELASTOGRAPHY

    PubMed Central

    Ingle, Atul; Varghese, Tomy

    2014-01-01

    Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in healthy tissue can be used as an indicator of possible pathological changes. Electrode vibration elastography involves tracking of a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Based on appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity, enabling visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities that are estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and locations of interfaces. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young's modulus estimates obtained from an incompressibility assumption are within 15 kPa of those obtained from the true stiffness data from mechanical testing. Based on visual inspection, the particle filtering method produces smoother velocity maps. PMID:25285187

  17. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for high-performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
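
    A plain CPU reference of the 3D bilateral filter described above can be sketched as follows (parameter values are illustrative; the report's CUDA kernel and memory-layout choices are not reproduced):

```python
import numpy as np

def bilateral3d(vol, sigma_s=1.0, sigma_r=0.1, radius=2):
    # Brute-force 3D bilateral filter: each output voxel is a normalized
    # sum over a (2r+1)^3 neighborhood, weighted by a spatial Gaussian
    # times a range (intensity-difference) Gaussian. O(r^3) per voxel.
    pad = np.pad(vol, radius, mode="edge")
    out = np.zeros(vol.shape, dtype=float)
    norm = np.zeros(vol.shape, dtype=float)
    offs = range(-radius, radius + 1)
    for dz in offs:
        for dy in offs:
            for dx in offs:
                w_s = np.exp(-(dz*dz + dy*dy + dx*dx) / (2.0 * sigma_s**2))
                nb = pad[radius+dz : radius+dz+vol.shape[0],
                         radius+dy : radius+dy+vol.shape[1],
                         radius+dx : radius+dx+vol.shape[2]]
                w = w_s * np.exp(-((nb - vol)**2) / (2.0 * sigma_r**2))
                out += w * nb
                norm += w
    return out / norm
```

    The per-voxel weighted sums are independent, which is what makes the filter amenable to a one-thread-per-voxel GPU mapping; a correct GPU port must reproduce this reference output.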

  18. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
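
    The theoretical steady-state estimation error that such a tuner search minimizes can be computed by iterating the discrete Riccati recursion. A toy sketch (generic Kalman filter math, not the engine model or the paper's iterative search):

```python
import numpy as np

def steady_state_mse(A, C, Q, R, n_iter=1000):
    # Iterate the discrete Riccati recursion to the steady-state
    # a-posteriori error covariance P; trace(P) is the theoretical
    # mean-squared estimation error a tuner search would minimize.
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(n_iter):
        Pp = A @ P @ A.T + Q                             # a-priori covariance
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)   # Kalman gain
        P = (np.eye(n) - K @ C) @ Pp                     # a-posteriori covariance
    return np.trace(P)

# Comparing two hypothetical sensor configurations for a scalar random
# walk: the configuration with the extra measurement has lower MSE.
A = Q = np.eye(1)
mse_one = steady_state_mse(A, np.eye(1), Q, np.eye(1))
mse_two = steady_state_mse(A, np.array([[1.0], [1.0]]), Q, np.eye(2))
```

    Evaluating trace(P) over candidate tuning-parameter configurations and keeping the minimizer is the essence of a design-point tuner selection, though the paper's engine application is far richer.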

  19. Pareto optimality between width of central lobe and peak sidelobe intensity in the far-field pattern of lossless phase-only filters for enhancement of transverse resolution.

    PubMed

    Mukhopadhyay, Somparna; Hazra, Lakshminarayan

    2015-11-01

    Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique. PMID:26560575

  20. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    PubMed

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem to find feasible, preliminary solutions and construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663

  1. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem to find feasible, preliminary solutions and construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663
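
    The 2-step scheme above (interval data → linear program for preliminary Pareto points, then simulated annealing) can be sketched on a toy two-unit dispatch problem. All numbers, the scalarization, and the penalty form are hypothetical, not the paper's model:

```python
import numpy as np
from scipy.optimize import linprog, dual_annealing

# Hypothetical two-unit dispatch: objective 1 is fuel cost, objective 2 a
# "social cost" proxy; demand is an interval forecast (all numbers made up).
cost = np.array([3.0, 5.0])           # fuel cost per MW for units 1 and 2
social = np.array([0.4, 0.1])         # social-cost proxy per MW
demand = (90.0, 110.0)                # interval-valued load forecast
mid = 0.5 * (demand[0] + demand[1])   # interval midpoint for step 1
bounds = [(0.0, 80.0), (0.0, 80.0)]   # unit capacity limits

# Step 1: linearize (use the midpoint), sweep the scalarization weight,
# and collect preliminary Pareto-optimal dispatches with an LP solver.
pareto = []
for w in np.linspace(0.0, 1.0, 11):
    res = linprog(w * cost + (1 - w) * social,
                  A_eq=[[1.0, 1.0]], b_eq=[mid], bounds=bounds)
    pareto.append(res.x)

# Step 2: refine one scalarized objective with simulated annealing,
# penalizing any shortfall against the worst case inside the interval.
def scalarized(x, w=0.5):
    shortfall = max(0.0, demand[1] - x.sum())
    return w * cost @ x + (1 - w) * social @ x + 100.0 * shortfall

best = dual_annealing(scalarized, bounds=bounds, maxiter=200, seed=0)
```

    Sweeping the weight traces the trade-off between the two objectives, while the annealing step handles the nonlinearity (here, the worst-case penalty) that the LP relaxation ignores.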

  2. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  3. Rod-filter-field optimization of the J-PARC RF-driven H{sup −} ion source

    SciTech Connect

    Ueno, A. Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-08

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H{sup −} ion beam of 60 mA within normalized emittances of 1.5π mm•mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H{sup −} ion source was developed using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H{sup −} ion source with an internal antenna, no established procedure exists for optimizing it. In order to optimize the RFF and establish such a procedure, the beam performance of the J-PARC source was measured with various types of rod-filter magnets (RFMs). By changing the RFMs' gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H{sup −} ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFMs' cross-section (magnetomotive force) was indispensable for easy operation with the temperature (T{sub PE}) of the plasma electrode (PE) lower than 70 °C, which minimizes the transverse emittances. A 5% reduction of the RFMs' cross-section decreased the time constant for recovering the cesium effects after a slightly excessive cesiation of the PE from several tens of minutes to several minutes for T{sub PE} around 60 °C.

  4. Optimizing multi-step B-side charge separation in photosynthetic reaction centers from Rhodobacter capsulatus.

    PubMed

    Faries, Kaitlyn M; Kressel, Lucas L; Dylla, Nicholas P; Wander, Marc J; Hanson, Deborah K; Holten, Dewey; Laible, Philip D; Kirmaier, Christine

    2016-02-01

    Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P(+)QB(-) yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P(+)QB(-) are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P(+)QB(-) from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P(+)HB(-)→P(+)QB(-) electron transfer or initial P*→P(+)HB(-) conversion highlight unmet challenges of optimizing both processes simultaneously. PMID:26658355

  5. Optimization of isopropanol and ammonium sulfate precipitation steps in the purification of plasmid DNA.

    PubMed

    Freitas, S S; Santos, J A L; Prazeres, D M F

    2006-01-01

    Large-scale processes used to manufacture grams of plasmid DNA (pDNA) should be cGMP compliant, economically feasible, and environmentally friendly. Alcohol and salt precipitation techniques are frequently used in pDNA downstream processing as concentration and prepurification steps, respectively. This work describes a study of a standard 2-propanol (IsopOH; 0.7 v/v) and ammonium sulfate (AS; 2.5 M) precipitation. When inserted in a full process, this tandem precipitation scheme has a high economic and environmental impact due to the large amounts of the two precipitant agents and their environmental relevance. Thus, the major goals of the study were the minimization of precipitants and the selection of the best operating conditions for high pDNA recovery and purity. The pDNA concentration in the starting Escherichia coli alkaline lysate strongly affected the efficiency of IsopOH precipitation as a concentration step. The results showed that although an IsopOH concentration of at least 0.6 (v/v) was required to maximize recovery when using lysates with less than 80 µg pDNA/mL, concentrations as low as 0.4 v/v could be used with more concentrated lysates (170 µg pDNA/mL). Following resuspension of pDNA pellets generated by 0.6 v/v IsopOH, precipitation at 4 °C with 2.4 M AS consistently resulted in recoveries higher than 80% and in removal of more than 90% of the impurities (essentially RNA). An experimental design further indicated that AS concentrations could be reduced down to 2.0 M, resulting in an acceptable purity (21-23%) without compromising recovery (84-86%). Plasmid recovery and purity after the sequential IsopOH/AS precipitation could be further improved by increasing the concentration factor (CF) upon IsopOH precipitation from 2 up to 25. Under these conditions, IsopOH and AS concentrations of 0.60 v/v and 1.6 M resulted in high recovery (approximately 100%) and purity (32%). In conclusion, it is possible to reduce

  6. Control system optimization studies. Volume 2: High frequency cutoff filter analysis

    NASA Technical Reports Server (NTRS)

    Fong, M. H.

    1972-01-01

    The problem of digital implementation of a cutoff filter is approached with consideration to word length, sampling rate, accuracy requirements, computing time, and hardware restrictions. Computing time and hardware requirements for four possible programming forms for the linear portions of the filter are determined. Upper bounds for the steady-state system output error due to quantization are derived for digital control systems containing a digital network programmed both in the direct form and in the canonical form. This is accomplished by defining a set of error equations in the z domain and then applying the final value theorem to the solution. Quantization error was found to depend upon the digital word length, sampling rate, and system time constants. The error bound developed may be used to estimate the digital word length and sampling rate required to achieve a given system specification. Considering quantization error accumulation, computing time, and hardware requirements, together with the fact that complex poles and zeros must be realized, the canonical form of programming seems preferable.
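The direct and canonical programming forms differ mainly in how many delay elements they require. A minimal sketch of a canonical (direct form II) second-order section in floating-point Python (illustrative only; the report's actual fixed-point implementation details are not reproduced here):

```python
def biquad_df2(b, a, x):
    """Canonical (direct form II) realisation of a second-order IIR section.
    b = (b0, b1, b2), a = (1, a1, a2); the feedback and feedforward paths
    share two delay elements w1, w2, which is why the canonical form needs
    less storage than direct form I."""
    b0, b1, b2 = b
    _, a1, a2 = a
    w1 = w2 = 0.0
    y = []
    for xn in x:
        w = xn - a1 * w1 - a2 * w2            # feedback (recursive) path
        y.append(b0 * w + b1 * w1 + b2 * w2)  # feedforward path
        w2, w1 = w1, w                        # shift the shared delay line
    return y

# Impulse response of y[n] = x[n] + 0.5 y[n-1]: geometric decay 1, 0.5, 0.25, ...
print(biquad_df2((1.0, 0.0, 0.0), (1.0, -0.5, 0.0), [1, 0, 0]))
```

The shared delay line halves the storage relative to direct form I, at the cost of different quantization-noise behavior at each internal node, which is exactly the trade-off the report analyzes.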

  7. Optimization of conditions for the single step IMAC purification of miraculin from Synsepalum dulcificum.

    PubMed

    He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B

    2015-08-15

    In this study, the methods for extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate buffer saline, Tris-HCl and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH and imidazole concentration in elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5 M NaCl as compared to Tris-HCl and phosphate buffer saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7 and the use of 300 mM imidazole as elution buffer, which gave the overall yield of 80.3% and purity of 97.5%. IMAC with nickel-NTA was successfully used as a single step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715

  8. Optimized selective lactate excitation with a refocused multiple-quantum filter

    NASA Astrophysics Data System (ADS)

    Holbach, Mirjam; Lambert, Jörg; Johst, Sören; Ladd, Mark E.; Suter, Dieter

    2015-06-01

    Selective detection of lactate signals in in vivo MR spectroscopy with spectral editing techniques is necessary in situations where strong lipid or signals from other molecules overlap the desired lactate resonance in the spectrum. Several pulse sequences have been proposed for this task. The double-quantum filter SSel-MQC provides very good lipid and water signal suppression in a single scan. As a major drawback, it suffers from significant signal loss due to incomplete refocussing in situations where long evolution periods are required. Here we present a refocused version of the SSel-MQC technique that uses only one additional refocussing pulse and regains the full refocused lactate signal at the end of the sequence.

  9. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Beal, R. C.; Tilley, D. G.

    1981-01-01

    The impulse response of the SAR system is not a delta function and the spectra represent the product of the underlying image spectrum with the transform of the impulse response which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction of the effects of speckle is effected by the subtraction of a bias from the spectrum.
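The 5 x 5 smoothing described above amounts to a box-average convolution of the 2-D spectrum; a minimal NumPy sketch (illustrative only, not the original SEASAT processing code):

```python
import numpy as np

def smooth2d(spectrum, k=5):
    """k x k box-average smoothing of a 2-D power spectrum, implemented as a
    separable convolution (rows first, then columns)."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, spectrum)
    out = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)
    return out

# A constant field is unchanged in the interior (edges shrink under zero padding).
print(smooth2d(np.ones((9, 9)))[4, 4])
```

Sampling the smoothed trend normal to the wave direction, as the abstract describes, then reduces to indexing this array along the appropriate line.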

  10. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Astrophysics Data System (ADS)

    Beal, R. C.; Tilley, D. G.

    1981-06-01

    The impulse response of the SAR system is not a delta function and the spectra represent the product of the underlying image spectrum with the transform of the impulse response which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction of the effects of speckle is effected by the subtraction of a bias from the spectrum.

  11. Medical image processing using novel wavelet filters based on atomic functions: optimal medical image compression.

    PubMed

    Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres

    2011-01-01

    The analysis of different Wavelets, including novel Wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. This way we are able to determine which type of Wavelet filter works better in the compression of such images. Key properties (frequency response, approximation order, projection cosine, and Riesz bounds) were determined and compared for the classic Wavelet W9/7 used in standard JPEG2000, Daubechies8, and Symlet8, as well as for the complex Kravchenko-Rvachev Wavelets ψ(t) based on the atomic functions up(t), fup_2(t), and eup(t). The comparison results show significantly better performance of the novel Wavelets, which is justified by experiments and by the study of key properties. PMID:21431590

  12. Reaction null-space filter: extracting reactionless synergies for optimal postural balance from motion capture data.

    PubMed

    Nenchev, D N; Miyamoto, Y; Iribe, H; Takeuchi, K; Sato, D

    2016-06-01

    This paper introduces the notion of a reactionless synergy: a postural variation for a specific motion pattern/strategy whereby the movements of the segments do not alter the force/moment balance at the feet. Given an optimal initial posture in terms of stability, a reactionless synergy can ensure optimality throughout the entire movement. Reactionless synergies are derived via a dynamical model wherein the feet are regarded as unfixed. In contrast with conventional fixed-feet models, this approach has the advantage of exhibiting the reactions at the feet explicitly. The dynamical model also facilitates a joint-space decomposition scheme yielding two motion components: the reactionless synergy and an orthogonal complement responsible for the dynamical coupling between the feet and the support. Since the reactionless synergy provides the basis (a feedforward control component) for optimal balance control, it may play an important role when evaluating balance abnormalities or when assessing optimality in balance control. We show how to apply the proposed method for analysis of motion capture data obtained from three voluntary movement patterns in the sagittal plane: squat, sway, and forward bend. PMID:26273732

  13. Reduction of Common-Mode Conducted Noise Emissions in PWM Inverter-fed AC Motor Drive Systems using Optimized Passive EMI Filter

    NASA Astrophysics Data System (ADS)

    Jettanasen, C.; Ngaopitakkul, A.

    2010-10-01

    Conducted electromagnetic interference (EMI) generated by PWM inverter-fed induction motor drive systems, which are currently widely used in many industrial and/or avionic applications, causes severe parasitic current problems, especially at high frequencies (HF). These problems restrict the evolution of power-electronic drives. In order to reduce or minimize them, several techniques can be applied. In this paper, insertion of an optimized passive EMI filter is proposed. This filter is optimized by taking into account the real impedances of each part of the considered AC motor drive system, in contrast to commercial EMI filters, which are designed assuming that the internal impedances of the disturbance source and load equal 50 Ω; employing the latter would make EMI minimization less effective. The proposed EMI filter optimization is mainly dedicated to minimizing common-mode (CM) currents, since these have the most dominant effects in this kind of system. The efficiency of the proposed optimization method using a two-port network approach is assessed by comparing the minimized CM current spectra to an applied normative level (e.g., DO-160D in aeronautics).

  14. An optimized DSP implementation of adaptive filtering and ICA for motion artifact reduction in ambulatory ECG monitoring.

    PubMed

    Berset, Torfinn; Geng, Di; Romero, Iñaki

    2012-01-01

    Noise from motion artifacts is currently one of the main challenges in the field of ambulatory ECG recording. To address this problem, we propose the use of two different approaches. First, an adaptive filter with the electrode-skin impedance as a reference signal is described. Secondly, a multi-channel ECG algorithm based on Independent Component Analysis is introduced. Both algorithms have been designed and further optimized for real-time operation embedded in a dedicated Digital Signal Processor. We show that both algorithms improve the performance of a beat detection algorithm when applied in high-noise conditions. In addition, an efficient way of choosing between these methods is suggested, with the aim of reducing the overall system power consumption. PMID:23367417
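The reference-signal idea can be sketched with a textbook LMS adaptive noise canceller; the function name, tap count, and step size below are illustrative assumptions, not details of the paper's DSP implementation:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.05):
    """Minimal LMS adaptive noise canceller: `reference` (e.g. an
    electrode-skin impedance signal correlated with motion artifact) is
    adaptively filtered to estimate the artifact, which is subtracted
    from the `primary` ECG channel. The error signal is the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary, dtype=float)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        e = primary[n] - w @ x             # error = cleaned signal sample
        w += 2 * mu * e * x                # LMS weight update
        out[n] = e
    return out
```

When the reference is strongly correlated with the artifact, the residual in the output decays toward the artifact-free signal as the weights converge.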

  15. Shuttle filter study. Volume 1: Characterization and optimization of filtration devices

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plane for space shuttle fluid systems.

  16. Designing spectrum-splitting dichroic filters to optimize current-matched photovoltaics.

    PubMed

    Miles, Alexander; Cocilovo, Byron; Wheelwright, Brian; Pan, Wei; Tweet, Doug; Norwood, Robert A

    2016-03-10

    We have developed an approach for designing a dichroic coating to optimize performance of current-matched multijunction photovoltaic cells while diverting unused light. By matching the spectral responses of the photovoltaic cells and current matching them, substantial improvement to system efficiencies is shown to be possible. A design for use in a concentrating hybrid solar collector was produced by this approach, and is presented. Materials selection, design methodology, and tilt behavior on a curved substrate are discussed. PMID:26974772

  17. Bounds on the performance of particle filters

    NASA Astrophysics Data System (ADS)

    Snyder, C.; Bengtsson, T.

    2014-12-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time tk depend only on the observations at tk and the particles and weights at tk-1), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at tk, but also over draws from the joint proposal for tk-1 and tk. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction would require very large numbers of particles.
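The exponential-in-dimension degeneracy can be illustrated with a tiny Monte Carlo experiment. This is a sketch with a hypothetical Gaussian prior and likelihood, not the analysis of Bengtsson et al.; the effective sample size 1/Σw² collapses as the state dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def ess(dim, n_particles=1000):
    """Effective sample size of importance weights for particles drawn from a
    standard-normal prior in `dim` dimensions, weighted by a unit-variance
    Gaussian likelihood centred on a hypothetical offset observation."""
    x = rng.standard_normal((n_particles, dim))
    obs = np.full(dim, 0.5)                       # hypothetical observation
    logw = -0.5 * np.sum((x - obs) ** 2, axis=1)  # log-likelihood per particle
    logw -= logw.max()                            # stabilise before exponentiating
    w = np.exp(logw)
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

for d in (1, 10, 100):
    print(d, ess(d))  # ESS shrinks sharply as the dimension grows
```

With 1000 particles the ESS is large in one dimension but collapses to a handful of particles by dimension 100, the weight-degeneracy phenomenon the abstract describes.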

  18. Chaos particle swarm optimization combined with circular median filtering for geophysical parameters retrieval from Windsat

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Zhenzhan; Shi, Hanqing; Long, Zhiyong; Du, Huadong

    2016-08-01

    This paper established a geophysical retrieval algorithm for sea surface wind vector, sea surface temperature, columnar atmospheric water vapor, and columnar cloud liquid water from WindSat, using the measured brightness temperatures and a matchup database. To retrieve the wind vector, a chaotic particle swarm approach was used to determine a set of possible wind vector solutions which minimize the difference between the forward model and the WindSat observations. An adjusted circular median filtering function was adopted to remove wind direction ambiguity. The validation of the wind speed, wind direction, sea surface temperature, columnar atmospheric water vapor, and columnar liquid cloud water indicates that this algorithm is feasible and reasonable and can be used to retrieve these atmospheric and oceanic parameters. Compared with moored buoy data, the RMS errors for wind speed and sea surface temperature were 0.92 m s-1 and 0.88°C, respectively. The RMS errors for columnar atmospheric water vapor and columnar liquid cloud water were 0.62 mm and 0.01 mm, respectively, compared with F17 SSMIS results. In addition, monthly average results indicated that these parameters are in good agreement with AMSR-E results. Wind direction retrieval was studied under various wind speed conditions and validated by comparing to the QuikSCAT measurements, and the RMS error was 13.3°. This paper offers a new approach to the study of ocean wind vector retrieval using a polarimetric microwave radiometer.
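The paper's adjusted circular median filtering function is not specified in this abstract, but the core idea of a median for directional data can be sketched as follows (an illustrative generic construction):

```python
import numpy as np

def circular_median(angles_deg):
    """Median for directional data such as wind direction: the sample angle
    minimizing the summed angular distance to all other samples, so that
    359 deg and 1 deg are treated as neighbours on the circle."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    # pairwise angular distances folded into [0, pi]
    d = np.abs(np.angle(np.exp(1j * (a[:, None] - a[None, :]))))
    return np.rad2deg(a[np.argmin(d.sum(axis=1))]) % 360.0

# Unlike a naive median of the raw numbers (which would give 359),
# the circular median respects the wrap-around at 0/360 degrees.
print(circular_median([359.0, 1.0, 2.0]))  # -> 1.0
```

Applying such a filter over a spatial neighbourhood of retrieved wind directions is one standard way to remove the directional ambiguities mentioned above.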

  19. Optimized, one-step, recovery-enrichment broth for enhanced detection of Listeria monocytogenes in pasteurized milk and hot dogs.

    PubMed

    Knabel, Stephen J

    2002-01-01

    A one-step, recovery-enrichment broth, optimized Penn State University (oPSU) broth, was developed to consistently detect low levels of injured and uninjured Listeria monocytogenes cells in ready-to-eat foods. The oPSU broth contains special selective agents that inhibit growth of background flora without inhibiting recovery of injured Listeria cells. After recovery in the anaerobic section of oPSU broth, Listeria cells migrated to the surface, forming a black zone. This migration separated viable from nonviable cells and the food matrix, thereby reducing inhibitors that prevent detection by molecular methods. The high Listeria-to-background ratio in the black zone resulted in consistent detection of low levels of L. monocytogenes in pasteurized foods by both cultural and molecular methods, and greatly reduced both false-negative and false-positive results. oPSU broth does not require transfer to a secondary enrichment broth, making it less laborious and less subject to external contamination than 2-step enrichment protocols. Addition of 150 mM D-serine prevented germination of Bacillus spores, but not the growth of vegetative cells. Replacement of D-serine with 12 mg/L acriflavin inhibited growth of vegetative cells of Bacillus spp. without inhibiting recovery of injured Listeria cells. oPSU broth may allow consistent detection of low levels of injured and uninjured cells of L. monocytogenes in pasteurized foods containing various background microflora. PMID:11990038

  20. Process optimization of preparation of ZnO-porous carbon composite from spent catalysts using one step activation.

    PubMed

    Jin, Wen; Qu, Wen-Wen; Srinivasakannan, C; Peng, Jin-Hui; Duan, Xin-Hui; Zhang, Shi-Min

    2012-08-01

    The process parameters of the one-step preparation of ZnO/Activated Carbon (AC) composite materials from vinyl acetate synthesis spent catalyst were optimized using response surface methodology (RSM) and a central composite rotatable design (CCD). Regeneration temperature, time, and flow rate of CO2 were the process variables, while the iodine number and the yield were the response variables. All three process variables were found to significantly influence the yield of the regenerated carbon, while only the regeneration temperature and CO2 flow rate were found to significantly affect the iodine number. The optimized process conditions that maximize the yield and iodine adsorption capacity were identified to be a regeneration temperature of 950 °C, a time of 120 min, and a CO2 flow rate of 600 ml/min, with the corresponding yield and iodine number in excess of 50% and 1100 mg/g, respectively. The BET surface area of the regenerated composite was estimated to be 1263 m²/g, with a micropore-to-mesopore ratio of 0.75. The pore volume was found to have increased 6 times as compared to the spent catalyst. The composite material (AC/ZnO), with its high surface area and pore volume coupled with the high yield, augurs well for the economic feasibility of the process. EDS and XRD spectra indicate the presence of ZnO in the regenerated samples. PMID:22962730
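A central composite rotatable design for three factors has a standard coded-point layout; the sketch below shows the generic CCD construction (not the paper's actual design matrix or factor levels):

```python
import itertools
import numpy as np

def ccd_points(k):
    """Coded design points of a rotatable central composite design:
    2**k factorial corners, 2k axial (star) points at alpha = (2**k)**0.25,
    plus one centre point (replicated centre runs omitted for brevity)."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    alpha = (2 ** k) ** 0.25  # rotatability condition
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    centre = np.zeros((1, k))
    return np.vstack([corners, axial, centre])

pts = ccd_points(3)  # 8 corners + 6 star points + 1 centre = 15 runs
print(pts.shape)     # -> (15, 3)
```

Each coded row is then mapped to physical levels (here temperature, time, and CO2 flow rate) and a second-order polynomial is fitted to the measured responses, which is what RSM optimizes over.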

  1. Metrics for comparing plasma mass filters

    NASA Astrophysics Data System (ADS)

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-01

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.
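The term "separative power" here follows isotope-separation convention; assuming the standard Dirac value function (an assumption — the abstract does not give the authors' exact per-unit-volume normalization), the separative power of a single separating stage is:

```latex
% Separative power of a stage with feed rate L, cut \theta, and
% product/waste/feed mass fractions x_p, x_w, x_f.
\delta U = L \left[ \theta\, V(x_p) + (1-\theta)\, V(x_w) - V(x_f) \right],
\qquad
V(x) = (2x - 1)\,\ln\!\frac{x}{1 - x}
```

Dividing δU by the device volume gives a throughput-normalized figure of merit of the kind the authors use to compare the plasma centrifuge, Ohkawa filter, and magnetic centrifugal mass filter.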

  2. Metrics For Comparing Plasma Mass Filters

    SciTech Connect

    Abraham J. Fetterman and Nathaniel J. Fisch

    2012-08-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  3. Metrics for comparing plasma mass filters

    SciTech Connect

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  4. The Use of Daily Geodetic UT1 and LOD Data in the Optimal Estimation of UT1 and LOD With the JPL Kalman Earth Orientation Filter

    NASA Technical Reports Server (NTRS)

    Freedman, A. P.; Steppe, J. A.

    1995-01-01

    The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the Earth rotation data sets available to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares use of various data sets within KEOF.

  5. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb > 5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  6. Development of optimized filter for TARC and developer with the goal of having small pore size and minimizing microbubble reduction

    NASA Astrophysics Data System (ADS)

    Umeda, Toru; Tsuzuki, Shuichi; Boucher, Mikal; Dinh, Hung; Ma, L. C.; Boten, Russell

    2006-03-01

    Microbubbles generated while filtering tetramethylammonium hydroxide (TMAH) were counted to find the filter that generates the fewest microbubbles in the resist development process. A hydrophilic Highly Asymmetric Poly Aryl Sulfone (HAPAS) filter was developed and tested. The results showed that its microbubble generation was as low as that of the Nylon 6,6 filter, which had the best performance to date. Microbubbles in TARC were counted using the same method as in the developer testing described above, except for the mainstream flow rate and the counter model. The results show that counts in the small channel could be reduced by a smaller-pore-size filter, such as a conventional 0.02um rated filter, whereas counts in the larger channel could be reduced by a larger-pore-size filter, such as a 0.1um rated filter. Based on the above results, a 0.02um rated asymmetric Nylon 6,6 filter was developed; it achieved relatively lower counts in every channel compared to the standard 0.04um rated Nylon 6,6 filter. Nylon 6,6 filters were installed in resist as an improvement for preventive maintenance (PM) at Wafertech, L.L.C., replacing the more hydrophobic membrane filter used previously. Using the Nylon 6,6 membrane, the number of defects immediately after a filter change decreased greatly, from 493 to 6 pcs/wafer, and after purging with about 250 ml the number of defects fell within the process specification, whereas the more hydrophobic filter had required 2 L of purging and 12-36 hours of PM time.

  7. An optimal modeling of multidimensional wave digital filtering network for free vibration analysis of symmetrically laminated composite FSDT plates

    NASA Astrophysics Data System (ADS)

    Tseng, Chien-Hsun

    2015-02-01

    The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling-wave formulation of lumped electrical elements, is successfully applied to the study of the dynamic responses of symmetrically laminated composite plates based on the first-order shear deformation theory. The approach, applied for the first time to this laminate mechanics, integrates principles from modeling and simulation, circuit theory, and MD digital signal processing to provide a great variety of outstanding features. In particular, the conservation of passivity gives rise to a nonlinear programming problem (NLP) for the numerical stability of an MD discrete system. Adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly achieving solution spaces of NLP models, numerical stability of the MDWDF network is maintained at all times by satisfying the Courant-Friedrichs-Lewy stability criterion with the least restriction. Notably, the optimum of the NLP has led to the optimality of the network in terms of effectively and accurately predicting the desired fundamental frequency, and thus gives insight into the robustness of the network through the distribution of system energies. To further explore the application of the optimum network, more numerical examples are engaged in an effort to achieve a qualitative understanding of the behavior of the laminar system. These are carried out by investigating various effects based on different stacking sequences, stiffness and span-to-thickness ratios, mode shapes, and boundary conditions. Results are scrupulously validated by cross-referencing with earlier published works, which show that the present method is in excellent agreement with other numerical and analytical methods.

  8. Optimization of two-step bioleaching of spent petroleum refinery catalyst by Acidithiobacillus thiooxidans using response surface methodology.

    PubMed

    Srichandan, Haragobinda; Pathak, Ashish; Kim, Dong Jin; Lee, Seoung-Won

    2014-01-01

    A central composite design (CCD) combined with response surface methodology (RSM) was employed for maximizing bioleaching yields of metals (Al, Mo, Ni, and V) from as-received spent refinery catalyst using Acidithiobacillus thiooxidans. Three independent variables, namely initial pH, sulfur concentration, and pulp density were investigated. The pH was found to be the most influential parameter with leaching yields of metals varying inversely with pH. Analysis of variance (ANOVA) of the quadratic model indicated that the predicted values were in good agreement with experimental data. Under optimized conditions of 1.0% pulp density, 1.5% sulfur and pH 1.5, about 93% Ni, 44% Al, 34% Mo, and 94% V was leached from the spent refinery catalyst. Among all the metals, V had the highest maximum rate of leaching (Vmax) according to the Michaelis-Menten equation. The results of the study suggested that two-step bioleaching is efficient in leaching of metals from spent refinery catalyst. Moreover, the process can be conducted with as received spent refinery catalyst, thus making the process cost effective for large-scale applications. PMID:25320861

  9. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.
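The self-tuning on-board model rests on the standard Kalman predict/update cycle; the scalar sketch below is illustrative (with hypothetical noise values), not the optimal tuner Kalman filter itself:

```python
def kalman_step(x, P, z, F=1.0, H=1.0, Q=1e-4, R=1e-2):
    """One predict/update cycle of a scalar Kalman filter -- the core of any
    on-board estimator that tunes model states from sensed outputs.
    x: state estimate, P: estimate variance, z: new measurement.
    F, H, Q, R: state transition, observation, process noise, measurement noise."""
    # predict
    x = F * x
    P = F * P * F + Q
    # update
    K = P * H / (H * P * H + R)   # Kalman gain
    x = x + K * (z - H * x)       # correct with the measurement innovation
    P = (1 - K * H) * P
    return x, P
```

Iterating this cycle against engine sensor data is what lets the on-board model track performance variations; the OTKF additionally selects which tuning parameters to estimate.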

  10. Technical note: Optimization for improved tube-loading efficiency in the dual-energy computed tomography coupled with balanced filter method

    SciTech Connect

    Saito, Masatoshi

    2010-08-15

    Purpose: This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating the acquisition to electron density information, which is essential for treatment planning in radiotherapy. Methods: For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. Results: The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, ''Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method,'' Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. Conclusions: The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.

  11. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
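
As an illustration of the time-domain preset-equalizer design problem described above, the sketch below builds the channel convolution matrix and solves for equalizer taps so that the combined response approximates a delayed unit impulse. For brevity it uses least squares rather than the paper's linear-programming formulation, and the channel coefficients are hypothetical.

```python
import numpy as np

def design_equalizer(channel, n_taps, delay):
    """Design FIR equalizer taps g so that (channel * g) approximates a unit
    impulse at `delay` (least-squares stand-in for the LP formulation)."""
    n_out = len(channel) + n_taps - 1
    # Convolution matrix: column j is the channel shifted down by j samples.
    a = np.zeros((n_out, n_taps))
    for j in range(n_taps):
        a[j:j + len(channel), j] = channel
    d = np.zeros(n_out)
    d[delay] = 1.0                       # desired combined response
    g, *_ = np.linalg.lstsq(a, d, rcond=None)
    return g, a @ g

channel = np.array([1.0, 0.6, 0.3])      # known channel impulse response (toy)
g, combined = design_equalizer(channel, n_taps=16, delay=8)
# Residual intersymbol interference should be small after equalization.
print(np.max(np.abs(combined - np.eye(len(combined))[8])) < 0.05)
```

An LP solver would instead minimize the worst-case (peak) residual subject to linear constraints, which is what makes the referenced technique attractive for pulse shaping.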

  12. Reducing radiation dose by application of optimized low-energy x-ray filters to K-edge imaging with a photon counting detector.

    PubMed

    Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung

    2016-01-21

    K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in the K-edge imaging and are likely to be completely absorbed by an object. In this study, we applied x-ray filters to the K-edge imaging with a PCXD based on cadmium zinc telluride for reducing radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented the iodine K-edge imaging with an energy bin of 34-48 keV at the tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest value of SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased as a function of the filter thicknesses. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, which was determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in the K-edge imaging with the PCXD. PMID:26733235
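
The abstract's FOM "took into account the ESAK and SDNR"; a common convention for such dose-efficiency figures, assumed here, is FOM = SDNR^2 / ESAK. The toy numbers below (not the study's measurements) show how an optimal filter thickness falls out of this trade-off between falling SDNR and falling dose.

```python
import numpy as np

def figure_of_merit(sdnr, esak):
    """Dose-efficiency figure of merit; SDNR^2 / ESAK is a common
    convention and an assumption here, not the paper's stated formula."""
    return sdnr**2 / esak

# Illustrative values only: thicker Al filters lower both SDNR and ESAK.
filters_mm = [2, 4, 6, 8, 10, 12]
sdnr_vals = [9.0, 8.6, 8.1, 7.6, 7.0, 6.3]          # hypothetical SDNR
esak_vals = [1.00, 0.80, 0.66, 0.57, 0.52, 0.50]    # hypothetical relative ESAK
foms = [figure_of_merit(s, e) for s, e in zip(sdnr_vals, esak_vals)]
best = filters_mm[int(np.argmax(foms))]
print(best)  # → 8
```

With these toy values the FOM peaks at an intermediate thickness, mirroring the paper's finding that the optimum is neither the thinnest nor the thickest filter.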

  13. Reducing radiation dose by application of optimized low-energy x-ray filters to K-edge imaging with a photon counting detector

    NASA Astrophysics Data System (ADS)

    Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung

    2016-01-01

    K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in the K-edge imaging and are likely to be completely absorbed by an object. In this study, we applied x-ray filters to the K-edge imaging with a PCXD based on cadmium zinc telluride for reducing radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented the iodine K-edge imaging with an energy bin of 34-48 keV at the tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest value of SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased as a function of the filter thicknesses. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, which was determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in the K-edge imaging with the PCXD.

  14. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.

  15. Filter and method of fabricating

    DOEpatents

    Janney, Mark A.

    2006-02-14

    A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.

  16. Optimizing mini-ridge filter thickness to reduce proton treatment times in a spot-scanning synchrotron system

    SciTech Connect

    Courneyea, Lorraine; Beltran, Chris Tseung, Hok Seum Wan Chan; Yu, Juan; Herman, Michael G.

    2014-06-15

    Purpose: Study the contributors to treatment time as a function of Mini-Ridge Filter (MRF) thickness to determine the optimal choice for breath-hold treatment of lung tumors in a synchrotron-based spot-scanning proton machine. Methods: Five different spot-scanning nozzles were simulated in TOPAS: four with MRFs of varying maximal thicknesses (6.15–24.6 mm) and one with no MRF. The MRFs were designed with ridges aligned along orthogonal directions transverse to the beam, with the number of ridges (4–16) increasing with MRF thickness. The material thickness given by these ridges approximately followed a Gaussian distribution. Using these simulations, Monte Carlo data were generated for treatment planning commissioning. For each nozzle, standard and stereotactic (SR) lung phantom treatment plans were created and assessed for delivery time and plan quality. Results: Use of a MRF resulted in a reduction of the number of energy layers needed in treatment plans, decreasing the number of synchrotron spills needed and hence the treatment time. For standard plans, the treatment time per field without a MRF was 67.0 ± 0.1 s, whereas three of the four MRF plans had treatment times of less than 20 s per field; considered sufficiently low for a single breath-hold. For SR plans, the shortest treatment time achieved was 57.7 ± 1.9 s per field, compared to 95.5 ± 0.5 s without a MRF. There were diminishing gains in time reduction as the MRF thickness increased. Dose uniformity of the PTV was comparable across all plans; however, when the plans were normalized to have the same coverage, dose conformality decreased with MRF thickness, as measured by the lung V20%. Conclusions: Single breath-hold treatment times for plans with standard fractionation can be achieved through the use of a MRF, making this a viable option for motion mitigation in lung tumors. For stereotactic plans, while a MRF can reduce treatment times, multiple breath-holds would still be necessary due to the

  17. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n^2 vectors has to be compared, in distance, with the other n^2 - 1 vectors. General purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which to the best of our knowledge has never been done before. The performance of the GPU accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
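
A reference (CPU, NumPy) version of the vector median filter described above can be sketched as follows; a CUDA kernel would parallelize the same per-pixel distance computation across threads.

```python
import numpy as np

def vector_median_filter(img, window=3):
    """Vector median filter: each pixel is replaced by the window vector
    whose summed Euclidean distance to all other window vectors is minimal."""
    pad = window // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.empty_like(img)
    h, w, _ = img.shape
    for i in range(h):
        for j in range(w):
            vecs = padded[i:i + window, j:j + window].reshape(-1, 3)
            # Pairwise distances between all n^2 window vectors.
            d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=2)
            out[i, j] = vecs[np.argmin(d.sum(axis=1))]
    return out

# A flat gray image with one impulse-noise pixel: the filter removes it.
img = np.full((5, 5, 3), 100.0)
img[2, 2] = [255.0, 0.0, 0.0]
print(vector_median_filter(img)[2, 2])  # → [100. 100. 100.]
```

Unlike a per-channel median, the vector median always outputs one of the original color vectors, so it does not introduce colors absent from the window.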

  18. [Optimization of one-step pelletization technology of Biqiu granules by Plackett-Burman design and Box-Behnken response surface methodology].

    PubMed

    Zhang, Yan-jun; Liu, Li-li; Hu, Jun-hua; Wu, Yun; Chao, En-xiang; Xiao, Wei

    2015-11-01

    First, with the qualified rate of granules as the evaluation index, significant influencing factors were screened by Plackett-Burman design. Then, with the qualified rate and moisture content as the evaluation indexes, the significant factors that affect one-step pelletization technology were further optimized by Box-Behnken design; experimental data were fitted by multiple regression to a second-order polynomial equation, and response surface methodology was used for predictive analysis of the optimal technology. The best conditions were as follows: inlet air temperature of 85 degrees C, sample introduction speed of 33 r·min(-1), and a concentrate density of 1.10. The one-step pelletization technology of Biqiu granules optimized by Plackett-Burman design and Box-Behnken response surface methodology was stable and feasible with good predictability, which provided a reliable basis for the industrialized production of Biqiu granules. PMID:27097415
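
The Box-Behnken workflow above — fit a second-order polynomial to designed runs, then locate the optimum on the response surface — can be sketched for two coded factors on synthetic data; the factor names and response values are hypothetical, not the Biqiu granule measurements.

```python
import numpy as np

# Coded levels (-1, 0, +1) for two hypothetical factors, e.g. inlet air
# temperature (x1) and sample introduction speed (x2).
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)

def true_response(x1, x2):
    """Hypothetical qualified-granule rate with optimum at (0.3, -0.2)."""
    return 90 - 5 * (x1 - 0.3)**2 - 4 * (x2 + 0.2)**2

x1, x2 = pts[:, 0], pts[:, 1]
y = true_response(x1, x2)
# Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
# Stationary point of the fitted quadratic: solve grad = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -b[1:3])
print(np.round(opt, 2))
```

On real data, the ANOVA step in the abstract checks whether this quadratic model is adequate before the stationary point is trusted.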

  19. Investigation, development, and application of optimal output feedback theory. Volume 3: The relationship between dynamic compensators and observers and Kalman filters

    NASA Technical Reports Server (NTRS)

    Broussard, John R.

    1987-01-01

    Relationships between observers, Kalman filters, and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of a discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimate of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.

  20. Study on Optimization Method of Quantization Step and the Image Quality Evaluation for Medical Ultrasonic Echo Image Compression by Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Khieovongphachanh, Vimontha; Hamamoto, Kazuhiko; Kondo, Shozo

    In this paper, we investigate an optimized quantization method for JPEG2000 applied to medical ultrasonic echo images. JPEG2000 has been issued as the new standard image compression technique, which is based on the wavelet transform (WT), and JPEG2000 is incorporated into DICOM (Digital Imaging and Communications in Medicine). There are two quantization methods. One is scalar derived quantization (SDQ), which is usually used in standard JPEG2000. The other is scalar expounded quantization (SEQ), which can be optimized by the user. This paper therefore presents an optimization of the quantization step, which is determined by a genetic algorithm (GA). The results are then compared with SDQ and with SEQ determined by an arithmetic average method. The purpose of this paper is to improve image quality and compression ratio for medical ultrasonic echo images. Image quality is evaluated objectively by PSNR (peak signal-to-noise ratio) and subjectively by ultrasonographers from Tokai University Hospital and Tokai University Hachioji Hospital. The results show that SEQ determined by GA provides better image quality than SDQ and than SEQ determined by the arithmetic average method. Additionally, the three optimization methods of the quantization step are applied to a thin wire target image for analysis of the point spread function.
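
The objective metric used above, PSNR, and the effect of the quantization step size on it can be sketched as follows; plain uniform scalar quantization on random data stands in for JPEG2000's subband quantization, so the numbers are illustrative only.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(compressed, float))**2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def quantize(img, step):
    """Uniform scalar quantization with step size `step` (dead zone omitted)."""
    return np.round(img / step) * step

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (64, 64))
# Coarser quantization steps trade PSNR (quality) for bit rate.
for step in (2, 8, 32):
    print(step, round(psnr(img, quantize(img, step)), 1))
```

A GA searching over per-subband step sizes, as in the paper, would use such a PSNR evaluation (or a perceptual variant) inside its fitness function.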

  1. Development of an optimal automatic control law and filter algorithm for steep glideslope capture and glideslope tracking

    NASA Technical Reports Server (NTRS)

    Halyo, N.

    1976-01-01

    A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant-gain Kalman filter to process guidance information from the microwave landing system together with acceleration data from body-mounted accelerometers. The filter outputs navigation data and wind velocity estimates, which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.

  2. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet scientific space mission requirements. Their payloads are composed of various instruments collecting an increasing amount of data while respecting growing constraints on volume and mass, so small integrated cameras have taken a favored place among these instruments. To ensure scene-specific color information sensing, pixelated filters seem more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component, based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, the matrix filters are composed of 2x2 macro-pixels (i.e. 4 filters). These 4 filters have a bandwidth of about 40 nm and are respectively centered at 550, 700, 770 and 840 nm, with a specific rejection rate defined on the visible spectral range [500-900 nm]. After an intensive design step, 4 thin-film structures were elaborated with a maximum thickness of 5 μm. A run of tests allowed us to choose the optimal micro-structuration parameters. The 100x100 matrix filter prototypes were successfully manufactured with lift-off and ion-assisted deposition processes. High spatial and spectral characterization, with a dedicated metrology bench, showed that the initial specifications and simulations were globally met. These excellent performances knock down the technological barriers for high-end integrated multispectral imaging.

  3. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
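
Choosing a screening cut-off from data of this kind is commonly done by maximizing an index such as Youden's J (sensitivity + specificity - 1); the abstract balances sensitivity against cost per case, so the sketch below, on toy data, is an assumption about the criterion and is illustrative only.

```python
def youden_optimal_cutoff(values, labels, cutoffs):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    `labels` are 1 for diseased, 0 for healthy; a test is positive if
    value >= cutoff."""
    best = None
    for c in cutoffs:
        tp = sum(1 for v, d in zip(values, labels) if d and v >= c)
        fn = sum(1 for v, d in zip(values, labels) if d and v < c)
        tn = sum(1 for v, d in zip(values, labels) if not d and v < c)
        fp = sum(1 for v, d in zip(values, labels) if not d and v >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if best is None or j > best[1]:
            best = (c, j)
    return best[0]

# Toy FPG values (mmol/l) and diabetes status (illustrative, not study data).
fpg = [4.6, 4.9, 5.1, 5.3, 5.5, 5.6, 5.7, 6.0, 6.4, 7.1]
status = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(youden_optimal_cutoff(fpg, status, cutoffs=[5.0, 5.3, 5.6, 6.0]))  # → 5.6
```

In a two-step strategy, everyone at or above the chosen FPG cut-off would proceed to the confirmatory oral glucose tolerance test.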

  4. Optimal cut-off points of fasting plasma glucose for two-step strategy in estimating prevalence and screening undiagnosed diabetes and pre-diabetes in Harbin, China.

    PubMed

    Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585

  5. [Reduction of livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA) in the exhaust air of two piggeries by a bio-trickling filter and a biological three-step air cleaning system].

    PubMed

    Clauss, Marcus; Schulz, Jochen; Stratmann-Selke, Janin; Decius, Maja; Hartung, Jörg

    2013-01-01

    "Livestock-associated" Methicillin-resistent Staphylococcus aureus (LA-MRSA) are frequently found in the air of piggeries, are emitted into the ambient air of the piggeries and may also drift into residential areas or surrounding animal husbandries.. In order to reduce emissions from animal houses such as odour, gases and dust different biological air cleaning systems are commercially available. In this study the retention efficiencies for the culturable LA-MRSA of a bio-trickling filter and a combined three step system, both installed at two different piggeries, were investigated. Raw gas concentrations for LA-MRSA of 2.1 x 10(2) cfu/m3 (biotrickling filter) and 3.9 x 10(2) cfu/m3 (three step system) were found. The clean gas concentrations were in each case approximately one power of ten lower. Both systems were able to reduce the number of investigated bacteria in the air of piggeries on average about 90%. The investigated systems can contribute to protect nearby residents. However, considerable fluctuations of the emissions can occur. PMID:23540196

  6. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    SciTech Connect

    Kao, Jim . E-mail: kao@lanl.gov; Flicker, Dawn; Ide, Kayo; Ghil, Michael

    2006-05-20

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. Model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism: the model parameters are included in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data with the corresponding values generated from the model, and lies within a small range, of less than 2%, of the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
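
The augmented-state trick described above — append the model parameters to the state vector, then run one unified EKF — can be sketched on a scalar system with one unknown parameter; the dynamics and noise levels are illustrative, not the shock-wave model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Truth: x' = a*x + u with unknown parameter a; we measure x directly.
a_true, u = 0.95, 1.0
q_x, q_a, r = 1e-4, 1e-5, 0.1       # process (state/parameter) and meas. noise
x_true = 10.0

# Augmented state z = [x, a]; the parameter evolves only stochastically.
z = np.array([10.0, 0.5])           # deliberately poor initial guess for a
P = np.diag([1.0, 1.0])
H = np.array([[1.0, 0.0]])          # only the state is observed
for _ in range(500):
    x_true = a_true * x_true + u + rng.normal(0.0, np.sqrt(q_x))
    y = x_true + rng.normal(0.0, np.sqrt(r))
    # Predict with f(z) = [a*x + u, a] and its Jacobian F.
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + u, z[1]])
    P = F @ P @ F.T + np.diag([q_x, q_a])
    # Update: the single measurement refines both state and parameter,
    # the parameter through the cross-covariance built up by F.
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T) / S
    z = z + K[:, 0] * (y - z[0])
    P = (np.eye(2) - K @ H) @ P
print(round(z[1], 2))
```

The parameter estimate is pulled toward the true value purely by the data, mirroring the paper's unified assimilation operation.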

  7. Filter construction and design.

    PubMed

    Jornitz, Maik W

    2006-01-01

    Sterilizing filters and pre-filters are manufactured in different formats and designs. The criteria for the specific designs are set by the application and the specifications of the filter user. The optimal filter unit, or even system, requires evaluation of parameters such as flow rate, throughput, unspecific adsorption, steam sterilizability and chemical compatibility. These parameters are commonly tested within a qualification phase, which ensures that an optimal filter design and combination finds its use. If such design investigations are neglected, it can prove costly at process scale. PMID:16570863

  8. Evaluation and optimization of a reusable hollow fiber ultrafilter as a first step in concentrating Cryptosporidium parvum oocysts from water.

    PubMed

    Kuhn, R C; Oshima, K H

    2001-08-01

    A small-scale hollow fiber ultrafiltration system (50,000 MWCO) was used to characterize the filtration process and identify conditions that optimize the recovery of Cryptosporidium parvum oocysts from 2 L samples of water. Seeded experiments were conducted using deionized water as well as four environmental water sources (tap, ground, Arkansas River, and Rio Grande River; 0-30.9 NTU). Optimal and consistent recovery of spiked oocysts (68-81%) was observed when the membrane was sanitized with a 10% sodium dodecyl sulfate (SDS) solution and then blocked with 5% fetal bovine serum (FBS). PMID:11456179

  9. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    SciTech Connect

    Omelyan, Igor E-mail: omelyan@icmp.lviv.ua; Kovalenko, Andriy

    2013-12-28

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
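
The core multiple-time-step idea above — evaluate the expensive "slow" (solvation-like) forces only at outer steps while the cheap "fast" forces run at inner steps — can be sketched with an r-RESPA-style impulse integrator. This omits the paper's ASFE extrapolation and OIN thermostat, and the harmonic forces are toy stand-ins.

```python
def mts_verlet(x, v, f_fast, f_slow, dt_outer, n_inner, n_outer, mass=1.0):
    """r-RESPA-style integrator: slow forces kick only at outer steps,
    fast forces drive an inner velocity-Verlet loop (sketch of the MTS
    idea; the paper adds force extrapolation and an isokinetic ensemble)."""
    dt = dt_outer / n_inner
    for _ in range(n_outer):
        v += 0.5 * dt_outer * f_slow(x) / mass        # slow half-kick
        for _ in range(n_inner):                      # inner velocity Verlet
            v += 0.5 * dt * f_fast(x) / mass
            x += dt * v
            v += 0.5 * dt * f_fast(x) / mass
        v += 0.5 * dt_outer * f_slow(x) / mass        # slow half-kick
    return x, v

# Stiff "fast" bond plus weak "slow" background force (illustrative).
fast = lambda x: -100.0 * x
slow = lambda x: -1.0 * x
x, v = mts_verlet(1.0, 0.0, fast, slow, dt_outer=0.05, n_inner=10, n_outer=200)
# Total energy of the combined oscillator should stay near its initial value.
e = 0.5 * v**2 + 0.5 * 101.0 * x**2
print(abs(e - 50.5) / 50.5 < 0.05)
```

The outer step must stay below the impulse-resonance limit set by the fast frequency, which is exactly the limit the paper's extrapolation scheme is designed to push out.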

  10. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

    An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which let the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into 3 chains, which are then decoded respectively into one structuring element with a size no larger than 5 by 5 and two concatenated morphological filter operators. The fitness function in the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure is used in place of the simple roulette wheel program in order to accelerate convergence. The final convergence of our algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences, the obtained MSE values are smaller than those of the corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements approximately match the shapes and orientations of the image textons.
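
An adaptive-rate scheme of the kind described — crossover and mutation probabilities that shrink for above-average individuals, in the style of Srinivas and Patnaik (an assumption about the paper's exact formula) — can be sketched with a toy GA; onemax stands in for the MSE-based filter fitness.

```python
import random
random.seed(0)

def adaptive_rates(f, f_max, f_avg, pc_max=0.9, pm_max=0.2):
    """Adaptive probabilities: fitter individuals get lower crossover and
    mutation rates, protecting good encodings (Srinivas-Patnaik style)."""
    if f_max <= f_avg:
        return pc_max, pm_max
    s = max(0.0, (f_max - f) / (f_max - f_avg))
    return pc_max * min(1.0, s), pm_max * min(1.0, s)

def onemax_ga(n_bits=20, pop_size=30, gens=60):
    """Toy GA with tournament selection; onemax replaces the MSE fitness."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fit = lambda c: sum(c)
    for _ in range(gens):
        scores = [fit(c) for c in pop]
        f_max, f_avg = max(scores), sum(scores) / len(scores)
        def pick():  # stochastic tournament of size 2
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if scores[a] >= scores[b] else pop[b]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            pc, pm = adaptive_rates(max(fit(p1), fit(p2)), f_max, f_avg)
            c = list(p1)
            if random.random() < pc:                   # one-point crossover
                cut = random.randrange(1, n_bits)
                c = p1[:cut] + p2[cut:]
            c = [b ^ 1 if random.random() < pm else b for b in c]  # mutation
            nxt.append(c)
        pop = nxt
    return max(sum(c) for c in pop)

print(onemax_ga() >= 17)
```

In the paper's setting, the chromosome would instead encode the structuring element and filter sequence, and `fit` would evaluate the MSE of the decoded filter on training images.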

  11. The University of Arizona College of Medicine Optimal Aging Program: Stepping in the Shadows of Successful Aging

    ERIC Educational Resources Information Center

    Sikora, Stephanie

    2006-01-01

    The Optimal Aging Program (OAP) at the University of Arizona, College of Medicine is a longitudinal mentoring program that pairs students with older adults who are considered to be aging "successfully." This credit-bearing elective was initially established in 2001 through a grant from the John A. Hartford Foundation, and aims to expand the…

  12. Optimization, physicochemical characterization and in vivo assessment of spray dried emulsion: A step toward bioavailability augmentation and gastric toxicity minimization.

    PubMed

    Mehanna, Mohammed M; Alwattar, Jana K; Elmaradny, Hoda A

    2015-12-30

    The limited solubility of BCS class II drugs diminishes their dissolution and thus reduces their bioavailability. Our aim in this study was to develop and optimize a spray dried emulsion containing indomethacin as a model for class II drugs, a Labrasol®/Transcutol® mixture as the oily phase, and maltodextrin as a solid carrier. The optimization was carried out using a 2(3) full factorial design based on two independent variables, the percentage of carrier and the concentration of Poloxamer® 188. The effect of the studied parameters on the spray dried yield, loading efficiency and in vitro release was thoroughly investigated. Furthermore, physicochemical characterization of the optimized formulation was performed. In vivo bioavailability, ulcerogenic capability and histopathological features were assessed. The results obtained pointed out that the Poloxamer® 188 concentration in the formulation was the predominant factor affecting the dissolution release, whereas the drug loading was driven by the carrier concentration added. Moreover, the yield decreased on increasing both independent variables studied. The optimized formulation presented a complete release within two minutes, suggesting an immediate release pattern. As well, the formulation was revealed to consist of uniform spherical particles with an average size of 7.5 μm entrapping the drug in its molecular state, as demonstrated by the DSC and FTIR studies. The in vivo evaluation demonstrated a 10-fold enhancement in bioavailability of the optimized formulation, with an absence of ulcerogenic side effects compared to the marketed product. The results provide evidence for the significance of spray dried emulsion as a leading strategy for improving the solubility and enhancing the bioavailability of class II drugs. PMID:26561726

  13. SU-E-T-23: A Novel Two-Step Optimization Scheme for Tandem and Ovoid (T and O) HDR Brachytherapy Treatment for Locally Advanced Cervical Cancer

    SciTech Connect

    Sharma, M; Todor, D; Fields, E

    2014-06-01

    Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan to the existent physician delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.

  14. Unconditionally energy stable time stepping scheme for Cahn-Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    NASA Astrophysics Data System (ADS)

    Tavakoli, Rouhollah

    2016-01-01

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
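    Eyre-type convex splitting, the ingredient named above, can be demonstrated on a zero-dimensional gradient flow of the double-well energy. This toy sketch (not the Cahn-Morral solver itself) is energy stable for any time step size:

```python
def eyre_step(u, dt, newton_iters=30):
    """One Eyre convex-splitting step for the 0-D gradient flow
    du/dt = -(u**3 - u) of the double-well energy E(u) = (u*u-1)**2/4:
    the convex term u**3 is treated implicitly, the concave term -u
    explicitly, giving u_new + dt*u_new**3 = u + dt*u.  The implicit
    equation has a unique root (monotone) and is solved by Newton's
    method.  A toy sketch of the splitting idea, not the paper's solver."""
    rhs = u + dt * u
    v = u
    for _ in range(newton_iters):
        g = v + dt * v ** 3 - rhs
        v -= g / (1.0 + 3.0 * dt * v ** 2)
    return v

energy = lambda u: (u ** 2 - 1) ** 2 / 4
```

    Even with a very large time step the discrete energy decreases monotonically toward the well at u = 1, which is the defining property of the scheme.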

  15. Biological/biomedical accelerator mass spectrometry targets. 1. optimizing the CO2 reduction step using zinc dust.

    PubMed

    Kim, Seung-Hyun; Kelly, Peter B; Clifford, Andrew J

    2008-10-15

    Biological and biomedical applications of accelerator mass spectrometry (AMS) use isotope ratio mass spectrometry to quantify minute amounts of long-lived radioisotopes such as (14)C. AMS target preparation involves first the oxidation of carbon (in the sample of interest) to CO2 and second the reduction of CO2 to filamentous, fluffy, fuzzy, or firm graphite-like substances that coat a -400-mesh spherical iron powder (-400MSIP) catalyst. Until now, the quality of AMS targets has been variable; consequently, they often failed to produce the robust ion currents that are required for reliable, accurate, precise, and high-throughput AMS for biological/biomedical applications. Therefore, we described our optimized method for reduction of CO2 to high-quality uniform AMS targets, whose morphology we visualized using scanning electron microscope pictures. Key features of our optimized method were to reduce CO2 (from a sample of interest that provided 1 mg of C) using 100 +/- 1.3 mg of Zn dust, 5 +/- 0.4 mg of -400MSIP, and a reduction temperature of 500 degrees C for 3 h. The thermodynamics of our optimized method were more favorable for production of graphite-coated iron powders (GCIP) than those of previous methods. All AMS targets from our optimized method were of 100% GCIP, the graphitization yield exceeded 90%, and delta (13)C was -17.9 +/- 0.3 per thousand. The GCIP reliably produced strong (12)C(-) currents and accurate and precise Fm values. The observed Fm value for the oxalic acid II NIST SRM deviated from its accepted Fm value of 1.3407 by only 0.0003 +/- 0.0027 (mean +/- SE, n = 32), the limit of detection of (14)C was 0.04 amol, the limit of quantification was 0.07 amol, and a skilled analyst can prepare as many as 270 AMS targets per day. More information on the physical (hardness/color), morphological (SEMs), and structural (FT-IR, Raman, XRD spectra) characteristics of our AMS targets that determine accurate, precise, and high-throughput AMS measurement are in the

  16. Optimization of pressurized liquid extraction and purification conditions for gas chromatography-mass spectrometry determination of UV filters in sludge.

    PubMed

    Negreira, N; Rodríguez, I; Rubí, E; Cela, R

    2011-01-14

    This work presents an effective sample preparation method for the determination of eight UV filter compounds, belonging to different chemical classes, in freeze-dried sludge samples. Pressurized liquid extraction (PLE) and gas chromatography-mass spectrometry (GC-MS) were selected as extraction and determination techniques, respectively. Normal-phase, reversed-phase and anionic exchange materials were tested as clean-up sorbents to reduce the complexity of raw PLE extracts. Under final working conditions, graphitized carbon (0.5 g) was used as in-cell purification sorbent for the retention of co-extracted pigments. Thereafter, a solid-phase extraction cartridge, containing 0.5 g of primary secondary amine (PSA) bonded silica, was employed for off-line removal of other interferences, mainly fatty acids, overlapping the chromatographic peaks of some UV filters. Extractions were performed with a n-hexane:dichloromethane (80:20, v:v) solution at 75°C, using a single extraction cycle of 5 min at 1500 psi. Flush volume and purge time were set at 100% and 2 min, respectively. Considering 0.5 g of sample and 1 mL as the final volume of the purified extract, the developed method provided recoveries between 73% and 112%, with limits of quantification (LOQs) from 17 to 61 ng g(-1) and a linear response range up to 10 μg g(-1). Total solvent consumption remained around 30 mL per sample. The analysis of non-spiked samples confirmed the sorption of significant amounts of several UV filters in sludge with average concentrations above 0.6 μg g(-1) for 3-(4-methylbenzylidene) camphor (4-MBC), 2-ethylhexyl-p-methoxycinnamate (EHMC) and octocrylene (OC). PMID:21144528

  17. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method in which a restart strategy is also used. Moreover, the method we developed exploits both function value and gradient information and achieves n-step quadratic convergence. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
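    For reference, the classical PRP update with an exact line search and a periodic steepest-descent restart, sketched on a quadratic. This is the baseline scheme the abstract builds on, not the paper's new method:

```python
import numpy as np

def prp_cg(A, b, x0, tol=1e-10, max_iter=100):
    """Classical Polak-Ribiere-Polyak CG with a periodic steepest-descent
    restart, sketched on the quadratic f(x) = 0.5*x'Ax - b'x (A symmetric
    positive definite) with an exact line search."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                        # gradient of f
    d = -g
    n = len(b)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)   # exact minimizer along d
        x = x + alpha * d
        g_new = A @ x - b
        # PRP beta (clipped at 0, the common PRP+ safeguard)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        # restart with steepest descent every n iterations
        d = -g_new + (0.0 if (k + 1) % n == 0 else beta) * d
        g = g_new
    return x
```

    On a quadratic with exact line searches, successive gradients are orthogonal, so the PRP beta coincides with the standard CG beta and the iteration terminates in at most n steps.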

  18. Security: Step by Step

    ERIC Educational Resources Information Center

    Svetcov, Eric

    2005-01-01

    This article provides a list of the essential steps to keeping a school's or district's network safe and sound. It describes how to establish a security architecture and approach that will continually evolve as the threat environment changes over time. The article discusses the methodology for implementing this approach and then discusses the…

  19. An IIR median hybrid filter

    NASA Technical Reports Server (NTRS)

    Bauer, Peter H.; Sartori, Michael A.; Bryden, Timothy M.

    1992-01-01

    A new class of nonlinear filters, the so-called class of multidirectional infinite impulse response median hybrid filters, is presented and analyzed. The input signal is processed twice using a linear shift-invariant infinite impulse response filtering module: once with normal causality and a second time with inverted causality. The final output of the MIMH filter is the median of the two-directional outputs and the original input signal. Thus, the MIMH filter is a concatenation of linear filtering and nonlinear filtering (a median filtering module). Because of this unique scheme, the MIMH filter possesses many desirable properties which are both proven and analyzed (including impulse removal, step preservation, and noise suppression). A comparison to other existing median type filters is also provided.
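    A minimal sketch of a filter of this type: one causal first-order IIR pass, one anti-causal pass, and the pointwise median with the input. On this toy version the cited properties are easy to check: an isolated impulse is removed while an ideal step passes unchanged. The first-order structure and pole value are chosen for illustration, not taken from the paper:

```python
def mimh_filter(x, a=0.5):
    """Sketch of an IIR median hybrid filter: a causal first-order IIR
    predictor run forward over the signal, the same predictor run
    backward (anti-causal), and the pointwise median of both outputs
    and the input."""
    n = len(x)
    fwd, bwd = [0.0] * n, [0.0] * n
    fwd[0], bwd[-1] = x[0], x[-1]
    for k in range(1, n):                # causal pass: past samples only
        fwd[k] = a * fwd[k - 1] + (1 - a) * x[k - 1]
    for k in range(n - 2, -1, -1):       # anti-causal pass: future only
        bwd[k] = a * bwd[k + 1] + (1 - a) * x[k + 1]
    return [sorted((fwd[k], x[k], bwd[k]))[1] for k in range(n)]
```

    At a step edge the forward estimate lags and the backward estimate leads, so the median selects the input sample and the edge is preserved exactly; at an isolated impulse both estimates agree on the background level and the impulse is rejected.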

  20. Stack filters

    NASA Astrophysics Data System (ADS)

    Wendt, P. D.; Coyle, E. J.; Gallagher, N. C., Jr.

    1986-08-01

    A large class of easily implemented nonlinear filters called stack filters is discussed, which includes the rank order operators in addition to compositions of morphological operators. Techniques similar to those used to determine the root signal behavior of median filters are employed to study the convergence properties of the filters; necessary conditions for a stack filter to preserve monotone regions or edges in signals, and the output distribution of the filters, are obtained. Among the stack filters of window width three are found asymmetric median filters, one of which removes only positive-going edges while the other removes only negative-going edges, whereas the median filter removes impulses of both signs.
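    The defining stacking property can be demonstrated directly: threshold-decompose an integer signal into binary slices, apply a positive Boolean function (majority, for the median) to each slice, and add the filtered slices back up; the result equals the direct width-3 median filter. A small sketch:

```python
def stack_filter(x, boolean_fn, max_val):
    """Stack filter via threshold decomposition: slice the integer
    signal into binary signals at each level t, apply a positive
    Boolean function over a width-3 window (edges replicated) to every
    slice, and sum the filtered slices back up."""
    n = len(x)
    out = [0] * n
    for t in range(1, max_val + 1):
        b = [1 if v >= t else 0 for v in x]          # binary slice at level t
        for k in range(n):
            w = (b[max(k - 1, 0)], b[k], b[min(k + 1, n - 1)])
            out[k] += boolean_fn(w)
    return out

# the majority function reproduces the width-3 median filter
majority = lambda w: 1 if sum(w) >= 2 else 0
```

    Replacing `majority` with another positive Boolean function yields the other rank-order and morphological members of the class.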

  1. Optimization of the polarized Klein tunneling currents in a sub-lattice: pseudo-spin filters and latticetronics in graphene ribbons.

    PubMed

    López, Luis I A; Yaro, Simeón Moisés; Champi, A; Ujevic, Sebastian; Mendoza, Michel

    2014-02-12

    We found that with an increase of the potential barrier applied to metallic graphene ribbons, the Klein tunneling current decreases until it is totally destroyed and the pseudo-spin polarization increases until it reaches its maximum value when the current is zero. This inverse relation disfavors the generation of polarized currents in a sub-lattice. In this work we discuss the pseudo-spin control (polarization and inversion) of the Klein tunneling currents, as well as the optimization of these polarized currents in a sub-lattice, using potential barriers in metallic graphene ribbons. Using density of states maps, conductance results, and pseudo-spin polarization information (all of them as a function of the energy V and width of the barrier L), we found (V, L) intervals in which the polarized currents in a given sub-lattice are maximized. We also built parallel and series configurations with these barriers in order to further optimize the polarized currents. A systematic study of these maps and barrier configurations shows that the parallel configurations are good candidates for optimization of the polarized tunneling currents through the sub-lattice. Furthermore, we discuss the possibility of using an electrostatic potential as (i) a pseudo-spin filter or (ii) a pseudo-spin inversion manipulator, i.e. a possible latticetronic of electronic currents through metallic graphene ribbons. The results of this work can be extended to graphene nanostructures. PMID:24441476

  2. Computation of maximum gust loads in nonlinear aircraft using a new method based on the matched filter approach and numerical optimization

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Heeg, Jennifer; Perry, Boyd, III

    1990-01-01

    Time-correlated gust loads are time histories of two or more load quantities due to the same disturbance time history. Time correlation provides knowledge of the value (magnitude and sign) of one load when another is maximum. At least two analysis methods have been identified that are capable of computing maximized time-correlated gust loads for linear aircraft. Both methods solve for the unit-energy gust profile (gust velocity as a function of time) that produces the maximum load at a given location on a linear airplane. Time-correlated gust loads are obtained by re-applying this gust profile to the airplane and computing multiple simultaneous load responses. Such time histories are physically realizable and may be applied to aircraft structures. Within the past several years there has been much interest in obtaining a practical analysis method capable of solving the analogous problem for nonlinear aircraft. Such an analysis method has been the focus of an international committee of gust loads specialists formed by the U.S. Federal Aviation Administration and was the topic of a panel discussion at the Gust and Buffet Loads session at the 1989 SDM Conference in Mobile, Alabama. The kinds of nonlinearities common on modern transport aircraft are indicated. The Statistical Discrete Gust method is capable of being, but so far has not been, applied to nonlinear aircraft. To make the method practical for nonlinear applications, a search procedure is essential. Another method is based on Matched Filter Theory and, in its current form, is applicable to linear systems only. The purpose here is to present the status of an attempt to extend the matched filter approach to nonlinear systems. The extension uses Matched Filter Theory as a starting point and then employs a constrained optimization algorithm to attack the nonlinear problem.
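    The linear building block, matched filtering, amounts to correlating the response with a known template and reading off the peak; a minimal sketch (the template, noise level, and lag below are made up for illustration):

```python
import numpy as np

def matched_filter(signal, template):
    """Matched filtering sketch: correlate the measured response with a
    known excitation template; the correlation peaks at the lag where
    the template best matches the signal."""
    return np.correlate(signal, template, mode="valid")

rng = np.random.default_rng(0)
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
signal = 0.1 * rng.standard_normal(50)
signal[20:25] += template            # embed the template at lag 20
out = matched_filter(signal, template)
```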

  3. Characterization and optimization of 2-step MOVPE growth for single-mode DFB or DBR laser diodes

    NASA Astrophysics Data System (ADS)

    Bugge, F.; Mogilatenko, A.; Zeimer, U.; Brox, O.; Neumann, W.; Erbert, G.; Weyers, M.

    2011-01-01

    We have studied the MOVPE regrowth of AlGaAs over a grating for GaAs-based laser diodes with an internal wavelength stabilisation. Growth temperature and aluminium concentration in the regrown layers considerably affect the oxygen incorporation. Structural characterisation by transmission electron microscopy of the grating after regrowth shows the formation of quaternary InGaAsP regions due to the diffusion of indium atoms from the top InGaP layer and As-P exchange processes during the heating-up procedure. Additionally, the growth over such gratings with different facets leads to self-organisation of the aluminium content in the regrown AlGaAs layer, resulting in an additional AlGaAs grating, which has to be taken into account for the estimation of the coupling coefficient. With optimized growth conditions complete distributed feedback laser structures have been grown for different emission wavelengths. At 1062 nm a very high single-frequency output power of nearly 400 mW with a slope efficiency of 0.95 W/A for a 4 μm ridge-waveguide was obtained.

  4. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

    Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum-99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had been 'experimentally demonstrated to be among the safest of all various types of research reactor now in use' [1]. While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code Fluidity; the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  5. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
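    The generalized time step described above (propagate the forecast and its error covariance, then compute the gain and update) has the familiar linear form; a dense textbook sketch, not the banded shallow-water implementation:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One generalized Kalman time step: propagate the state x and its
    error covariance P through the linear forecast model F (with model
    error Q), then assimilate the observation z via the gain matrix."""
    # forecast (time update)
    x = F @ x
    P = F @ P @ F.T + Q
    # analysis (measurement update)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # gain matrix
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    Because the covariance is carried forward exactly at each step, repeated updates shrink P toward the information content of the observations, which is the advantage over optimal interpolation noted in the abstract.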

  6. Optimization of the performance of a thermophilic biotrickling filter for alpha-pinene removal from polluted air.

    PubMed

    Montes, M; Veiga, M C; Kennes, C

    2014-01-01

    Biodegradation of alpha-pinene was investigated in a biological thermophilic trickling filter (BTF), using a lava rock and polymer beads mixture as packing material. The partition coefficient (PC) between alpha-pinene and the polymeric material (Hytrel G3548 L) was measured at 50 degrees C. PCs of 57 and 846 were obtained between the polymer and either the water or the gas phase, respectively. BTF experiments were conducted under continuous load feeding. The effect of yeast extract (YE) addition in the recirculating nutrient medium was evaluated. There was a positive relationship between alpha-pinene biodegradation, CO2 production and YE addition. A maximum elimination capacity (ECmax) of 98.9 g m(-3) h(-1) was obtained for an alpha-pinene loading rate of about 121 g m(-3) h(-1) in the presence of 1 g L(-1) YE. The ECmax was reduced by half in the absence of YE. It was also found that a decrease in the liquid flow rate enhances alpha-pinene biodegradation, increasing the ECmax up to 103 g m(-3) h(-1) with a removal efficiency close to 90%. The impact of short-term shock-loads (6 h) was tested under different process conditions. Increasing the pollutant load either 10- or 20-fold resulted in a sudden drop in the BTF's removal capacity, although this effect was attenuated in the presence of YE. PMID:25145201

  7. Optimization and kinetic modeling of esterification of the oil obtained from waste plum stones as a pretreatment step in biodiesel production.

    PubMed

    Kostić, Milan D; Veličković, Ana V; Joković, Nataša M; Stamenković, Olivera S; Veljković, Vlada B

    2016-02-01

    This study reports on the use of oil obtained from waste plum stones as a low-cost feedstock for biodiesel production. Because of its high free fatty acid (FFA) level (15.8%), the oil was processed through a two-step process comprising esterification of the FFA and methanolysis of the esterified oil, catalyzed by H2SO4 and CaO, respectively. The esterification was optimized by response surface methodology combined with a central composite design. The second-order polynomial equation predicted the lowest acid value of 0.53 mg KOH/g under the following optimal reaction conditions: a methanol:oil molar ratio of 8.5:1, a catalyst amount of 2% and a reaction temperature of 45°C. The predicted acid value agreed with the experimental acid value (0.47 mg KOH/g). The kinetics of FFA esterification was described by the irreversible pseudo first-order reaction rate law. The apparent kinetic constant was correlated with the initial methanol and catalyst concentrations and the reaction temperature. The activation energy of the esterification reaction slightly decreased from 13.23 to 11.55 kJ/mol with increasing catalyst concentration from 0.049 to 0.172 mol/dm(3). In the second step, the esterified oil reacted with methanol (methanol:oil molar ratio of 9:1) in the presence of CaO (5% relative to the oil mass) at 60°C. The properties of the obtained biodiesel were within the EN 14214 standard limits. Hence, waste plum stones might be a valuable raw material for obtaining fatty oil for use as an alternative feedstock in biodiesel production. PMID:26706748
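    The irreversible pseudo first-order rate law used for the kinetics can be fitted with a one-line least squares on ln(C0/C) = k*t; a sketch with synthetic numbers (not the paper's data):

```python
import math

def fit_pseudo_first_order(times, conc):
    """Estimate the apparent rate constant k of an irreversible pseudo
    first-order decay C(t) = C0*exp(-k*t) (here: the FFA concentration
    during esterification) by a least-squares fit of ln(C0/C) = k*t
    through the origin."""
    c0 = conc[0]
    num = sum(t * math.log(c0 / c) for t, c in zip(times, conc))
    den = sum(t * t for t in times)
    return num / den
```

    Fitting k at several temperatures then gives the activation energy from the slope of an Arrhenius plot of ln(k) versus 1/T.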

  8. DC-pass filter design with notch filters superposition for CPW rectenna at low power level

    NASA Astrophysics Data System (ADS)

    Rivière, J.; Douyère, A.; Alicalapa, F.; Luk, J.-D. Lan Sun

    2016-03-01

    In this paper the challenging coplanar waveguide direct current (DC) pass filter is designed, analysed, fabricated and measured. As the ground plane and the conductive line are etched on the same plane, this technology allows the connection of series and shunt elements to the active devices without via holes through the substrate. Indeed, this study presents the first step in the optimization of a complete rectenna in coplanar waveguide (CPW) technology, a key element of a radio frequency (RF) energy harvesting system. The measurement of the proposed filter shows good rejection performance at F0=2.45 GHz and F1=4.9 GHz. Additionally, a harmonic balance (HB) simulation of the complete rectenna is performed and shows a maximum RF-to-DC conversion efficiency of 37% with the studied DC-pass filter for an input power of 10 µW at 2.45 GHz.

  9. Optimization of an analytical methodology for the simultaneous determination of different classes of ultraviolet filters in cosmetics by pressurized liquid extraction-gas chromatography tandem mass spectrometry.

    PubMed

    Vila, Marlene; Lamas, J Pablo; Garcia-Jares, Carmen; Dagnac, Thierry; Llompart, Maria

    2015-07-31

    A methodology based on pressurized liquid extraction (PLE) followed by gas chromatography-tandem mass spectrometry (GC-MS/MS) has been developed for the simultaneous analysis of different classes of UV filters, including methoxycinnamates, benzophenones, salicylates, p-aminobenzoic acid derivatives, and others, in cosmetic products. The extractions were carried out in 1 mL extraction cells and the amount of sample extracted was only 100 mg. The experimental conditions, including the acetylation of the PLE extracts to improve GC performance, were optimized by means of experimental design tools. The two main factors affecting the PLE procedure, solvent type and extraction temperature, were assessed. The use of a matrix-matched approach, consisting of the addition of 10 μL of diluted commercial cosmetic oil, avoided matrix effects. Good linearity (R(2)>0.9970), quantitative recoveries (>80% for most compounds, excluding three banned benzophenones) and satisfactory precision (RSD<10% in most cases) were achieved under the optimal conditions. The validated methodology was successfully applied to the analysis of different types of cosmetic formulations including sunscreens, hair products, nail polish, and lipsticks, amongst others. PMID:26091782

  10. Estimation and filter stability of stochastic delay systems

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.; Willsky, A. S.

    1978-01-01

    Linear and nonlinear filtering for stochastic delay systems are studied. A representation theorem for conditional moment functionals is obtained, which, in turn, is used to derive stochastic differential equations describing the optimal linear or nonlinear filter. A complete characterization of the optimal filter is given for linear systems with Gaussian noise. Stability of the optimal filter is studied in the case where there are no delays in the observations. Using the duality between linear filtering and control, asymptotic stability of the optimal filter is proved. Finally, the cascade of the optimal filter and the deterministic optimal quadratic control system is shown to be asymptotically stable as well.

  11. Modeling and optimization of ultrasound-assisted extraction of polyphenolic compounds from Aronia melanocarpa by-products from filter-tea factory.

    PubMed

    Ramić, Milica; Vidović, Senka; Zeković, Zoran; Vladić, Jelena; Cvejin, Aleksandra; Pavlić, Branimir

    2015-03-01

    Aronia melanocarpa by-product from a filter-tea factory was used for the preparation of extracts with a high content of bioactive compounds. The extraction process was accelerated using sonication. A three-level, three-variable face-centered cubic experimental design (FCD) with response surface methodology (RSM) was used to optimize the extraction in terms of maximized yields of total phenolics (TP), flavonoids (TF), anthocyanins (MA) and proanthocyanidins (TPA). Ultrasonic power (X₁: 72-216 W), temperature (X₂: 30-70 °C) and extraction time (X₃: 30-90 min) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions for the investigated responses. Three-dimensional surface plots were generated from the mathematical models. The optimal conditions for ultrasound-assisted extraction of TP, TF, MA and TPA were: X₁=206.64 W, X₂=70 °C, X₃=80.1 min; X₁=210.24 W, X₂=70 °C, X₃=75 min; X₁=216 W, X₂=70 °C, X₃=45.6 min and X₁=199.44 W, X₂=70 °C, X₃=89.7 min, respectively. The generated models predicted values of TP, TF, MA and TPA of 15.41 mg GAE/ml, 9.86 mg CE/ml, 2.26 mg C3G/ml and 20.67 mg CE/ml, respectively. Experimental validation was performed and close agreement between experimental and predicted values was found (within a 95% confidence interval). PMID:25454824
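    The FCD/RSM fitting step reduces to ordinary least squares of a 10-term second-order polynomial on 15 coded design points; a sketch with synthetic responses (none of the numbers are from the study):

```python
import itertools
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit the 10-term second-order RSM polynomial
    y = b0 + sum_i bi*xi + sum_i bii*xi**2 + sum_{i<j} bij*xi*xj
    to coded 3-factor design points by ordinary least squares."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(3)]               # quadratic terms
    cols += [X[:, i] * X[:, j]
             for i, j in itertools.combinations(range(3), 2)]  # interactions
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# face-centered cubic design: 8 corners + 6 face centers + 1 center point
corners = list(itertools.product([-1, 1], repeat=3))
faces = [p for p in itertools.product([-1, 0, 1], repeat=3)
         if sum(abs(v) for v in p) == 1]
X = np.array(corners + faces + [(0, 0, 0)], dtype=float)
```

    The 15-run face-centered design supports the full quadratic model, so all 10 coefficients are estimable and the fitted surface can then be maximized over the coded cube.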

  12. Next Step for STEP

    SciTech Connect

    Wood, Claire; Bremner, Brenda

    2013-08-09

    The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.

  13. Disk filter

    DOEpatents

    Bergman, Werner

    1986-01-01

    An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

  14. Disk filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

  15. Design of withdrawal-weighted SAW filters.

    PubMed

    Lee, Youngjin; Lee, Seunghee; Roh, Yongrae

    2002-03-01

    This paper presents a new design algorithm for a withdrawal-weighted surface acoustic wave (SAW) transversal filter. The proposed algorithm is based on the effective transmission loss theory and a delta function model of a SAW transversal filter. The design process consists of three steps, which together determine eight geometrical design parameters for the filter so as to satisfy given performance specifications. First, the number of fingers in the input and output interdigital transducers (IDTs) and their geometrical sizes are determined using the insertion loss specification. Second, the number and positions of the polarity reversals in the output IDT are determined using the bandwidth and ripple specifications. Third, the number and positions of withdrawn and switched fingers in the output IDT and the attached electrode area are determined to achieve the desired sidelobe level. The efficiency of the technique is illustrated using a sample design of an IF filter consisting of a uniform input IDT and a withdrawal-weighted output IDT. The proposed algorithm is distinct from conventional techniques in that it can optimize the structural geometry of a withdrawal-weighted SAW filter directly, by considering all the performance specifications simultaneously. PMID:12322883
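
    The delta function model mentioned above treats each IDT finger as an ideal point source, so the filter's frequency response is a Fourier sum over the finger delays and polarities. A minimal sketch; the variable names and parameter values in the test are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def saw_response(positions, polarities, v, freqs):
        """Delta-function model of a SAW transversal filter: each finger
        contributes a delayed impulse, so H(f) = sum_n a_n exp(-j 2 pi f t_n)
        with t_n = x_n / v.

        positions  : finger positions along the substrate (m)
        polarities : +1/-1 finger polarity (withdrawal weighting zeroes some)
        v          : SAW propagation velocity (m/s)
        freqs      : frequencies at which to evaluate |H(f)| (Hz)
        """
        t = np.asarray(positions, float) / v          # per-finger delays
        a = np.asarray(polarities, float)             # finger weights
        H = np.exp(-2j * np.pi * np.outer(freqs, t)) @ a
        return np.abs(H)
    ```

    For a uniform IDT with alternating finger polarity and pitch p, the response peaks at the synchronous frequency f0 = v / (2p), which is how such a model ties finger geometry to the passband specifications.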

  16. Rapid one-step purification of single-cells encapsulated in alginate microcapsules from oil to aqueous phase using a hydrophobic filter paper: implications for single-cell experiments.

    PubMed

    Lee, Do-Hyun; Jang, Miran; Park, Je-Kyun

    2014-10-01

    By virtue of the biocompatibility and physical properties of hydrogel, picoliter-sized hydrogel microcapsules have been considered to be a biometric signature containing several features similar to that of encapsulated single cells, including phenotype, viability, and intracellular content. To maximize the experimental potential of encapsulating cells in hydrogel microcapsules, a method that enables efficient hydrogel microcapsule purification from oil is necessary. Current methods based on centrifugation for the conventional stepwise rinsing of oil, are slow and laborious and decrease the monodispersity and yield of the recovered hydrogel microcapsules. To remedy these shortcomings we have developed a simple one-step method to purify alginate microcapsules, containing a single live cell, from oil to aqueous phase. This method employs oil impregnation using a commercially available hydrophobic filter paper without multistep centrifugal purification and complicated microchannel networks. The oil-suspended alginate microcapsules encapsulating single cells from mammalian cancer cell lines (MCF-7, HepG2, and U937) and microorganisms (Chlorella vulgaris) were successfully exchanged to cell culture media by quick (~10 min) depletion of the surrounding oil phase without coalescence of neighboring microcapsules. Cell proliferation and high integrity of the microcapsules were also demonstrated by long-term incubation of microcapsules containing a single live cell. We expect that this method for the simple and rapid purification of encapsulated single-cell microcapsules will attain widespread adoption, assisting cell biologists and clinicians in the development of single-cell experiments. PMID:25130499

  17. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  18. Water Filters

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Aquaspace H2OME Guardian Water Filter, available through Western Water International, Inc., reduces lead in water supplies. The filter is mounted on the faucet and the filter cartridge is placed in the "dead space" between sink and wall. This filter is one of several new filtration devices using the Aquaspace compound filter media, which combines company developed and NASA technology. Aquaspace filters are used in industrial, commercial, residential, and recreational environments as well as by developing nations where water is highly contaminated.

  19. Optimal State Estimation for Cavity Optomechanical Systems.

    PubMed

    Wieczorek, Witlef; Hofer, Sebastian G; Hoelscher-Obermaier, Jason; Riedinger, Ralf; Hammerer, Klemens; Aspelmeyer, Markus

    2015-06-01

    We demonstrate optimal state estimation for a cavity optomechanical system through Kalman filtering. By taking into account nontrivial experimental noise sources, such as colored laser noise and spurious mechanical modes, we implement a realistic state-space model. This allows us to obtain the conditional system state, i.e., conditioned on previous measurements, with a minimal least-squares estimation error. We apply this method to estimate the mechanical state, as well as optomechanical correlations both in the weak and strong coupling regime. The application of the Kalman filter is an important next step for achieving real-time optimal (classical and quantum) control of cavity optomechanical systems. PMID:26196621
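
    The Kalman filter underlying this estimation scheme can be sketched generically. The following is a textbook discrete-time predict/update loop, not the authors' full state-space model (which additionally accounts for colored laser noise and spurious mechanical modes); all matrices in the test are illustrative:

    ```python
    import numpy as np

    def kalman_filter(F, H, Q, R, x0, P0, measurements):
        """Discrete-time Kalman filter: returns the conditional state
        estimates x_k|k, i.e. conditioned on measurements y_1..y_k,
        which minimize the least-squares estimation error for a linear
        Gaussian state-space model."""
        x, P = x0, P0
        estimates = []
        for y in measurements:
            # predict step: propagate state and covariance through the dynamics
            x = F @ x
            P = F @ P @ F.T + Q
            # update step: correct with the new measurement
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + K @ (y - H @ x)
            P = (np.eye(len(x0)) - K @ H) @ P
            estimates.append(x.copy())
        return estimates
    ```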

  20. High accuracy motor controller for positioning optical filters in the CLAES Spectrometer

    NASA Technical Reports Server (NTRS)

    Thatcher, John B.

    1989-01-01

    The Etalon Drive Motor (EDM), a precision etalon control system designed for accurate positioning of etalon filters in the IR spectrometer of the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment, is described. The EDM includes a brushless dc torque motor, which has infinite resolution for setting an etalon filter to any desired angle; a four-filter etalon wheel; and an electromechanical resolver for angle information. An 18-bit control loop provides high accuracy, resolution, and stability. Dynamic computer interaction allows the user to optimize the step response. A block diagram of the motor controller is presented along with a schematic of the digital/analog converter circuit.

  1. Biological Filters.

    ERIC Educational Resources Information Center

    Klemetson, S. L.

    1978-01-01

    Presents the 1978 literature review of wastewater treatment. The review is concerned with biological filters, and it covers: (1) trickling filters; (2) rotating biological contactors; and (3) miscellaneous reactors. A list of 14 references is also presented. (HM)

  2. Analysis of plasticizers in poly(vinyl chloride) medical devices for infusion and artificial nutrition: comparison and optimization of the extraction procedures, a pre-migration test step.

    PubMed

    Bernard, Lise; Cueff, Régis; Bourdeaux, Daniel; Breysse, Colette; Sautou, Valérie

    2015-02-01

    Medical devices (MDs) for infusion and enteral and parenteral nutrition are essentially made of plasticized polyvinyl chloride (PVC). The first step in assessing patient exposure to these plasticizers, as well as ensuring that the MDs are free from di(2-ethylhexyl) phthalate (DEHP), consists of identifying and quantifying the plasticizers present and, consequently, determining which ones are likely to migrate into the patient's body. We compared three different extraction methods using 0.1 g of plasticized PVC: Soxhlet extraction in diethyl ether and ethyl acetate, polymer dissolution, and room temperature extraction in different solvents. It was found that simple room temperature chloroform extraction under optimized conditions (30 min, 50 mL) gave the best separation of plasticizers from the PVC matrix, with extraction yields ranging from 92 to 100% for all plasticizers. This result was confirmed by supplemented Fourier transform infrared spectroscopy-attenuated total reflection (FTIR-ATR) and gravimetric analyses. The technique was used on eight marketed medical devices and showed that they contained different amounts of plasticizers, ranging from 25 to 36% of the PVC weight. These yields, associated with the individual physicochemical properties of each plasticizer, highlight the need for further migration studies. PMID:25577357

  3. Metallic Filters

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Filtration technology originated in a mid-1960s NASA study. The results were distributed to the filter industry, and HR Textron responded, using the study as a point of departure for the development of 421 Filter Media. The HR system is composed of ultrafine steel fibers metallurgically bonded and compressed so that the pore structure is locked in place. The filters are used to filter polyesters and plastics, to remove hydrocarbon streams, etc. Several major companies use the product in chemical applications, pollution control, etc.

  4. Water Filters

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A compact, lightweight electrolytic water filter generates silver ions in concentrations of 50 to 100 parts per billion in the water flow system. Silver ions serve as effective bactericide/deodorizers. Ray Ward requested and received from NASA a technical information package on the Shuttle filter, and used it as basis for his own initial development, a home use filter.

  5. FILTER TREATMENT

    DOEpatents

    Sutton, J.B.; Torrey, J.V.P.

    1958-08-26

    A process is described for reconditioning fused alumina filters which have become clogged by the accretion of bismuth phosphate in the filter pores. The method consists of contacting such filters with fuming sulfuric acid and maintaining such contact for a substantial period of time.

  6. A new balancing three level three dimensional space vector modulation strategy for three level neutral point clamped four leg inverter based shunt active power filter controlling by nonlinear back stepping controllers.

    PubMed

    Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F

    2016-07-01

    This paper proposes a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy that uses redundant voltage vectors to achieve precise, high-performance control of a three-phase three-level four-leg neutral point clamped (NPC) inverter based shunt active power filter (SAPF). The SAPF eliminates source current harmonics, reduces the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensates reactive power in three-phase four-wire electrical networks. The strategy simultaneously generates the gate switching pulses, balances the dc bus capacitor voltages (keeping the two dc bus capacitors at equal voltage), and reduces and fixes the switching frequency of the inverter switches. Nonlinear back-stepping controllers (NBSC) regulate the dc bus capacitor voltages and the SAPF injected currents, providing robustness, stabilizing the system, improving the response, and eliminating the overshoot and undershoot of a traditional PI (proportional-integral) controller. Conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are calculated and compared in terms of the error between the two dc bus capacitor voltages, the SAPF output voltages, the THDv and THDi of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using Sim Power Systems and S-Function of MATLAB/SIMULINK. PMID:27018144

  7. Optimization of a two-step process comprising lipase catalysis and thermal cyclization improves the efficiency of synthesis of six-membered cyclic carbonate from trimethylolpropane and dimethylcarbonate.

    PubMed

    Bornadel, Amin; Hatti-Kaul, Rajni; Sörensen, Kent; Lundmark, Stefan; Pyo, Sang-Hyun

    2013-01-01

    Six-membered cyclic carbonates are potential monomers for phosgene and/or isocyanate free polycarbonates and polyurethanes via ring-opening polymerization. A two-step process for their synthesis comprising lipase-catalyzed transesterification of a polyol, trimethylolpropane (TMP) with dimethylcarbonate (DMC) in a solvent-free system followed by thermal cyclization was optimized to improve process efficiency and selectivity. Using full factorial designed experiments and partial least squares (PLS) modeling for the reaction catalyzed by Novozym®435 (N435; immobilized Candida antarctica lipase B), the optimum conditions for obtaining either high proportion of monocarbonated TMP and TMP-cyclic-carbonate (3 and 4), or dicarbonated TMP and monocarbonated TMP-cyclic-carbonate (5 and 6) were found. The PLS model predicted that the reactions using 15%-20% (w/w) N435 at DMC:TMP molar ratio of 10-30 can reach about 65% total yield of 3 and 4 within 10 h, and 65%-70% total yield of 5 and 6 within 32-37 h, respectively. High consistency between the predicted results and empirical data was shown with 66.1% yield of 3 and 4 at 7 h and 67.4% yield of 5 and 6 at 35 h, using 18% (w/w) biocatalyst and DMC:TMP molar ratio of 20. Thermal cyclization of the product from 7 h reaction, at 110°C in the presence of acetonitrile increased the overall yield of cyclic carbonate 4 from about 2% to more than 75% within 24 h. N435 was reused for five consecutive batches, 10 h each, to give 3+4 with a yield of about 65% in each run. PMID:23125051

  8. Optimizing the flattening filter free beam selection in RapidArc®-based stereotactic body radiotherapy for Stage I lung cancer

    PubMed Central

    Lu, J-Y; Lin, Z; Lin, P-X

    2015-01-01

    Objective: To optimize the flattening filter-free (FFF) beam selection in stereotactic body radiotherapy (SBRT) treatment for Stage I lung cancer in different fraction schemes. Methods: Treatment plans from 12 patients suffering from Stage I lung cancer were designed using the 6XFFF and 10XFFF beams in different fraction schemes of 4 × 12, 3 × 18 and 1 × 34 Gy. Plans were evaluated mainly in terms of organs at risk (OARs) sparing, normal tissue complication probability (NTCP) estimation and treatment efficiency. Results: Compared with the 10XFFF beam, 6XFFF beam showed statistically significant lower dose to all the OARs investigated. The percentage of NTCP reduction for both lung and chest wall was about 10% in the fraction schemes of 4 × 12 and 3 × 18 Gy, whereas only 7.4% and 2.6% was obtained in the 1 × 34 Gy scheme. For oesophagus, heart and spinal cord, the reduction was greater with the 6XFFF beam, but their absolute estimates were <10−6%. The mean beam-on time for 6XFFF and 10XFFF beams at 4 × 12, 3 × 18 and 1 × 34 Gy schemes were 2.2 ± 0.2 vs 1.5 ± 0.1, 3.3 ± 0.9 vs 2.0 ± 0.5 and 6.3 ± 0.9 vs 3.5 ± 0.4 min, respectively. Conclusion: The 6XFFF beam obtains better OARs sparing and lower incidence of NTCP in SBRT treatment of Stage I lung cancer, whereas the 10XFFF beam improves the treatment efficiency. To balance the OARs sparing and intrafractional variation owing to the prolonged treatment time, the authors recommend using the 6XFFF beam in the 4 × 12 and 3 × 18 Gy schemes but the 10XFFF beam in the 1 × 34 Gy scheme. Advances in knowledge: This study optimizes the FFF beam selection in different fraction schemes in SBRT treatment of Stage I lung cancer. PMID:26133073

  9. Performance evaluation of the GA/SA hybrid heuristic optimum filter for optical pattern recognition

    NASA Astrophysics Data System (ADS)

    Yeun, Jin S.; Kim, Nam; Pan, Jae Kyung; Kim, R. S.; Um, J. U.; Kim, Sang H.

    1997-04-01

    In this paper, we apply a genetic and simulated annealing hybrid heuristic to encode an optimal filter for optical pattern recognition. Simulated annealing, a stochastic computational technique, finds near-globally-minimum-cost solutions by following a cooling schedule. Using the complementary advantages of a parallelizable genetic algorithm (GA) and a simulated annealing algorithm (SA), the optimum filters are designed and implemented. The filter, 128 × 128 pixels in size, consists of stepped phases that impose discrete phase delays. Its structure can be divided into rectangular cells such that each cell imparts a discrete phase delay of 0 to 2π rad to the incident wavefront. The eight-phase stepped filters we designed are compared with a phase-only matched filter and a cosine-binary phase-only filter. We focus on the performance of the optimum filter in terms of recognition under translation, scale, and rotation of the image, and its discrimination against similar images. The GA/SA hybrid heuristic realizes an optimum filter for high-efficiency optical reconstruction while requiring fewer iterations to encode it than either algorithm alone.
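
    A bare-bones simulated annealing loop of the kind hybridized here, with the geometric cooling schedule the abstract alludes to. The cost function, neighborhood move, and schedule parameters in the test are illustrative placeholders, not the paper's filter-encoding objective:

    ```python
    import math
    import random

    def simulated_annealing(cost, state, neighbor,
                            t0=1.0, t_min=1e-3, alpha=0.95, iters=100):
        """Generic SA loop with geometric cooling (T <- alpha * T).
        Worse moves are accepted with probability exp(-delta / T),
        which lets the search escape local minima early on."""
        cur, cur_cost = state, cost(state)
        best, best_cost = cur, cur_cost
        T = t0
        while T > t_min:
            for _ in range(iters):
                cand = neighbor(cur)
                c = cost(cand)
                delta = c - cur_cost
                if delta < 0 or random.random() < math.exp(-delta / T):
                    cur, cur_cost = cand, c
                    if c < best_cost:
                        best, best_cost = cand, c
            T *= alpha
        return best, best_cost
    ```

    For a stepped-phase filter, `state` would be the vector of per-cell phase levels and `cost` a reconstruction-error metric; the GA part of the hybrid would supply good starting populations for this loop.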

  10. Game-theoretic Kalman Filter

    NASA Astrophysics Data System (ADS)

    Colburn, Christopher; Bewley, Thomas

    2010-11-01

    The Kalman Filter (KF) is celebrated as the optimal estimator for systems with linear dynamics and Gaussian uncertainty. Although most systems of interest do not have linear dynamics and are not forced by Gaussian noise, the KF is used ubiquitously within industry. Thus, we present a novel estimation algorithm, the Game-theoretic Kalman Filter (GKF), which intelligently hedges between competing sequential filters and does not require the assumption of Gaussian statistics to provide a "best" estimate.

  11. Filtering apparatus

    DOEpatents

    Haldipur, Gaurang B.; Dilmore, William J.

    1992-01-01

    A vertical vessel having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas.

  12. Filtering apparatus

    DOEpatents

    Haldipur, G.B.; Dilmore, W.J.

    1992-09-01

    A vertical vessel is described having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas. 18 figs.

  13. Kaon Filtering For CLAS Data

    SciTech Connect

    McNabb, J.

    2001-01-30

    The analysis of data from CLAS is a multi-step process. After the detectors for a given running period have been calibrated, the data are processed in the so-called pass-1 cooking. During the pass-1 cooking, each event is reconstructed by the program a1c, which finds particle tracks and computes momenta from the raw data. The results are then passed on to several data monitoring and filtering utilities. In CLAS software, a filter is a parameterless function which returns an integer indicating whether an event should be kept by that filter or not. There is a main filter program called g1-filter which controls several specific filters and outputs several files, one for each filter. These files may then be analyzed separately, allowing individuals interested in one reaction channel to work from smaller files than the whole data set would require. There are several constraints on what the filter functions should do. Obviously, the filtered files should be as small as possible; however, the filter should also not reject any events that might be used in the later analysis for which the filter was intended.
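
    The filtering scheme described above can be sketched as follows. One deliberate deviation: CLAS filters are parameterless functions acting on the current event, whereas this illustrative Python sketch passes the event explicitly; the particle cuts and data layout shown are invented for illustration, not CLAS code:

    ```python
    def kaon_filter(event):
        """Keep events with at least one K+ candidate (illustrative cut)."""
        return int(any(p["pid"] == "K+" for p in event["particles"]))

    def two_track_filter(event):
        """Keep events with at least two reconstructed tracks."""
        return int(len(event["particles"]) >= 2)

    # The main filter program controls several specific filters and
    # produces one output stream per filter.
    FILTERS = {"kaon": kaon_filter, "two_track": two_track_filter}

    def run_filters(events):
        """Route each event into the output list of every filter it passes.
        An event may appear in several outputs, mirroring the one-file-per-
        filter design described above."""
        outputs = {name: [] for name in FILTERS}
        for ev in events:
            for name, keep in FILTERS.items():
                if keep(ev):
                    outputs[name].append(ev)
        return outputs
    ```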

  14. Evaluation of various speckle reduction filters on medical ultrasound images.

    PubMed

    Wu, Shibin; Zhu, Qingsong; Xie, Yaoqin

    2013-01-01

    At present, ultrasound is one of the essential tools for noninvasive medical diagnosis. However, speckle noise is inherent in medical ultrasound images and causes decreased resolution and contrast-to-noise ratio. Low image quality is an obstacle for effective feature extraction, recognition, analysis, and edge detection; it also affects image interpretation by doctors and the accuracy of computer-assisted diagnostic techniques. Thus, speckle reduction is a significant and critical step in the pre-processing of ultrasound images. Many speckle reduction techniques have been studied by researchers, but to date there is no comprehensive method that takes all the constraints into consideration. In this paper we discuss seven filters, namely the Lee, Frost, Median, Speckle Reduction Anisotropic Diffusion (SRAD), Perona-Malik's Anisotropic Diffusion (PMAD) and Speckle Reduction Bilateral Filter (SRBF) filters, and a speckle reduction filter based on soft thresholding in the wavelet transform. A comparative study of these filters has been made in terms of preserving features and edges as well as de-noising effectiveness. We computed five established evaluation metrics in order to determine which despeckling algorithm is most effective and optimal for real-time implementation. In addition, the experimental results are demonstrated by filtered images and a statistical data table. PMID:24109896
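
    As a concrete example of one of the filters surveyed, here is a minimal implementation of the classic Lee filter; the window size and noise-variance default are illustrative choices, not the paper's settings:

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def lee_filter(img, win=3, noise_var=0.05):
        """Lee filter: adaptive smoothing driven by local statistics.
        Flat regions (local variance <= noise variance) are replaced by
        the local mean; high-variance regions, typically edges, are left
        closer to the original pixel value, preserving edges."""
        pad = win // 2
        padded = np.pad(img, pad, mode="reflect")
        windows = sliding_window_view(padded, (win, win))
        local_mean = windows.mean(axis=(-2, -1))
        local_var = windows.var(axis=(-2, -1))
        # Weighting factor k: 0 in flat areas, approaching 1 on strong edges.
        k = np.maximum(local_var - noise_var, 0) / np.maximum(local_var, 1e-12)
        return local_mean + k * (img - local_mean)
    ```

    The same local-statistics skeleton underlies the Frost filter, which differs mainly in using an exponentially shaped kernel instead of this piecewise-linear weighting.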

  15. Hot-gas filter manufacturing assessments: Volume 5. Final report, April 15, 1997

    SciTech Connect

    Boss, D.E.

    1997-12-31

    The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle-filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs), intermetallic alloys, and alternate filter geometries. The goal of this effort was to perform a cursory review of the manufacturing processes used by five companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs. While each organization had specific needs, common to all of the filter manufacturers were access to performance testing of the filters to aid process/product development, a better understanding of the stresses the filters will see in service for use in structural design of the components, and a strong process sensitivity study to allow optimization of processing.

  16. SU-E-T-591: Optimizing the Flattening Filter Free Beam Selection in RapidArc-Based Stereotactic Body Radiotherapy for Stage I Lung Cancer

    SciTech Connect

    Huang, B-T; Lu, J-Y

    2015-06-15

    Purpose: To optimize the flattening filter free (FFF) beam energy selection in stereotactic body radiotherapy (SBRT) treatment for stage I lung cancer with different fraction schemes. Methods: Twelve patients suffering from stage I lung cancer were enrolled in this study. Plans were designed using 6XFFF and 10XFFF beams with the most widely used fraction schemes of 4*12 Gy, 3*18 Gy and 1*34 Gy, respectively. The plan quality was appraised in terms of planning target volume (PTV) coverage, conformity of the prescribed dose (CI100%), intermediate dose spillage (R50% and D2cm), organs at risk (OARs) sparing and beam-on time. Results: The 10XFFF beam showed 1% higher maximum and mean doses to the PTV and 4–5% higher R50% compared with the 6XFFF beam in the three fraction schemes, whereas the CI100% and D2cm were similar. Most importantly, the 6XFFF beam exhibited 3–10% lower dose to all the OARs. However, the 10XFFF beam reduced the beam-on time by 31.9±7.2%, 38.7±2.8% and 43.6±4.0% compared with the 6XFFF beam in the 4*12 Gy, 3*18 Gy and 1*34 Gy schemes, respectively. Beam-on time was 2.2±0.2 vs 1.5±0.1, 3.3±0.9 vs 2.0±0.5 and 6.3±0.9 vs 3.5±0.4 minutes for the 6XFFF and 10XFFF beams in the three fraction schemes. Conclusion: The 6XFFF beam obtains better OARs sparing in SBRT treatment for stage I lung cancer, but the 10XFFF beam provides improved treatment efficiency. To balance the OARs sparing and intrafractional variation as a function of prolonged treatment time, the authors recommend using the 6XFFF beam in the 4*12 Gy and 3*18 Gy schemes for better OARs sparing. However, for the 1*34 Gy scheme, the 10XFFF beam is recommended to achieve improved treatment efficiency.

  17. Neutral density filters with Risley prisms: analysis and design.

    PubMed

    Duma, Virgil-Florin; Nicolov, Mirela

    2009-05-10

    We present the analysis and design of optical attenuators with double-prism neutral density filters. A comparative study is performed on three possible device configurations; only two are presented in the literature, and without their design calculus. The characteristic parameters of this optical attenuator with Risley translating prisms are defined for each of the three setups and their analytical expressions are derived: adjustment scale (attenuation range) and interval, minimum transmission coefficient, and sensitivity. The setups are compared to select the optimal device, and from this study the best solution for double-prism neutral density filters, from both a mechanical and an optical point of view, is determined to be two identical, symmetrically movable prisms with no mechanical contact. The design calculus of this optimal device is developed in its essential steps. The parameters of the prisms, particularly their angles, are studied to improve the design, and we demonstrate the maximum attenuation range that this type of attenuator can provide. PMID:19424388
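
    The characteristic parameters defined above (adjustment scale, minimum transmission coefficient) can be illustrated with a toy Beer-Lambert model in which translating an absorbing wedge prism changes the absorber path length linearly with travel. The geometry and constants below are assumptions for illustration only, not the paper's design calculus:

    ```python
    import math

    def transmission(x, alpha=2.0, wedge_angle_deg=10.0, t_min=0.001):
        """Illustrative model: translating an absorbing wedge prism by x (m)
        changes the absorber path length d = t_min + x * tan(theta);
        Beer-Lambert attenuation then gives T = exp(-alpha * d)."""
        d = t_min + x * math.tan(math.radians(wedge_angle_deg))
        return math.exp(-alpha * d)

    def attenuation_range(x_max, **kw):
        """Adjustment scale: ratio of maximum to minimum transmission
        over the available translation range [0, x_max]."""
        return transmission(0.0, **kw) / transmission(x_max, **kw)
    ```

    In this toy model the attenuation range grows with both the wedge angle and the available travel, which is the qualitative trade-off such a design calculus has to balance.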

  18. A step-by-step guide to systematically identify all relevant animal studies

    PubMed Central

    Leenaars, Marlies; Hooijmans, Carlijn R; van Veggel, Nieky; ter Riet, Gerben; Leeflang, Mariska; Hooft, Lotty; van der Wilt, Gert Jan; Tillema, Alice; Ritskes-Hoitinga, Merel

    2012-01-01

    Before starting a new animal experiment, thorough analysis of previously performed experiments is essential from a scientific as well as from an ethical point of view. The method that is most suitable to carry out such a thorough analysis of the literature is a systematic review (SR). An essential first step in an SR is to search and find all potentially relevant studies. It is important to include all available evidence in an SR to minimize bias and reduce hampered interpretation of experimental outcomes. Despite the recent development of search filters to find animal studies in PubMed and EMBASE, searching for all available animal studies remains a challenge. Available guidelines from the clinical field cannot be copied directly to the situation within animal research, and although there are plenty of books and courses on searching the literature, there is no compact guide available to search and find relevant animal studies. Therefore, in order to facilitate a structured, thorough and transparent search for animal studies (in both preclinical and fundamental science), an easy-to-use, step-by-step guide was prepared and optimized using feedback from scientists in the field of animal experimentation. The step-by-step guide will assist scientists in performing a comprehensive literature search and, consequently, improve the scientific quality of the resulting review and prevent unnecessary animal use in the future. PMID:22037056

  19. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
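
    The weighted median that weighted order statistic models generalize can be computed directly. A minimal sketch of the standard lower weighted median, not anything specific to this paper:

    ```python
    def weighted_median(values, weights):
        """Lower weighted median: the smallest value v such that the total
        weight of all samples <= v reaches half of the overall weight.
        With unit weights this reduces to the ordinary (lower) median,
        just as the weighted average reduces to the sample mean."""
        pairs = sorted(zip(values, weights))
        total = sum(weights)
        acc = 0.0
        for v, w in pairs:
            acc += w
            if acc >= total / 2:
                return v
        return pairs[-1][0]
    ```

    A stack filter extends this idea: it applies a positive Boolean function level by level to the thresholded (binary) versions of the signal, and weighted medians are exactly the stack filters whose Boolean function is a weighted majority vote.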

  20. PHOEBE - step by step manual

    NASA Astrophysics Data System (ADS)

    Zasche, P.

    2016-03-01

    An easy step-by-step manual for PHOEBE is presented. It should serve as a starting point for first-time users of PHOEBE analyzing an eclipsing binary light curve. The procedure is demonstrated on one particular detached system, with downloadable data, and is described from start to finish until a trustworthy final fit is reached.

  1. PWM control techniques for rectifier filter minimization

    SciTech Connect

    Ziogas, P.D.; Kang, Y-G; Stefanovic, V.R.

    1985-09-01

    Minimization of input/output filters is an essential step towards manufacturing compact, low-cost static power supplies. Three PWM control techniques that yield substantial filter size reduction for three-phase (self-commutated) rectifiers are presented and analyzed. Filters required by typical line-commutated rectifiers are used as the basis for comparison. Moreover, it is shown that, in addition to filter minimization, two of the three proposed control techniques substantially improve the rectifier's total input power factor.

  2. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
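
    The bilateral filter's two-kernel structure (a spatial kernel on distance and a range kernel on intensity differences) can be sketched in one dimension as follows; the parameter values are illustrative, not the paper's recommendations:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=4):
    """1-D bilateral filter: each output sample is a normalized average
    of its neighbours, weighted by spatial distance (sigma_s) AND by
    intensity difference (sigma_r), so averaging stops at edges."""
    x = np.asarray(signal, dtype=float)
    out = np.empty_like(x)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
    for i in range(len(x)):
        idx = np.clip(i + offsets, 0, len(x) - 1)   # clamp at borders
        range_w = np.exp(-(x[idx] - x[i])**2 / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * x[idx]) / np.sum(w)
    return out

# A noisy step: the noise is smoothed, but the edge survives because the
# range kernel down-weights samples on the far side of the step.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(32), np.ones(32)]) + 0.05 * rng.standard_normal(64)
smoothed = bilateral_1d(step)
```

    Shrinking sigma_r toward zero preserves even small discontinuities; growing it recovers a plain Gaussian blur, which is the parameter trade-off the paper studies.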

  3. Step Pultrusion

    NASA Astrophysics Data System (ADS)

    Langella, A.; Carbone, R.; Durante, M.

    2012-12-01

    The pultrusion process is an efficient technology for the production of composite material profiles. Thanks to this, several studies have been carried out either to expand the range of products made using pultrusion technology or to improve its already high production rate. This study presents a process derived from traditional pultrusion, named the "Step Pultrusion Process Technology" (SPPT). Using step pultrusion, the final section of the composite profile is obtained by progressively increasing the cross section through several resin cure stations. This progressive increase of the composite cross section means that a higher degree of cure can be attained at the exit point of the last die. Mechanical test results of the manufactured pultruded samples have been used to compare the traditional and step pultrusion processes. Finally, ways to improve the new step pultrusion process even further are discussed.

  4. High-resolution wave-theory-based ultrasound reflection imaging using the split-step Fourier and globally optimized Fourier finite-difference methods

    SciTech Connect

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wavenumber domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wavenumber domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate that significant improvements are provided in both image quality and resolution.
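
    The two-step extrapolation described above can be sketched for a 1-D monochromatic field: an exact phase shift in the wavenumber domain for a reference medium, then a space-domain phase screen for the local velocity deviation. This is an illustrative toy, not the patented reconstruction method, and all numerical values are assumptions:

```python
import numpy as np

def split_step(field, dx, dz, freq, c_ref, c_x):
    """Advance a monochromatic wavefield one depth step dz.
    Step 1: phase shift in the wavenumber domain for a homogeneous
            reference medium with speed c_ref (evanescent part clipped).
    Step 2: phase screen in the space domain compensating the local
            deviation of the velocity profile c_x from c_ref."""
    n = field.size
    kx = 2 * np.pi * np.fft.fftfreq(n, dx)
    k_ref = 2 * np.pi * freq / c_ref
    kz = np.sqrt(np.maximum(k_ref**2 - kx**2, 0.0))
    field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))   # step 1
    screen = np.exp(1j * 2 * np.pi * freq * (1 / c_x - 1 / c_ref) * dz)
    return field * screen                                           # step 2

# Propagate a Gaussian beam one step through a weakly heterogeneous slab
# (speeds loosely in the soft-tissue range; purely illustrative numbers).
x = np.linspace(-0.05, 0.05, 256)
f0 = np.exp(-(x / 0.01)**2).astype(complex)
c = 1500.0 * (1 + 0.02 * np.exp(-(x / 0.02)**2))
f1 = split_step(f0, x[1] - x[0], 1e-3, 5e6, 1500.0, c)
```

    Because both factors are unit-modulus phase terms, each step is energy-conserving; repeating it interval by interval gives the recursive inward continuation the abstract describes.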

  5. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; D'Souza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A popular implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
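
    A scalar sketch of the Schmidt-Kalman ("consider") measurement update may clarify the idea: the consider parameter's covariance inflates the innovation covariance, but the parameter itself is never corrected. This is the textbook form, not the UDU implementation the paper proposes, and the numbers are hypothetical:

```python
def consider_update(x, Pxx, Pxp, Ppp, z, Hx, Hp, R):
    """Scalar Schmidt-Kalman update for measurement z = Hx*x + Hp*p + v,
    where p is a zero-mean 'consider' parameter with variance Ppp that
    is accounted for in the gain but never estimated."""
    S = Hx * Pxx * Hx + 2 * Hx * Pxp * Hp + Hp * Ppp * Hp + R  # innovation var
    Kx = (Pxx * Hx + Pxp * Hp) / S          # gain for the state only (Kp = 0)
    x = x + Kx * (z - Hx * x)               # predicted measurement uses p = 0
    Pxx = Pxx - Kx * (Hx * Pxx + Hp * Pxp)  # shrinks less than the optimal KF
    Pxp = Pxp - Kx * (Hx * Pxp + Hp * Ppp)  # cross-covariance is updated
    return x, Pxx, Pxp, Ppp                 # Ppp unchanged by construction

# A measurement corrupted by an unestimated bias p with variance 0.5:
x, Pxx, Pxp, Ppp = consider_update(0.0, 1.0, 0.0, 0.5, z=1.0, Hx=1.0, Hp=1.0, R=0.1)
print(Pxx)  # ≈ 0.375: larger (more honest) than the 0.0909 a standard
            # Kalman update that ignores the bias would report
```

    The point of the non-optimality the abstract mentions is visible here: the consider gain is deliberately suboptimal so that the filter does not claim confidence it cannot justify.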

  6. Aquatic Plants Aid Sewage Filter

    NASA Technical Reports Server (NTRS)

    Wolverton, B. C.

    1985-01-01

    Method of wastewater treatment combines micro-organisms and aquatic plant roots in a filter bed. Treatment occurs as liquid flows up through the system. Micro-organisms, attached to the rocky base material of the filter, act in several steps to decompose organic matter in the wastewater. Vascular aquatic plants (typically reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from the water through finely divided roots.

  7. Filtering in SPECT Image Reconstruction

    PubMed Central

    Lyra, Maria; Ploussi, Agapi

    2011-01-01

    Single photon emission computed tomography (SPECT) imaging is widely implemented in nuclear medicine, as its clinical role in the diagnosis and management of several diseases is often very helpful (e.g., myocardial perfusion imaging). The quality of SPECT images is degraded by several factors, such as noise due to the limited number of counts, attenuation, or scatter of photons. Image filtering is necessary to compensate for these effects and, therefore, to improve image quality. The goal of filtering in tomographic images is to suppress statistical noise while simultaneously preserving spatial resolution and contrast. The aim of this work is to describe the most widely used filters in SPECT applications and how they affect image quality. The choice of filter type, cut-off frequency and order is a major problem in clinical routine. In many clinical cases, information on specific parameters is not provided, and findings cannot be extrapolated to other similar SPECT imaging applications. A literature review for the determination of the most used filters in cardiac, brain, bone, liver, kidney, and thyroid applications is also presented. As the overview shows, no filter is perfect, and the selection of the proper filter is, most of the time, done empirically. The standardization of image-processing results may limit the filter types for each SPECT examination to a certain few filters and some of their parameters. Standardization also helps in reducing image processing time, as the filters and their parameters must be standardized before being put to clinical use. Commercial reconstruction software selections then lead to comparable results between departments. Manufacturers normally supply default filters/parameters, but these may not be suitable in all clinical situations. After proper standardization, it is possible to use many suitable filters or one optimal filter. PMID:21760768
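
    One of the most commonly used SPECT filters of the kind surveyed here, the Butterworth low-pass, can be sketched as a frequency-domain operation on a projection; the cut-off and order below are typical illustrative choices, not clinical recommendations:

```python
import numpy as np

def butterworth_lowpass(signal, cutoff=0.25, order=5):
    """Frequency-domain Butterworth low-pass: H(f) = 1 / (1 + (f/fc)^(2n)).
    `cutoff` is in cycles/sample (Nyquist = 0.5); higher `order` gives a
    sharper roll-off between the pass band and the stop band."""
    f = np.abs(np.fft.fftfreq(signal.size))
    h = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft(np.fft.fft(signal) * h))

# A low-frequency structure buried in counting-like noise: the filter
# suppresses the noise while the slow component is retained.
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 0.05 * np.arange(128))
noisy = clean + 0.3 * rng.standard_normal(128)
filtered = butterworth_lowpass(noisy)
```

    The clinical trade-off described in the abstract lives in these two parameters: a lower cut-off suppresses more noise but sacrifices resolution and contrast.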

  8. A novel band-pass filter based on a periodically drilled SIW structure

    NASA Astrophysics Data System (ADS)

    Coves, A.; Torregrosa-Penalva, G.; San-Blas, A. A.; Sánchez-Soriano, M. A.; Martellosio, A.; Bronchalo, E.; Bozzi, M.

    2016-04-01

    The design and fabrication of a band-pass step impedance filter based on high and low dielectric constant sections has been realized on substrate integrated waveguide (SIW) technology. The overall process includes the design of the ideal band-pass prototype filter, where the implementation of the impedance inverters has been carried out by means of waveguide sections of lower permittivity. This can be practically achieved by implementing arrays of air holes along the waveguide. Several SIW structures with and without arrays of air holes have been simulated and fabricated in order to experimentally evaluate their relative permittivity. Additionally, the equivalent filter in SIW technology has been designed and optimized. Finally, a prototype of the designed filter has been fabricated and measured, showing a good agreement between measurements and simulations, which demonstrates the validity of the proposed design approach.

  9. Modelling of diffraction grating based optical filters for fluorescence detection of biomolecules.

    PubMed

    Kovačič, M; Krč, J; Lipovšek, B; Topič, M

    2014-07-01

    The detection of biomolecules based on fluorescence measurements is a powerful diagnostic tool for the acquisition of genetic, proteomic and cellular information. One key performance-limiting factor remains the integrated optical filter, which is designed to reject strong excitation light while transmitting weak emission (fluorescent) light to the photodetector. Conventional filters have several disadvantages. For instance, absorbing filters, like those made from amorphous silicon carbide, exhibit low rejection ratios, especially in the case of small-Stokes-shift fluorophores (e.g. green fluorescent protein, GFP, with λexc = 480 nm and λem = 510 nm), whereas interference filters comprising many layers require complex fabrication. This paper describes an alternative solution based on dielectric diffraction gratings. These filters are not only highly efficient but also require a smaller number of manufacturing steps. Using FEM-based optical modelling as a design optimization tool, three filtering concepts are explored: (i) a diffraction grating fabricated on the surface of an absorbing filter, (ii) a diffraction grating embedded in a host material with a low refractive index, and (iii) a combination of an embedded grating and an absorbing filter. Both concepts involving an embedded grating show high rejection ratios (over 100,000) for the case of GFP, but also high sensitivity to manufacturing errors and variations in the incident angle of the excitation light. Despite this, simulations show that a 60-fold improvement in the rejection ratio relative to a conventional flat absorbing filter can be obtained using an optimized embedded diffraction grating fabricated on top of an absorbing filter. PMID:25071964

  10. Microwave assisted biodiesel production from Jatropha curcas L. seed by two-step in situ process: optimization using response surface methodology.

    PubMed

    Jaliliannosrati, Hamidreza; Amin, Nor Aishah Saidina; Talebian-Kiakalaieh, Amin; Noshadi, Iman

    2013-05-01

    The synthesis of fatty acid ethyl esters (FAEEs) by a two-step in situ (reactive) esterification/transesterification from Jatropha curcas L. (JCL) seeds using a microwave system has been investigated. Free fatty acid content was reduced from 14% to less than 1% in the first step using H₂SO₄ as acid catalyst after 35 min of microwave irradiation heating. The organic phase from the first step was subjected to a second reaction by adding 5 N KOH in ethanol as the basic catalyst. Response surface methodology (RSM) based on central composite design (CCD) was utilized to design the experiments and analyze the influence of the process variables (seed particle size, irradiation time, agitation speed and catalyst loading) on the conversion of triglycerides (TGs) in the second step. The highest conversion of triglycerides to fatty acid ethyl esters (FAEEs) was 97.29% at the optimum conditions: <0.5 mm seed size, 12.21 min irradiation time, 8.15 mL KOH catalyst loading and 331.52 rpm agitation speed in the 110 W microwave power system. PMID:23567732
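
    The RSM workflow referred to above (fit a second-order model to designed experiments, then solve for the stationary point) can be sketched on synthetic data; the factors, responses and optimum below are hypothetical, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (30, 2))                  # two coded factors
# Synthetic response surface with a maximum at (0.3, -0.2) plus noise.
y = 90 - 5 * (X[:, 0] - 0.3)**2 - 3 * (X[:, 1] + 0.2)**2 \
    + 0.1 * rng.standard_normal(30)

# Quadratic model: y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted surface: set the gradient to zero and
# solve the 2x2 linear system H @ x = -[b1, b2].
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
optimum = np.linalg.solve(H, -b[1:3])
print(optimum)  # near the true optimum (0.3, -0.2)
```

    A central composite design would place the 30 runs at factorial, axial and center points rather than at random; the model-fitting and optimization steps are the same.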

  11. Bioaerosol DNA Extraction Technique from Air Filters Collected from Marine and Freshwater Locations

    NASA Astrophysics Data System (ADS)

    Beckwith, M.; Crandall, S. G.; Barnes, A.; Paytan, A.

    2015-12-01

    Bioaerosols are composed of microorganisms suspended in air. These organisms include bacteria, fungi, viruses, and protists. Microbes introduced into the atmosphere can drift, primarily by wind, into natural environments different from their point of origin. Although bioaerosols can impact atmospheric dynamics as well as the ecology and biogeochemistry of terrestrial systems, very little is known about the composition of bioaerosols collected from marine and freshwater environments. The first step in determining the composition of airborne microbes is to successfully extract environmental DNA from air filters. We asked 1) can DNA be extracted from quartz (SiO2) air filters? and 2) how can we optimize the DNA yield for downstream metagenomic sequencing? Aerosol filters were collected and archived on a weekly basis from aquatic sites (USA, Bermuda, Israel) over the course of 10 years. We successfully extracted DNA from a subsample of ~20 filters. We modified a DNA extraction protocol (Qiagen) by adding a bead-beating step to mechanically shear cell walls in order to optimize our DNA product. We quantified our DNA yield using a spectrophotometer (Nanodrop 1000). Results indicate that DNA can indeed be extracted from quartz filters. The additional bead-beating step helped increase our yield: up to twice as much DNA product was obtained compared to when this step was omitted. Moreover, bioaerosol DNA content varies across time. For instance, the DNA extracted from filters from Lake Tahoe, USA, collected near the end of June decreased from 9.9 ng/μL in 2007 to 3.8 ng/μL in 2008. Further next-generation sequencing analysis of our extracted DNA will be performed to determine the composition of these microbes. We will also model the meteorological and chemical factors that are good predictors of microbial composition for our samples over time and space.

  12. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays.

    PubMed

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2016-01-01

    This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring the knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays in the filters accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices. PMID:27338387
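
    The matrix-weighted least-squares fusion step can be illustrated in its simplest special case: fusing two independent, delay-free local estimates by inverse-covariance weighting. This sketch omits the transmission delays, random parameter matrices and correlated noises the paper actually handles:

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Least-squares fusion of two independent local estimates: weight
    each by the inverse of its own covariance, so the more confident
    sensor dominates each component. The fused covariance is
    (P1^-1 + P2^-1)^-1, never larger than either input."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P_fused = np.linalg.inv(P1_inv + P2_inv)
    x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)
    return x_fused, P_fused

# Sensor 1 is confident about component 0, sensor 2 about component 1.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([3.0, 2.0]), np.diag([4.0, 1.0])
xf, Pf = fuse(x1, P1, x2, P2)
print(xf)  # ≈ [1.4 1.6]: each component pulled toward the better sensor
```

    With cross-correlated local errors, as in the paper, the scalar inverse-covariance weights become full matrix weights derived from the error cross-covariances.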

  13. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays

    PubMed Central

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2016-01-01

    This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring the knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays in the filters accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices. PMID:27338387

  14. Water Filters

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Seeking a more effective method of filtering potable water that was highly contaminated, Mike Pedersen, founder of Western Water International, learned that NASA had conducted extensive research into methods of purifying water on board manned spacecraft. The key is Aquaspace Compound, a proprietary WWI formula that scientifically blends various types of glandular activated charcoal with other active and inert ingredients. Aquaspace systems remove some substances, such as chlorine, by atomic adsorption; other types of organic chemicals by mechanical filtration; and still others by catalytic reaction. Aquaspace filters are finding wide acceptance in industrial, commercial, residential and recreational applications in the U.S. and abroad.

  15. Filter selection based on light source for multispectral imaging

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Xu, Haisong

    2016-07-01

    In multispectral imaging, it is necessary to select a reduced number of filters to balance imaging efficiency against spectral reflectance recovery accuracy. Due to the combined effect of the filters and the light source on reflectance recovery, the optimal filters are influenced by the light source employed in the multispectral imaging system. By casting filter selection as an optimization problem, the selection of optimal filters corresponding to the employed light source proceeds with respect to a set of target samples utilizing a genetic algorithm, regardless of the detailed spectral characteristics of the light source, filters, and sensor. Under three light sources with distinct spectral power distributions, the proposed filter selection method was evaluated on a filter-wheel-based multispectral device with a set of interference filters. It was verified that the filters derived by the proposed method achieve better spectral and colorimetric accuracy of reflectance recovery than the conventional method under different light sources.
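
    Casting filter selection as an optimization over subsets can be sketched with a tiny genetic algorithm; the data, fitness function and GA operators below are hypothetical stand-ins for the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_filters, n_pick, n_samples = 31, 12, 3, 40
filters = rng.random((n_filters, n_bands))        # candidate filter sensitivities
reflectances = rng.random((n_samples, n_bands))   # training reflectance samples

def recovery_error(subset):
    """Simulate camera responses for the chosen filters, recover the
    reflectances by linear least squares, and return the mean squared error."""
    S = filters[list(subset)]                # selected filters, shape (k, bands)
    responses = reflectances @ S.T           # simulated camera responses
    W, *_ = np.linalg.lstsq(responses, reflectances, rcond=None)
    return np.mean((responses @ W - reflectances) ** 2)

def select_filters(pop_size=20, generations=30):
    """Tiny elitist GA over filter subsets: keep the best half of the
    population, mutate one index per survivor to create the other half."""
    pop = [tuple(sorted(rng.choice(n_filters, n_pick, replace=False)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=recovery_error)
        survivors = pop[: pop_size // 2]
        children = []
        for s in survivors:
            child = list(s)
            child[rng.integers(n_pick)] = rng.integers(n_filters)   # point mutation
            children.append(tuple(sorted(child)) if len(set(child)) == n_pick else s)
        pop = survivors + children
    return min(pop, key=recovery_error)

best = select_filters()   # indices of the selected filter subset
```

    In the paper's setting the fitness would fold in the light source's spectral power distribution and a colorimetric error metric, but the search loop has the same shape.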

  16. Influence of multi-step heat treatments in creep age forming of 7075 aluminum alloy: Optimization for springback, strength and exfoliation corrosion

    SciTech Connect

    Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.

    2012-11-15

    Multi-step heat treatments, comprising high-temperature forming (150 °C/24 h plus 190 °C for several minutes) and subsequent low-temperature forming (120 °C for 24 h), are developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain boundary precipitates. Furthermore, the achievement of high strength is related to the uniform distribution of ultrafine η′ precipitates within grains. - Highlights: ► Creep age forming developed for manufacturing of aircraft wing panels from aluminum alloy. ► A good combination of properties with minimal springback is required in this component. ► This requirement can be met through appropriate heat treatments. ► Multi-step cycles developed in creep age forming of AA7075 to improve springback and properties. ► Results indicate simultaneous enhancement of properties and shape accuracy (lower springback).

  17. Optimization of medium for one-step fermentation of inulin extract from Jerusalem artichoke tubers using Paenibacillus polymyxa ZJ-9 to produce R,R-2,3-butanediol.

    PubMed

    Gao, Jian; Xu, Hong; Li, Qiu-jie; Feng, Xiao-hai; Li, Sha

    2010-09-01

    A medium for the one-step fermentation of raw inulin extract from Jerusalem artichoke tubers by Paenibacillus polymyxa ZJ-9 to produce R,R-2,3-butanediol (R,R-2,3-BD) was developed. Inulin, K₂HPO₄ and NH₄Cl were found to be the key factors in the fermentation according to the results obtained from a Plackett-Burman experimental design. The optimal concentration range of the three factors was examined by the path of steepest ascent, and their optimal concentrations were further investigated according to a Box-Behnken design and determined to be 77.14 g/L, 3.09 g/L and 0.93 g/L, respectively. Under the optimal conditions, the concentration of the obtained R,R-2,3-BD was 36.92 g/L, at more than 98% optical purity. Compared with the other investigated carbon sources, fermentation of the raw inulin extract afforded the highest yield of R,R-2,3-BD. This process featured one-step fermentation of inulin without a separate hydrolysis step, which greatly decreased the raw material cost and thus facilitated its practical application. PMID:20452206

  18. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.

  19. Multilevel Ensemble Transform Particle Filtering

    NASA Astrophysics Data System (ADS)

    Gregory, Alastair; Cotter, Colin; Reich, Sebastian

    2016-04-01

    This presentation extends the Multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, Multilevel Monte Carlo is applied to a certain variant of the particle filter, the Ensemble Transform Particle Filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles.
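
    The multilevel variance-reduction idea being extended here rests on the telescoping identity E[P_L] = Σ_l E[P_l − P_{l−1}], with coupled coarse/fine samples keeping each correction's variance small. A standard toy example (an Euler scheme for geometric Brownian motion, not the filtering application) sketches the mechanism:

```python
import numpy as np

rng = np.random.default_rng(3)
S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0   # illustrative SDE parameters

def coupled_paths(level, n_paths):
    """Euler-Maruyama endpoints on the fine grid (2^level steps) and on a
    coupled coarse grid (half the steps) built from the SAME Brownian
    increments; the coupling is what keeps Var(P_l - P_{l-1}) small."""
    n_fine = 2 ** level
    dt = T / n_fine
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_fine))
    s_fine = np.full(n_paths, S0)
    for k in range(n_fine):
        s_fine = s_fine * (1 + mu * dt + sigma * dW[:, k])
    if level == 0:
        return s_fine, np.zeros(n_paths)          # P_{-1} := 0
    s_coarse = np.full(n_paths, S0)
    dW_coarse = dW[:, 0::2] + dW[:, 1::2]         # pairwise-summed increments
    for k in range(n_fine // 2):
        s_coarse = s_coarse * (1 + mu * 2 * dt + sigma * dW_coarse[:, k])
    return s_fine, s_coarse

# Telescoping estimator: many cheap samples on the coarsest level,
# progressively fewer on the fine levels.
estimate = 0.0
for level, n_paths in [(0, 100000), (1, 20000), (2, 5000), (3, 2000)]:
    fine, coarse = coupled_paths(level, n_paths)
    estimate += np.mean(fine - coarse)
# The exact answer is E[S_T] = S0 * exp(mu * T) ≈ 1.0513; the estimate lands close.
```

    In the ETPF setting, resampling would destroy this coarse/fine coupling, which is why the method re-establishes it with an optimal transport map.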

  20. Testing Dual Rotary Filters - 12373

    SciTech Connect

    Herman, D.T.; Fowley, M.D.; Stefanko, D.B.; Shedd, D.A.; Houchens, C.L.

    2012-07-01

    The Savannah River National Laboratory (SRNL) installed and tested two hydraulically connected SpinTek® Rotary Micro-filter units to determine the behavior of a multiple-filter system and develop a multi-filter automated control scheme. Developing and testing the control of multiple filters was the next step in the development of the rotary filter for deployment. The test stand was assembled using as much of the hardware planned for use in the field as possible, including instrumentation and valving. The control scheme developed will serve as the basis for the scheme used in deployment. The multi-filter setup was controlled via an Emerson DeltaV control system running version 10.3 software. Emerson model MD controllers were installed to run the control algorithms developed during this test. Savannah River Remediation (SRR) Process Control Engineering personnel developed the software used to operate the process test model. While a variety of control schemes were tested, two primary algorithms provided extremely stable control as well as significant resistance to process upsets that could lead to equipment interlock conditions. The control system was tuned to provide satisfactory response to changing conditions during the operation of the multi-filter system. Stability was maintained through the startup and shutdown of one of the filter units while the second was still in operation. The equipment selected for deployment, including the concentrate discharge control valve, the pressure transmitters, and flow meters, performed well. Automation of the valve control integrated well with the control scheme and, when used in concert with the other control variables, allowed automated control of the dual rotary filter system. Experience acquired on multi-filter system behavior and with the system layout during this test helped to identify areas where the current deployment rotary filter installation design could be improved. Completion of this testing provides the necessary

  1. Optimization of pressurized liquid extraction using a multivariate chemometric approach and comparison of solid-phase extraction cleanup steps for the determination of polycyclic aromatic hydrocarbons in mosses.

    PubMed

    Foan, L; Simon, V

    2012-09-21

    A factorial design was used to optimize the extraction of polycyclic aromatic hydrocarbons (PAHs) from mosses, plants used as biomonitors of air pollution. The analytical procedure consists of pressurized liquid extraction (PLE) followed by solid-phase extraction (SPE) cleanup, in association with analysis by high performance liquid chromatography coupled with fluorescence detection (HPLC-FLD). For method development, homogeneous samples were prepared with large quantities of the mosses Isothecium myosuroides Brid. and Hypnum cupressiforme Hedw., collected from a Spanish Nature Reserve. A factorial design was used to identify the optimal PLE operational conditions: 2 static cycles of 5 min at 80 °C. The analytical procedure performed with PLE showed similar recoveries (∼70%) and total PAH concentrations (∼200 ng g⁻¹) to those found using Soxtec extraction, with the advantage of reducing solvent consumption by a factor of three (30 mL against 100 mL per sample) and taking a fifth of the time (24 samples extracted automatically in 8 h against 2 samples in 3.5 h). The performance of SPE normal phases (NH₂, Florisil, silica and activated alumina) generally used for organic matrix cleanup was also compared. Florisil appeared to be the most selective phase and ensured the highest PAH recoveries. The optimal analytical procedure was validated with a reference material and applied to moss samples from a remote Spanish site in order to determine spatial and inter-species variability. PMID:22885040
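
    A full factorial design simply enumerates every combination of factor levels; a minimal sketch with hypothetical PLE factors and levels (not the paper's design):

```python
from itertools import product

# Each factor gets a low and a high level; names and values are illustrative.
factors = {
    "temperature_C": [60, 80],
    "static_cycles": [1, 2],
    "static_time_min": [5, 10],
}

# A full 2^3 factorial enumerates every combination of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 8 runs
# The main effect of a factor is then the mean response difference between
# the runs at its high level and the runs at its low level.
```

    Running all combinations is what lets a factorial design separate main effects from interactions with so few experiments.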

  2. Signal interference RF photonic bandstop filter.

    PubMed

    Aryanfar, Iman; Choudhary, Amol; Shahnia, Shayan; Pagani, Mattia; Liu, Yang; Marpaung, David; Eggleton, Benjamin J

    2016-06-27

    In the microwave domain, signal interference bandstop filters with high extinction and wide stopbands are achieved through destructive interference of two signals. Implementation of this filtering concept using RF photonics leads to unique filters with high performance, an enhanced tuning range and reconfigurability. Here we demonstrate an RF photonic signal interference filter, achieved through the combination of precise synthesis of stimulated Brillouin scattering (SBS) loss with advanced phase and amplitude tailoring of RF modulation sidebands. We achieve a square-shaped, 20-dB-extinction RF photonic filter over a tunable bandwidth of up to 1 GHz with a central frequency tuning range of 16 GHz, using a low SBS loss of ~3 dB. Wideband destructive interference in this novel filter decouples the filter suppression from its bandwidth and shape factor, allowing the creation of a filter in which all qualities are optimized. PMID:27410650

  3. Holographic photopolymer linear variable filter with enhanced blue reflection.

    PubMed

    Moein, Tania; Ji, Dengxin; Zeng, Xie; Liu, Ke; Gan, Qiaoqiang; Cartwright, Alexander N

    2014-03-12

    A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443

  4. Microfabrication of three-dimensional filters for liposome extrusion

    NASA Astrophysics Data System (ADS)

    Baldacchini, Tommaso; Nuñez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben

    2015-03-01

    Liposomes play an important role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and a uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidic chip for liposome extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete lab-on-a-chip solution for liposome preparation, from vesicle self-assembly to optical characterization.

  5. Holographic Photopolymer Linear Variable Filter with Enhanced Blue Reflection

    PubMed Central

    2015-01-01

    A single beam one-step holographic interferometry method was developed to fabricate porous polymer structures with controllable pore size and location to produce compact graded photonic bandgap structures for linear variable optical filters. This technology is based on holographic polymer dispersed liquid crystal materials. By introducing a forced internal reflection, the optical reflection throughout the visible spectral region, from blue to red, is high and uniform. In addition, the control of the bandwidth of the reflection resonance, related to the light intensity and spatial porosity distributions, was investigated to optimize the optical performance. The development of portable and inexpensive personal health-care and environmental multispectral sensing/imaging devices will be possible using these filters. PMID:24517443

  6. Water Filter

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A compact, lightweight electrolytic water sterilizer, available through Ambassador Marketing, generates silver ions in concentrations of 50 to 100 parts per billion in a water flow system. The silver ions serve as an effective bactericide/deodorizer. Tap water passes through a filtering element of silver that has been chemically plated onto activated carbon. The silver inhibits bacterial growth and the activated carbon removes objectionable tastes and odors caused by the addition of chlorine and other chemicals in the municipal water supply. The three models available are a kitchen unit, a "Tourister" unit for portable use while traveling, and a refrigerator unit that attaches to the ice cube water line. A filter will treat 5,000 to 10,000 gallons of water.

  7. Eyeglass Filters

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Biomedical Optical Company of America's Suntiger lenses eliminate more than 99% of harmful light wavelengths. The NASA-derived lenses make scenes more vivid in color and also increase the wearer's visual acuity. Distant objects, even on hazy days, appear crisp and clear; mountains seem closer, glare is greatly reduced, and clouds stand out. Daytime use protects the retina from bleaching in bright light, thus improving night vision. Filtering helps prevent a variety of eye disorders, in particular cataracts and age-related macular degeneration.

  8. Stepped nozzle

    DOEpatents

    Sutton, G.P.

    1998-07-14

    An insert is described which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment. 5 figs.

  9. Stepped nozzle

    DOEpatents

    Sutton, George P.

    1998-01-01

    An insert which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment.

  10. Multilevel ensemble Kalman filtering

    DOE PAGESBeta

    Hoel, Hakon; Law, Kody J. H.; Tempone, Raul

    2016-06-14

    This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
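    The analysis step that the multilevel scheme builds on can be sketched for a scalar state (a minimal single-level illustration with an assumed direct observation model, not the multilevel estimator itself; all numbers are illustrative):

```python
import random

def enkf_update(ensemble, y_obs, obs_noise_std, rng):
    """Perturbed-observation EnKF analysis step for a scalar state
    that is observed directly (H = 1)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast sample variance
    gain = var / (var + obs_noise_std ** 2)  # K = P H^T (H P H^T + R)^-1 in 1-D
    updated = []
    for x in ensemble:
        # Each member assimilates an independently perturbed observation.
        y_pert = y_obs + rng.gauss(0.0, obs_noise_std)
        updated.append(x + gain * (y_pert - x))
    return updated

rng = random.Random(0)
prior = [rng.gauss(0.0, 2.0) for _ in range(500)]  # forecast ensemble
posterior = enkf_update(prior, y_obs=1.0, obs_noise_std=0.5, rng=rng)
post_mean = sum(posterior) / len(posterior)  # pulled toward the observation
```

    The multilevel variant replaces the single ensemble with a hierarchy of coupled ensembles integrated on successively finer time grids.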

  11. Imaging task-based optimal kV and mA selection for CT radiation dose reduction: from filtered backprojection (FBP) to statistical model based iterative reconstruction (MBIR)

    NASA Astrophysics Data System (ADS)

    Li, Ke; Gomez-Cardona, Daniel; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-03-01

    Optimal selections of tube potential (kV) and tube current (mA) are essential in maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take such constraints, as well as the specific diagnostic imaging task, into consideration. For conventional quasi-linear CT systems employing the linear filtered back-projection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on kV and mA settings. In these cases, zero-frequency metrics such as the contrast-to-noise ratio (CNR) or the dose-normalized CNR (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach based on modern signal detection theory and the corresponding frequency-dependent analysis has been proposed to perform the kV and mA optimization for both FBP and MBIR. By performing exhaustive measurements of the task-based detectability index through the technically accessible kV-mA parameter space, iso-detectability contours were generated and overlaid on top of iso-dose contours, from which the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.
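    For the FBP case, zero-frequency selection amounts to an exhaustive search over the accessible technique chart; a toy sketch follows (the contrast, noise, and dose models below are crude stand-in assumptions, not the paper's physics):

```python
def cnr(kv, ma):
    """Toy contrast-to-noise ratio model (both terms are assumptions)."""
    contrast = 100.0 / kv                     # contrast falls with tube potential
    noise = (ma * (kv / 100.0) ** 2) ** -0.5  # quantum noise ~ 1/sqrt(tube output)
    return contrast / noise

def dose(kv, ma):
    """Toy dose proxy: output grows linearly in mA, quadratically in kV."""
    return ma * (kv / 100.0) ** 2

def optimal_setting(kv_grid, ma_grid, cnr_target):
    """Exhaustive grid search: lowest-dose (kV, mA) pair meeting the CNR target."""
    feasible = [(dose(kv, ma), kv, ma)
                for kv in kv_grid for ma in ma_grid
                if cnr(kv, ma) >= cnr_target]
    return min(feasible, default=None)

best = optimal_setting([80, 100, 120, 140], [50, 100, 150, 200], cnr_target=12.0)
```

    The task-based analogue replaces `cnr` with a frequency-dependent detectability index, but the search structure over iso-detectability versus iso-dose contours is the same.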

  12. Next Step toward Optimization of GRP Receptor Avidities: Determination of the Minimal Distance between BBN(7-14) Units in Peptide Homodimers.

    PubMed

    Fischer, G; Lindner, S; Litau, S; Schirrmacher, R; Wängler, B; Wängler, C

    2015-08-19

    As the gastrin releasing peptide receptor (GRPR) is overexpressed on several tumor types, it represents a promising target for the specific in vivo imaging of these tumors using positron emission tomography (PET). We were able to show that PESIN-based peptide multimers can result in substantially higher GRPR avidities, highly advantageous in vivo pharmacokinetics and tumor imaging properties compared to the respective monomers. However, the minimal distance between the peptidic binders, resulting in the lowest possible system entropy while enabling a concomitant GRPR binding and thus optimized receptor avidities, has not been determined so far. Thus, we aimed here to identify the minimal distance between two GRPR-binding peptides in order to provide the basis for the development of highly avid GRPR-specific PET imaging agents. We therefore synthesized dimers of the GRPR-binding bombesin analogue BBN(7-14) on a dendritic scaffold, exhibiting different distances between both peptide binders. The homodimers were further modified with the chelator NODAGA, radiolabeled with (68)Ga, and evaluated in vitro regarding their GRPR avidity. We found that the most potent of the newly developed radioligands exhibits GRPR avidity twice as high as the most potent reference compound known so far, and that a minimal distance of 62 bond lengths between both peptidic binders within the homodimer can result in concomitant peptide binding and optimal GRPR avidities. These findings answer the question as to what molecular design should be chosen when aiming at the development of highly avid homobivalent peptidic ligands addressing the GRPR. PMID:26200324

  13. The Lockheed alternate partial polarizer universal filter

    NASA Technical Reports Server (NTRS)

    Title, A. M.

    1976-01-01

    A tunable birefringent filter using an alternate partial polarizer design has been built. The filter has a transmission of 38% in polarized light. Its full width at half maximum is 0.09 A at 5500 A. It is tunable from 4500 to 8500 A by means of stepping-motor-actuated rotating half-wave plates and polarizers. Wavelength commands and thermal compensation commands are generated by a PDP-11/10 minicomputer. The alternate partial polarizer universal filter is compared with the universal birefringent filter, and the design techniques, construction methods, and filter performance are discussed in some detail. Based on experience with this filter, some conclusions regarding the future of birefringent filters are drawn.

  14. Nonlinear filtering in oil/gas reservoir simulation: filter design

    SciTech Connect

    Arnold, E.M.; Voss, D.A.; Mayer, D.W.

    1980-10-01

    In order to provide an additional mode of utility to the USGS reservoir model VARGOW, a nonlinear filter was designed and incorporated into the system. As a result, optimal (in the least squares sense) estimates of reservoir pressure, liquid mass, and gas cap plus free gas mass are obtained from an input of reservoir initial condition estimates and pressure history. These optimal estimates are provided continuously for each time after the initial time, and the input pressure history is allowed to be corrupted by measurement error. Preliminary testing of the VARGOW filter was begun and the results show promise. Synthetic data which could be readily manipulated during testing was used in tracking tests. The results were positive when the initial estimates of the reservoir initial conditions were reasonably close. Further testing is necessary to investigate the filter performance with real reservoir data.

  15. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    SciTech Connect

    Monsefi, Farid; Carlsson, Linus; Silvestrov, Sergei; Rančić, Milica; Otterskog, Magnus

    2014-12-10

    To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. Confined within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen and the outside of the boundary is in form of a Perfect Electric Conducting (PEC) surface. Inserted in the computer implementation, a semi-norm has been applied to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be the free-space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. The numerical instability of the algorithms can be rather easily avoided with respect to the Courant stability condition, which is frequently used in applying the general FDTD algorithm.
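    The Courant stability condition mentioned at the end fixes the admissible time step for the 2-D scheme; a minimal helper (the safety factor just below unity is a common convention, assumed here):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def courant_dt_2d(dx, dy, safety=0.99):
    """Largest stable FDTD time step on a 2-D Yee grid, per the Courant
    limit dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2)), scaled by a safety factor."""
    return safety / (C0 * math.sqrt(1.0 / dx ** 2 + 1.0 / dy ** 2))

dt = courant_dt_2d(1e-3, 1e-3)  # 1 mm cells in both directions
```

    Comparing step sizes in a semi-norm, as done above, presumes every candidate step already satisfies this bound.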

  16. The optimization of essential oils supercritical CO2 extraction from Lavandula hybrida through static-dynamic steps procedure and semi-continuous technique using response surface method

    PubMed Central

    Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

    2015-01-01

    Aim: The aim of this study was to examine and evaluate crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15), 24 min (dynamic: 8×3 min), in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636
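    A response-surface optimum such as the 108.7 bar reported above is, along one factor, just the stationary point of a fitted quadratic; a minimal one-dimensional illustration (the toy yield-vs-pressure curve is an assumption, not the study's fitted model):

```python
def quadratic_vertex(p1, p2, p3):
    """Fit y = a + b*x + c*x**2 exactly through three (x, y) points and
    return the stationary point x* = -b / (2c)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Divided differences give the quadratic's coefficients.
    f12 = (y2 - y1) / (x2 - x1)
    f13 = (y3 - y1) / (x3 - x1)
    c = (f13 - f12) / (x3 - x2)
    b = f12 - c * (x1 + x2)
    return -b / (2.0 * c)

# Toy yield-vs-pressure curve peaking at 108.7 bar (assumed shape).
yield_at = lambda p: 4.77 - 0.001 * (p - 108.7) ** 2
best_pressure = quadratic_vertex(
    (90.0, yield_at(90.0)), (110.0, yield_at(110.0)), (130.0, yield_at(130.0)))
```

    A full central-composite design extends this idea to several factors at once, fitting a multivariate quadratic and locating its stationary point.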

  17. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    NASA Astrophysics Data System (ADS)

    Monsefi, Farid; Carlsson, Linus; Rančić, Milica; Otterskog, Magnus; Silvestrov, Sergei

    2014-12-01

    To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell's curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. Confined within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen and the outside of the boundary is in form of a Perfect Electric Conducting (PEC) surface. Inserted in the computer implementation, a semi-norm has been applied to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be the free-space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. The numerical instability of the algorithms can be rather easily avoided with respect to the Courant stability condition, which is frequently used in applying the general FDTD algorithm.

  18. Ceramic filters

    SciTech Connect

    Holmes, B.L.; Janney, M.A.

    1995-12-31

    Filters were formed from ceramic fibers, organic fibers, and a ceramic bond phase using a papermaking technique. The distribution of particulate ceramic bond phase was determined using a model silicon carbide system. As the ceramic fiber increased in length and diameter the distance between particles decreased. The calculated number of particles per area showed good agreement with the observed value. After firing, the papers were characterized using a biaxial load test. The strength of papers was proportional to the amount of bond phase included in the paper. All samples exhibited strain-tolerant behavior.

  19. Sub-wavelength efficient polarization filter (SWEP filter)

    DOEpatents

    Simpson, Marcus L.; Simpson, John T.

    2003-12-09

    A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light, includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS are spaced apart a distance being at least one half an optical wavelength.

  20. Preparation of Prussian Blue Submicron Particles with a Pore Structure by Two-Step Optimization for Na-Ion Battery Cathodes.

    PubMed

    Chen, Renjie; Huang, Yongxin; Xie, Man; Zhang, Qianyun; Zhang, XiaoXiao; Li, Li; Wu, Feng

    2016-06-29

    Traditional Prussian blue (Fe4[Fe(CN)6]3) synthesized by simple rapid precipitation shows poor electrochemical performance because of the presence of vacancies occupied by coordinated water. When the precipitation rate is reduced and polyvinylpyrrolidone K-30 is added as a surface active agent, the as-prepared Prussian blue has fewer vacancies in the crystal structure than in that of traditional Prussian blue. It has a well-defined face-centered-cubic structure, which can provide large channels for Na(+) insertion/extraction. The material, synthesized by slow precipitation, has an initial discharge capacity of 113 mA h g(-1) and maintains 93 mA h g(-1) under a current density of 50 mA g(-1) after 150 charge-discharge cycles. After further optimization by a chemical etching method, the complex nanoporous structure of Prussian blue has a high Brunauer-Emmett-Teller surface area and a stable structure to achieve high specific capacity and long cycle life. Surprisingly, the electrode shows an initial discharge capacity of 115 mA h g(-1) and a Coulombic efficiency of approximately 100% with capacity retention of 96% after 150 cycles. Experimental results show that Prussian blue can also be used as a cathode for Na-ion batteries. PMID:27267656

  1. Optimization of the activation and nucleation steps in the precipitation of a calcium phosphate primer layer on electrospun poly(ɛ-caprolactone).

    PubMed

    Luickx, Nathalie; Van den Vreken, Natasja; D'Oosterlinck, Willem; Van der Schueren, Lien; Declercq, Heidi; De Clerck, Karen; Cornelissen, Maria; Verbeeck, Ronald

    2015-02-01

    The present study aimed to optimize the procedure for coating electrospun poly(ε-caprolactone) (PCL) fibers with a calcium phosphate (CP) layer in order to improve their potential as bone tissue engineering scaffold. In particular, attention was paid to the reproducibility of the procedure, the morphology of the coating, and the preservation of the porous structure of the scaffold. Ethanol dipping followed by an ultrasonic assisted hydrolysis of the fiber surface with sodium hydroxide solution efficiently activated the surface. The resulting reactive groups served as nucleation points for CP precipitation, induced by alternate dipping of the samples in calcium and phosphate rich solutions. By controlling the deposition, a reproducible thin layer of CP was grown onto the fiber surface. The deposited CP was identified as calcium-deficient apatite (CDHAp). Analysis of the cell viability, adhesion, and proliferation of MC3T3-E1 cells on untreated and CDHAp coated PCL scaffolds showed that the CDHAp coating enhanced the cell response, as the number of attached cells was higher in comparison to the untreated PCL and cells on the CDHAp coated samples showed similar morphologies as the ones found in the positive control. PMID:24733786

  2. First-moment filters for spatial independent cluster processes

    NASA Astrophysics Data System (ADS)

    Swain, Anthony; Clark, Daniel E.

    2010-04-01

    A group target is a collection of individual targets which are, for example, part of a convoy of articulated vehicles or a crowd of football supporters and can be represented mathematically as a spatial cluster process. The process of detecting, tracking and identifying group targets requires the estimation of the evolution of such a dynamic spatial cluster process in time based on a sequence of partial observation sets. A suitable generalisation of the Bayes filter for this system would provide us with an optimal (but computationally intractable) estimate of a multi-group multi-object state based on measurements received up to the current time-step. In this paper, we derive the first-moment approximation of the multi-group multi-target Bayes filter, inspired by the first-moment multi-object Bayes filter derived by Mahler. Such approximations are Bayes optimal and provide estimates for the number of clusters (groups) and their positions in the group state-space, as well as estimates for the number of cluster components (object targets) and their positions in target state-space.

  3. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    SciTech Connect

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I; Rozendaal, R; Spreeuw, H; Herk, M van

    2014-06-15

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to raise automatically alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
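    The RMS-based halt criterion can be sketched as follows (the 3-D dose grids are flattened to lists here, and the threshold value is an illustrative assumption, not the clinical setting):

```python
import math

def rms_difference(planned, measured):
    """Root-mean-square of the voxel-wise difference between two dose grids."""
    assert len(planned) == len(measured)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(planned, measured)) / len(planned))

def should_halt(planned, measured, threshold_gy=0.2):
    """Trigger a beam hold when the cumulative EPID-reconstructed dose
    deviates too far (in RMS) from the planned dose."""
    return rms_difference(planned, measured) > threshold_gy
```

    In the online system this comparison must complete within the per-image budget (under 200 ms per portal image), so the voxel loop is the part worth optimizing.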

  4. Nonlinear Filtering with Fractional Brownian Motion

    SciTech Connect

    Amirdjanova, A.

    2002-12-19

    Our objective is to study a nonlinear filtering problem for the observation process perturbed by a Fractional Brownian Motion (FBM) with Hurst index 1/2 < H < 1; an expression for the optimal filter is derived.

  5. Polynomial distance classifier correlation filter for pattern recognition.

    PubMed

    Alkanhal, Mohamed; Vijaya Kumar, B V K

    2003-08-10

    We introduce what is to our knowledge a new nonlinear shift-invariant classifier called the polynomial distance classifier correlation filter (PDCCF). The underlying theory extends the original linear distance classifier correlation filter [Appl. Opt. 35, 3127 (1996)] to include nonlinear functions of the input pattern. This new filter provides a framework (for combining different classification filters) that takes advantage of the individual filter strengths. In this new filter design, all filters are optimized jointly. We demonstrate the advantage of the new PDCCF method using simulated and real multi-class synthetic aperture radar images. PMID:13678355

  6. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single-layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
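    The Gauss-Newton flavor of update described above can be illustrated on a tiny Wiener model (a 2-tap linear FIR stage followed by a static tanh nonlinearity); this is a generic sketch with assumed synthetic data, not the patented adaptive algorithm:

```python
import math
import random

def predict(w, x_win):
    """Wiener model: linear FIR stage followed by a static tanh nonlinearity."""
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x_win)))

def gauss_newton_step(w, windows, targets, lr=1.0):
    """One damped Gauss-Newton update for the 2-tap model: w -= lr * (J^T J)^-1 J^T r."""
    g = [0.0, 0.0]                   # accumulates J^T r
    H = [[1e-8, 0.0], [0.0, 1e-8]]   # accumulates J^T J, with tiny damping
    for x_win, y in zip(windows, targets):
        pred = predict(w, x_win)
        dpred = 1.0 - pred * pred    # derivative of tanh through the chain rule
        r = pred - y
        j = [dpred * x_win[0], dpred * x_win[1]]
        for a in range(2):
            g[a] += j[a] * r
            for b in range(2):
                H[a][b] += j[a] * j[b]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    dw = [(H[1][1] * g[0] - H[0][1] * g[1]) / det,
          (H[0][0] * g[1] - H[1][0] * g[0]) / det]
    return [wi - lr * di for wi, di in zip(w, dw)]

# Synthetic identification task: recover the true taps from input/output data.
rng = random.Random(1)
true_w = [0.8, -0.3]
xs = [rng.uniform(-1.0, 1.0) for _ in range(200)]
windows = [(xs[t], xs[t - 1]) for t in range(1, 200)]
targets = [predict(true_w, win) for win in windows]

w = [0.0, 0.0]
for _ in range(20):
    w = gauss_newton_step(w, windows, targets)
```

    Because the curvature matrix J^T J is used, each step points much closer to the Newton direction than a plain gradient step, which is the convergence advantage the text claims.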

  7. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
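    A centralized bootstrap particle filter is the baseline such distributed trackers approximate; a minimal 1-D RSS sketch (the log-distance path-loss constants, noise levels, and geometry are assumptions, and the ReDif-PF additionally exchanges particles between neighboring nodes):

```python
import math
import random

def rss_likelihood(z, x, sensor_x, noise_std=1.0):
    """Likelihood of an RSS reading z (dB) for emitter position x, under an
    assumed log-distance path-loss model with Gaussian measurement noise."""
    d = max(abs(x - sensor_x), 0.1)     # guard against log10(0)
    expected = -20.0 * math.log10(d)
    return math.exp(-0.5 * ((z - expected) / noise_std) ** 2)

def bootstrap_pf_step(particles, z, sensor_x, rng, proc_std=0.2):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + rng.gauss(0.0, proc_std) for p in particles]
    # Weight by the measurement likelihood.
    w = [rss_likelihood(z, p, sensor_x) for p in particles]
    if sum(w) == 0.0:
        w = [1.0] * len(particles)      # degenerate case: fall back to uniform
    # Multinomial resampling (weights need not be normalized).
    return rng.choices(particles, weights=w, k=len(particles))

rng = random.Random(7)
particles = [rng.uniform(0.0, 10.0) for _ in range(2000)]
z_true = -20.0 * math.log10(5.0)        # noiseless reading for an emitter at x = 5
for _ in range(10):
    particles = bootstrap_pf_step(particles, z_true, sensor_x=0.0, rng=rng)
estimate = sum(particles) / len(particles)
```

    The diffusion variants trade the single global weight computation for local per-node updates plus inter-node particle exchange, which is where the communication savings arise.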

  8. Organic solvent-free air-assisted liquid-liquid microextraction for optimized extraction of illegal azo-based dyes and their main metabolite from spices, cosmetics and human bio-fluid samples in one step.

    PubMed

    Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Sabzalian, Sedigheh

    2015-08-15

    Air-assisted liquid-liquid microextraction (AALLME) can be developed into an organic solvent-free, one-step microextraction method by applying ionic liquids as the extraction solvent and avoiding a centrifugation step. Herein, a novel and simple eco-friendly method, termed one-step air-assisted liquid-liquid microextraction (OS-AALLME), was developed to extract some illegal azo-based dyes (including Sudan I to IV, and Orange G) from food and cosmetic products. A series of experiments were investigated to achieve the most favorable conditions (including extraction solvent: 77μL of 1-Hexyl-3-methylimidazolium hexafluorophosphate; sample pH 6.3, without salt addition; and extraction cycles: 25 during 100s of sonication) using a central composite design strategy. Under these conditions, limits of detection, linear dynamic ranges, enrichment factors and consumptive indices were in the range of 3.9-84.8ngmL(-1), 0.013-3.1μgmL(-1), 33-39, and 0.13-0.15, respectively. The results showed that, besides its simplicity, speed, and avoidance of hazardous disperser and extraction solvents, OS-AALLME is a sufficiently sensitive and efficient method for the extraction of these dyes from complex matrices. After optimization and validation, OS-AALLME was applied to estimate the concentration of 1-amino-2-naphthol in human bio-fluids as a main reductive metabolite of the selected dyes. Levels of 1-amino-2-naphthol in plasma and urinary excretion suggested that this compound may be used as a new potential biomarker of these dyes in the human body. PMID:26149246

  9. Sticky steps inhibit step motions near equilibrium

    NASA Astrophysics Data System (ADS)

    Akutsu, Noriko

    2012-12-01

    Using a Monte Carlo method on a lattice model of a vicinal surface with a point-contact-type step-step attraction, we show that, at low temperature and near equilibrium, there is an inhibition of the motion of macrosteps. This inhibition leads to a pinning of steps without defects, adsorbates, or impurities (self-pinning of steps). We show that this inhibition of the macrostep motion is caused by faceted steps, which are macrosteps that have a smooth side surface. The faceted steps result from discontinuities in the anisotropic surface tension (the surface free energy per area). The discontinuities are brought into the surface tension by the point-contact-type step-step attraction. The point-contact-type step-step attraction also gives rise to “step droplets,” which are locally merged steps, at higher temperatures. We derive an analytic expression for the surface stiffness tensor of the vicinal surface around the (001) surface. Using the surface stiffness tensor, we show that step droplets roughen the vicinal surface. Contrary to what we expected, the step droplets slow down the step velocity due to the diminishment of kinks in the merged steps (smoothing of the merged steps).
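    Lattice Monte Carlo studies of this kind rest on the standard Metropolis acceptance rule; a minimal generic sketch (this is the textbook criterion, not the paper's specific lattice moves or energy model):

```python
import math
import random

def metropolis_accept(dE, kT, rng):
    """Metropolis criterion: always accept moves that lower the energy by -dE;
    accept uphill moves with Boltzmann probability exp(-dE / kT)."""
    return dE <= 0.0 or rng.random() < math.exp(-dE / kT)

rng = random.Random(42)
# Downhill moves are always accepted; uphill moves rarely survive at low kT.
downhill_ok = metropolis_accept(-0.5, 1.0, rng)
```

    Near equilibrium, step motion stalls precisely because the uphill moves needed to detach a faceted macrostep carry a large dE relative to kT.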

  10. Generating an optimal DTM from airborne laser scanning data for landslide mapping in a tropical forest environment

    NASA Astrophysics Data System (ADS)

    Razak, Khamarrul Azahari; Santangelo, Michele; Van Westen, Cees J.; Straatsma, Menno W.; de Jong, Steven M.

    2013-05-01

    Landslide inventory maps are fundamental for assessing landslide susceptibility, hazard, and risk. In tropical mountainous environments, mapping landslides is difficult as rapid and dense vegetation growth obscures landslides soon after their occurrence. Airborne laser scanning (ALS) data have been used to construct the digital terrain model (DTM) under dense vegetation, but its reliability for landslide recognition in the tropics remains surprisingly unknown. This study evaluates the suitability of ALS for generating an optimal DTM for mapping landslides in the Cameron Highlands, Malaysia. For the bare-earth extraction, we used hierarchical robust filtering algorithm and a parameterization with three sequential filtering steps. After each filtering step, four interpolations techniques were applied, namely: (i) the linear prediction derived from the SCOP++ (SCP), (ii) the inverse distance weighting (IDW), (iii) the natural neighbor (NEN) and (iv) the topo-to-raster (T2R). We assessed the quality of 12 DTMs in two ways: (1) with respect to 448 field-measured terrain heights and (2) based on the interpretability of landslides. The lowest root-mean-square error (RMSE) was 0.89 m across the landscape using three filtering steps and linear prediction as interpolation method. However, we found that a less stringent DTM filtering unveiled more diagnostic micro-morphological features, but also retained some of vegetation. Hence, a combination of filtering steps is required for optimal landslide interpretation, especially in forested mountainous areas. IDW was favored as the interpolation technique because it combined computational times more reasonably without adding artifacts to the DTM than T2R and NEN, which performed relatively well in the first and second filtering steps, respectively. The laser point density and the resulting ground point density after filtering are key parameters for producing a DTM applicable to landslide identification. 
The results showed that the
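
    The IDW interpolation mentioned above can be illustrated with a short sketch (generic inverse distance weighting over hypothetical laser ground returns; not the implementation used in the study):

    ```python
    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
        """Inverse distance weighting: each query height is a weighted mean
        of the known ground points, with weights proportional to 1/d^power."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w * z_known).sum(axis=1) / w.sum(axis=1)

    # Four hypothetical ground returns (x, y) with heights, one query point
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    z = np.array([10.0, 12.0, 10.0, 12.0])
    heights = idw(pts, z, np.array([[0.5, 0.5]]))  # equidistant -> mean height 11.0
    ```

    Because IDW is a purely local weighted average, it cannot overshoot the range of its input heights, which is one reason it tends not to introduce artifacts into a DTM.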

  11. ADVANCED HOT GAS FILTER DEVELOPMENT

    SciTech Connect

    E.S. Connolly; G.D. Forsythe

    2000-09-30

    DuPont Lanxide Composites, Inc. undertook a sixty-month program, under DOE Contract DEAC21-94MC31214, in order to develop hot gas candle filters from a patented material technology known as PRD-66. The goal of this program was to extend the development of this material as a filter element and fully assess the capability of this technology to meet the needs of Pressurized Fluidized Bed Combustion (PFBC) and Integrated Gasification Combined Cycle (IGCC) power generation systems at commercial scale. The principal objective of Task 3 was to build on the initial PRD-66 filter development, optimize its structure, and evaluate basic material properties relevant to the hot gas filter application. Initially, this consisted of an evaluation of an advanced filament-wound core structure that had been designed to produce an effective bulk filter underneath the barrier filter formed by the outer membrane. The basic material properties to be evaluated (as established by the DOE/METC materials working group) would include mechanical, thermal, and fracture toughness parameters for both new and used material, for the purpose of building a material database consistent with what is being done for the alternative candle filter systems. Task 3 was later expanded to include analysis of PRD-66 candle filters, which had been exposed to actual PFBC conditions, development of an improved membrane, and installation of equipment necessary for the processing of a modified composition. Task 4 would address essential technical issues involving the scale-up of PRD-66 candle filter manufacturing from prototype production to commercial scale manufacturing. The focus would be on capacity (as it affects the ability to deliver commercial order quantities), process specification (as it affects yields, quality, and costs), and manufacturing systems (e.g. QA/QC, materials handling, parts flow, and cost data acquisition). 
Any filters fabricated during this task would be used for product qualification tests

  12. SU-E-I-62: Assessing Radiation Dose Reduction and CT Image Optimization Through the Measurement and Analysis of the Detector Quantum Efficiency (DQE) of CT Images Using Different Beam Hardening Filters

    SciTech Connect

    Collier, J; Aldoohan, S; Gill, K

    2014-06-01

    Purpose: Reducing patient dose while maintaining (or even improving) image quality is one of the foremost goals in CT imaging. To this end, we consider the feasibility of optimizing CT scan protocols in conjunction with the application of different beam-hardening filtrations and assess this augmentation through noise-power spectrum (NPS) and detector quantum efficiency (DQE) analysis. Methods: American College of Radiology (ACR) and Catphan phantoms (The Phantom Laboratory) were scanned with a 64 slice CT scanner with additional filtration of varying thickness and composition (e.g., copper, nickel, tantalum, titanium, and tungsten) applied. A MATLAB-based code was employed to calculate the image noise NPS. The Catphan Image Owl software suite was then used to compute the modulation transfer function (MTF) responses of the scanner. The DQE for each additional filter, including the inherent filtration, was then computed from these values. Finally, CT dose index (CTDIvol) values were obtained for each applied filtration through the use of a 100 mm pencil ionization chamber and CT dose phantom. Results: NPS, MTF, and DQE values were computed for each applied filtration and compared to the reference case of inherent beam-hardening filtration only. Results showed that the NPS values were reduced between 5 and 12% compared to the inherent filtration case. Additionally, CTDIvol values were reduced between 15 and 27% depending on the composition of filtration applied. However, no noticeable changes in image contrast-to-noise ratios were noted. Conclusion: The reduction in the quanta noise section of the NPS profile found in this phantom-based study is encouraging. The reduction in both noise and dose through the application of beam-hardening filters is reflected in our phantom image quality. However, further investigation is needed to ascertain the applicability of this approach to reducing patient dose while maintaining diagnostically acceptable image qualities in a
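
    The NPS and DQE quantities used in this record can be illustrated numerically. A minimal sketch on synthetic flat-field noise (not the study's MATLAB code; the DQE expression below is the textbook MTF²/(q·NPS) form, with q the photon fluence):

    ```python
    import numpy as np

    def nps_2d(noise_image, pixel_area=1.0):
        """2-D noise-power spectrum of a zero-mean noise region:
        |FFT|^2 scaled by pixel area over the number of pixels."""
        ny, nx = noise_image.shape
        return np.abs(np.fft.fft2(noise_image)) ** 2 * pixel_area / (nx * ny)

    def dqe(mtf, nps, q):
        """Textbook frequency-domain DQE: MTF(f)^2 / (q * NPS(f))."""
        return mtf ** 2 / (q * nps)

    rng = np.random.default_rng(0)
    flat = rng.normal(0.0, 2.0, size=(64, 64))   # synthetic flat-field noise
    flat -= flat.mean()
    nps = nps_2d(flat)
    # By Parseval's theorem, the mean of this 2-D NPS equals the image variance
    ```

    A beam-hardening filter that lowers the NPS at fixed MTF raises the DQE, which is the mechanism the abstract is probing.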

  13. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg² in Five Filters

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-06-25

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
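
    The weighted co-addition with an accompanying weight image can be sketched in a toy form (inverse-variance weights standing in for the pipeline's seeing/transparency/noise weighting, which is more involved):

    ```python
    import numpy as np

    def coadd(frames, variances):
        """Weighted co-addition: each frame contributes with weight 1/variance.
        Also returns the per-pixel weight map released with the science image."""
        frames = np.asarray(frames, float)
        w = 1.0 / np.asarray(variances, float)
        w3 = w[:, None, None]
        science = (w3 * frames).sum(axis=0) / w.sum()
        weight_map = np.full(frames.shape[1:], w.sum())
        return science, weight_map

    # Two hypothetical single-epoch frames: constant levels, differing noise
    f1, f2 = np.full((4, 4), 1.0), np.full((4, 4), 3.0)
    science, wmap = coadd([f1, f2], [1.0, 3.0])   # second frame is noisier
    ```

    For N equal-quality epochs, such stacking deepens the 5σ limit by roughly 2.5·log10(√N) mag, i.e. of order 2.3 mag for 70-90 epochs; the reported 1.9-2.2 mag gain is consistent with that once unequal frame quality is accounted for.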

  14. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg{sup 2} IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  15. Optimization of a coupled hydrology-crop growth model through the assimilation of observed soil moisture and leaf area index values using an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Pauwels, Valentijn R. N.; Verhoest, Niko E. C.; de Lannoy, Gabriëlle J. M.; Guissard, Vincent; Lucau, Cozmin; Defourny, Pierre

    2007-04-01

    It is well known that the presence and development stage of vegetation largely influences the soil moisture content. In turn, soil moisture availability is of major importance for the development of vegetation. The objective of this paper is to assess to what extent the results of a fully coupled hydrology-crop growth model can be optimized through the assimilation of observed leaf area index (LAI) or soil moisture values. For this purpose the crop growth module of the World Food Studies (WOFOST) model has been coupled to a fully process-based water and energy balance model (TOPMODEL-Based Land-Atmosphere Transfer Scheme (TOPLATS)). LAI and soil moisture observations from 18 fields in the loamy region in the central part of Belgium have been used to thoroughly validate the coupled model. An observing system simulation experiment (OSSE) has been performed in order to assess whether soil moisture and LAI observations with realistic uncertainties are useful for data assimilation purposes. Under realistic conditions (biweekly observations with a noise level of 5 volumetric percent for soil moisture and 0.5 for LAI) an improvement in the model results can be expected. The results show that the modeled LAI values are not sensitive to the assimilation of soil moisture values before the initiation of crop growth. Also, the modeled soil moisture profile does not necessarily improve through the assimilation of LAI values during the growing season. In order to improve both the vegetation and soil moisture state of the model, observations of both variables need to be assimilated.
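
    The ensemble Kalman filter named in the title can be sketched as a single stochastic analysis step (a generic EnKF update on a toy scalar state, not the paper's exact assimilation scheme):

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_err_std, H, rng):
        """Stochastic EnKF analysis step.
        ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
        n = ensemble.shape[0]
        X = ensemble - ensemble.mean(axis=0)          # state anomalies
        Y = X @ H.T                                   # observation-space anomalies
        R = np.eye(len(obs)) * obs_err_std ** 2       # observation error covariance
        K = (X.T @ Y / (n - 1)) @ np.linalg.inv(Y.T @ Y / (n - 1) + R)
        perturbed_obs = obs + rng.normal(0.0, obs_err_std, size=(n, len(obs)))
        return ensemble + (perturbed_obs - ensemble @ H.T) @ K.T

    # Toy example: one directly observed soil-moisture-like state variable
    rng = np.random.default_rng(0)
    prior = rng.normal(0.2, 0.05, size=(200, 1))      # prior ensemble
    H = np.array([[1.0]])
    posterior = enkf_update(prior, np.array([0.35]), 0.01, H, rng)
    ```

    With an accurate observation (error 0.01 vs. a prior spread of 0.05), the posterior ensemble mean moves most of the way toward the observed value, which is the behavior the OSSE in the abstract exploits.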

  16. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-optimized Co-adds over 300 deg2 in Five Filters

    NASA Astrophysics Data System (ADS)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  17. Adaptive particle filtering

    NASA Astrophysics Data System (ADS)

    Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús

    2006-05-01

    Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.
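
    The particle-filter core of such a tracker can be sketched as one predict/update/resample cycle for a 1-D position (an illustrative bootstrap filter; the ALPS coupling with an MHT hypothesis tree is not reproduced):

    ```python
    import numpy as np

    def pf_step(particles, weights, z, motion_std, meas_std, rng):
        """One bootstrap particle filter cycle for a 1-D tracker."""
        # Predict: propagate particles through a random-walk motion model
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: reweight by the Gaussian measurement likelihood
        weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size drops below half
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    rng = np.random.default_rng(42)
    particles = rng.normal(0.0, 5.0, size=500)        # diffuse initial belief
    weights = np.full(500, 1.0 / 500)
    for _ in range(20):                               # target sits at x = 10
        particles, weights = pf_step(particles, weights, 10.0, 0.5, 1.0, rng)
    estimate = float(np.sum(particles * weights))
    ```

    The resampling step is where a tree-based MHT can be tightly coupled, as the abstract describes: hypothesis bookkeeping decides which particles survive.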

  18. Analytic study on the effects of the number of MLC segments and the least segment area on step-and-shoot head-and-neck IMRT planning using direct machine parameter optimization

    NASA Astrophysics Data System (ADS)

    Cheong, Kwang-Ho; Kang, Sei-Kwon; Lee, MeYeon; Kim, Haeyoung; Bae, Hoonsik; Park, SoAh; Hwang, Taejin; Kim, KyoungJu; Han, Taejin

    2013-05-01

    In this study, we present the concurrent effects of the number of segments (NS) and the least segment area (LSA) for step-and-shoot head-and-neck intensity-modulated radiation therapy (IMRT) planning using the direct machine parameter optimization (DMPO), on which basis we suggest the optimal NS and LSA ranges. We selected three head-and-neck patients who had received IMRT via the simultaneous integrated boost (SIB) technique and classified them as easy, intermediate, and difficult cases. We formulated a benchmark plan and made 11 additional plans by re-optimizing the benchmark by varying the NS and the LSA for each case. Clinical and physical plan-quality evaluation parameters were considered separately: the conformality index (CI), the homogeneity index (HI) and the maximum or mean doses for the organs-at-risk were the clinical factors, and these were summarized as plan-quality parameter, Q. The modulation index (MI), the total monitor units (MUs), and the final composite cost function F were employed as parameters in the evaluation of the physical aspects. A 2-way analysis of variance (2-way ANOVA) was used to determine the effects of the NS and the LSA concurrently. Pearson's correlations among the total MU, MI, F, and Q were examined as well. Overall plan-efficiency factor ɛ was defined to estimate the optimal NS and LSA by considering the plan's quality and the beam delivery efficiency together. Plans with simple targets or a small number of beams (NB) were affected by the LSA whereas plans with complex targets or large NB were affected by the NS. Moreover, smaller NS and smaller LSA were advantageous for simple plans whereas larger NS and smaller LSA were beneficial for complex plans. When we consider the plan's quality and the beam delivery efficiency, {NS = 60-80, LSA = 8-12 cm2} are the proper ranges for head-and-neck IMRT planning with DMPO; however, the combination may differ based on the complexity of a given plan.

  19. Polarization filtering of SAR data

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Van Zyl, Jakob J.

    1989-01-01

    A theoretical analysis of polarization filtering for the bistatic case is developed for optimum discrimination between two types of targets. The resulting method is half analytical and half numerical. Because it is based on the Stokes matrix representation, the targets of interest can be extended targets. The scattered field from such targets is partially polarized. This method is then applied to the monostatic case with numerical examples relying on the JPL (Jet Propulsion Laboratory) full-polarimetric L-band radar data. A matched filter to maximize the power ratio between urban and natural targets is developed. The results show that the same filter is optimal for both ocean and forest targets as natural targets.
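
    Maximizing a power ratio between two target classes by choice of filter is, in the simplest linear setting, a generalized eigenvalue problem. A minimal numerical sketch (hypothetical 2×2 power matrices, not the JPL polarimetric data):

    ```python
    import numpy as np

    def max_power_ratio_filter(A, B):
        """Filter w maximizing (w^T A w)/(w^T B w): the leading
        generalized eigenvector of (A, B)."""
        vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
        w = vecs[:, np.argmax(vals.real)].real
        return w / np.linalg.norm(w)

    # Hypothetical power matrices for "urban" (A) vs. "natural" (B) returns
    A = np.array([[4.0, 1.0], [1.0, 2.0]])
    B = np.eye(2)
    w = max_power_ratio_filter(A, B)
    ratio = (w @ A @ w) / (w @ B @ w)   # achieved urban/natural power ratio
    ```

    With B = I the achieved ratio is the largest eigenvalue of A, here 3 + √2; the Stokes-matrix formulation in the paper generalizes this to partially polarized extended targets.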

  20. The use of filter media to determine filter cleanliness

    NASA Astrophysics Data System (ADS)

    Van Staden, S. J.; Haarhoff, J.

    It is generally believed that a sand filter starts its life with new, perfectly clean media, which becomes gradually clogged with each filtration cycle, eventually getting to a point where either head loss or filtrate quality starts to deteriorate. At this point the backwash cycle is initiated and, through the combined action of air and water, returns the media to its original perfectly clean state. Reality, however, dictates otherwise. Many treatment plants visited a decade or more after commissioning are found to have unacceptably dirty filter sand and backwash systems incapable of returning the filter media to a desired state of cleanliness. In some cases, these problems are common ones encountered in filtration plants but many reasons for media deterioration remain elusive, falling outside of these common problems. The South African conditions of highly eutrophic surface waters at high temperatures, however, exacerbate the problems with dirty filter media. Such conditions often lead to the formation of biofilm in the filter media, which is shown to inhibit the effective backwashing of sand and carbon filters. A systematic investigation into filter media cleanliness was therefore started in 2002, ending in 2005, at the University of Johannesburg (the then Rand Afrikaans University). This involved media from eight South African Water Treatment Plants, varying between sand and sand-anthracite combinations and raw water types from eutrophic through turbid to low-turbidity waters. Five states of cleanliness and four fractions of specific deposit were identified relating to in situ washing, column washing, cylinder inversion and acid-immersion techniques. These were measured and the results compared to acceptable limits for specific deposit, as determined in previous studies, though expressed in kg/m3. These values were used to determine the state of the filters. 
In order to gain greater insight into the composition of the specific deposits stripped from the media, a

  1. An online novel adaptive filter for denoising time series measurements.

    PubMed

    Willis, Andrew J

    2006-04-01

    A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562
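
    As a loose illustration of PCA-based time-series denoising (an SSA-style sliding-window sketch, not the paper's online algorithm):

    ```python
    import numpy as np

    def pca_denoise(signal, window=16, n_components=2):
        """Embed a 1-D series in overlapping windows, keep only the leading
        principal components, and average the overlapping reconstructions."""
        signal = np.asarray(signal, float)
        n = len(signal)
        X = np.array([signal[i:i + window] for i in range(n - window + 1)])
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
        # Diagonal averaging back onto the original time axis
        out = np.zeros(n)
        counts = np.zeros(n)
        for i, row in enumerate(Xr):
            out[i:i + window] += row
            counts[i:i + window] += 1
        return out / counts

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 4 * np.pi, 200)
    clean = np.sin(t)
    noisy = clean + rng.normal(0.0, 0.3, size=200)
    denoised = pca_denoise(noisy)
    ```

    A sine embeds into a rank-2 trajectory matrix, so two components recover the signal while most of the noise, spread across all 16 components, is discarded.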

  2. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.

  3. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
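
    The threshold-decomposition identity these patents exploit — binarize at every grey level, apply a linear neighborhood sum followed by a point threshold to each slice, then stack the slices — can be checked in a few lines (a 1-D software sketch of the optical architecture):

    ```python
    import numpy as np

    def median_via_threshold_decomposition(x, k=3):
        """Median filter realized as: threshold decomposition -> linear
        space-invariant filtering (box sum) -> point thresholding -> stacking."""
        x = np.asarray(x, int)
        pad = k // 2
        out = np.zeros(len(x), int)
        for t in range(1, int(x.max()) + 1):
            b = (x >= t).astype(int)                        # binary threshold slice
            sums = np.convolve(np.pad(b, pad, mode='edge'),
                               np.ones(k, int), mode='valid')  # linear filtering step
            out += (sums > k // 2).astype(int)              # point threshold (majority)
        return out

    result = median_via_threshold_decomposition([1, 3, 2, 5, 4])  # running median
    ```

    Raising or lowering the point threshold from the majority value turns the same linear stage into a minimum or maximum filter, exactly as the patent claims for its single architecture.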

  4. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is unfeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.

  5. HEPA filter dissolution process

    SciTech Connect

    Brewer, K.N.; Murphy, J.A.

    1992-12-31

    This invention is comprised of a process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

  6. Recirculating electric air filter

    DOEpatents

    Bergman, Werner

    1986-01-01

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  7. Hepa filter dissolution process

    DOEpatents

    Brewer, Ken N.; Murphy, James A.

    1994-01-01

    A process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

  8. HEPA filter dissolution process

    DOEpatents

    Brewer, K.N.; Murphy, J.A.

    1994-02-22

    A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.

  9. Recirculating electric air filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  10. Metal-dielectric metameric filters for optically variable devices

    NASA Astrophysics Data System (ADS)

    Xiao, Lixiang; Chen, Nan; Deng, Zihao; Wang, Xiaozhong; Guo, Rong; Bu, Yikun

    2016-01-01

    A pair of metal-dielectric metameric filters that could create a hidden image was presented for the first time. The structure of the filters is simple, with only six layers for filter A and five layers for filter B. The prototype filters were designed by using the film color target optimization method and the designed results show that, at normal observation angle, the reflected colors of the pair of filters are both green and the color difference index between them is only 0.9017. At an observation angle of 60°, filter A is violet and filter B is blue. The filters were fabricated by a remote plasma sputtering process and the experimental results were in accordance with the designs.

  11. Properties of multilayer filters

    NASA Technical Reports Server (NTRS)

    Baumeister, P. W.

    1973-01-01

    New methods were investigated of using optical interference coatings to produce bandpass filters for the spectral region 110 nm to 200 nm. The types of filter are: triple cavity metal dielectric filters; all dielectric reflection filters; and all dielectric Fabry Perot type filters. The latter two types use thorium fluoride and either cryolite films or magnesium fluoride films in the stacks. The optical properties of the thorium fluoride were also measured.

  12. Recent progress in plasmonic colour filters for image sensor and multispectral applications

    NASA Astrophysics Data System (ADS)

    Pinton, Nadia; Grant, James; Choubey, Bhaskar; Cumming, David; Collins, Steve

    2016-04-01

    Using nanostructured thin metal films as colour filters offers several important advantages, in particular high tunability across the entire visible spectrum and some of the infrared region, and also compatibility with conventional CMOS processes. Since 2003, the field of plasmonic colour filters has evolved rapidly and several different designs and materials, or combinations of materials, have been proposed and studied. In this paper we present a simulation study for a single-step lithographically patterned multilayer structure able to provide competitive transmission efficiencies above 40% together with FWHM of the order of 30 nm across the visible spectrum. The total thickness of the proposed filters is less than 200 nm and is constant for every wavelength, unlike e.g. resonant cavity-based filters such as Fabry-Perot that require a variable stack of several layers according to the working frequency, and their passband characteristics are entirely controlled by changing the lithographic pattern. It will also be shown that a key to obtaining narrow-band optical response lies in the dielectric environment of a nanostructure and that it is not necessary to have a symmetric structure to ensure good coupling between the SPPs at the top and bottom interfaces. Moreover, an analytical method to evaluate the periodicity, given a specific structure and a desirable working wavelength, will be proposed and its accuracy demonstrated. This method conveniently eliminates the need to optimize the design of a filter numerically, i.e. by running several time-consuming simulations with different periodicities.

  13. A Filtering Method For Gravitationally Stratified Flows

    SciTech Connect

    Gatti-Bono, Caroline; Colella, Phillip

    2005-04-25

    Gravity waves arise in gravitationally stratified compressible flows at low Mach and Froude numbers. These waves can have a negligible influence on the overall dynamics of the fluid but, for numerical methods where the acoustic waves are treated implicitly, they impose a significant restriction on the time step. A way to alleviate this restriction is to filter out the modes corresponding to the fastest gravity waves so that a larger time step can be used. This paper presents a filtering strategy of the fully compressible equations based on normal mode analysis that is used throughout the simulation to compute the fast dynamics and that is able to damp only fast gravity modes.

  14. Improvement of Bit Error Rate in Holographic Data Storage Using the Extended High-Frequency Enhancement Filter

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil

    2013-09-01

    Optimized image restoration is suggested in angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and a Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, more than 3 times faster calculation was achieved compared with image restoration with PSF upscaling, owing to the reduced system processing and calculation load.

  15. Information Filtering Using Kullback-Leibler Divergence

    NASA Astrophysics Data System (ADS)

    Yanagimoto, Hidekazu; Omatu, Sigeru

    In this paper we describe an information filtering system using the Kullback-Leibler divergence. To cope with the information flood, many information filtering systems have been proposed up to now. Since almost all information filtering systems are developed with techniques of information retrieval, machine learning, and pattern recognition, they often use a linear function as the discriminant function. To classify information in the field of document classification more precisely, systems have been reported which use a non-linear function as the discriminant function. The proposed method uses the Kullback-Leibler divergence as the discriminant function, which represents the user's interest in the information filtering system. To identify the optimal discriminant function from documents which a user evaluates, we determine the optimal function using a genetic algorithm. We compare the present method with one using a linear discriminant function and confirm the effectiveness of the proposed method.
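
    A minimal sketch of KL divergence as a relevance score over term distributions (hypothetical distributions; the paper's GA-based profile learning is not reproduced):

    ```python
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """D_KL(p || q) between two discrete term distributions."""
        p = np.asarray(p, float)
        q = np.asarray(q, float)
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    def relevance(doc_dist, profile_dist):
        """Higher score = document distribution closer to the user profile."""
        return -kl_divergence(doc_dist, profile_dist)

    profile = [0.5, 0.3, 0.2]          # hypothetical user term distribution
    on_topic = [0.45, 0.35, 0.20]
    off_topic = [0.10, 0.20, 0.70]
    ```

    Unlike a linear discriminant, this score is a non-linear function of the document's term frequencies, which is the property the paper builds on.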

  16. ARRANGEMENT FOR REPLACING FILTERS

    DOEpatents

    Blomgren, R.A.; Bohlin, N.J.C.

    1957-08-27

    An improved filtered air exhaust system which may be continually operated during the replacement of the filters without the escape of unfiltered air is described. This is accomplished by hermetically sealing the box-like filter containers in a rectangular tunnel with neoprene-covered sponge rubber sealing rings coated with a silicone-impregnated pneumatic grease. The tunnel through which the filters are pushed is normal to the exhaust air duct. A number of unused filters are in line behind the filters in use, and are moved by a hydraulic ram so that a fresh filter is positioned in the air duct. The used filter is pushed into a waiting receptacle and is suitably disposed of. This device permits a rapid and safe replacement of a radiation-contaminated filter without interruption to the normal flow of exhaust air.

  17. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  18. Laboratory comparison of continuous vs. binary phase-mostly filters

    NASA Technical Reports Server (NTRS)

    Monroe, Stanley E., Jr.; Knopp, Jerome; Juday, Richard D.

    1989-01-01

    Recent developments in spatial light modulators have led to devices capable of continuous phase modulation, even if only over a limited range. One of these devices, the deformable mirror device (DMD), is used to compare the relative merits of binary and partially continuous phase filters in a specific problem of pattern recognition by optical correlation. Each filter was physically limited to only about a radian of modulation. Researchers have predicted that for low input noise levels, continuous phase-only filters should have a higher absolute correlator peak output than the corresponding binary filters, as well as a larger SNR. When continuous and binary filters were first implemented on the DMD, they exhibited the same performance; an ad hoc filter optimization procedure was therefore developed for use in the laboratory. The optimized continuous filter gave higher correlation peaks than did an independently optimized binary filter. Background behavior in the correlation plane was similar for the two filters, and thus the SNR showed the same improvement for the continuous filter. A phasor diagram analysis and computer simulation have explained part of the optimization procedure's success.

  19. Corrosion resistant filter unit

    SciTech Connect

    Gentry, J.M.

    1992-02-18

    This patent describes a fluid filter assembly adapted for the filtration of corrosive fluid to be injected into a well bore at pressure levels which may exceed 10,000 pounds per square inch. It comprises: a frame assembly for the mounting of a portion of the fluid filter assembly therein; a plurality of filter pods, the plurality of filter pods forming at least two banks of filter pods, each bank having at least two filter pods therein, each bank of the filter pods being supported by one or more supports of the plurality of supports secured to selected struts of the frame assembly; an inlet manifold to direct the corrosive fluid to the plurality of filter pods, the inlet manifold being interconnected to the banks of filter pods formed by the filter pods whereby flow of the corrosive fluid can be directed to each bank of the filter pods; an outlet manifold to direct the corrosive fluid from the filter pods, the outlet manifold being interconnected to the banks of filter pods formed by the filter pods; a first valve means to control the flow of the corrosive fluid between banks of filter pods formed by the filter pods whereby the flow of the corrosive fluid can be selectively directed to each bank of the filter pods; a second valve means to selectively control the flow of the corrosive fluid between the inlet manifold and the outlet manifold; and union means for interconnecting the filter pods, inlet manifold and outlet manifold, each of the union means including mechanical connection means and internal seal means for isolating the corrosive fluids from the mechanical connection means.

  20. Filter Design With Secrecy Constraints: The MIMO Gaussian Wiretap Channel

    NASA Astrophysics Data System (ADS)

    Reboredo, Hugo; Xavier, Joao; Rodrigues, Miguel R. D.

    2013-08-01

    This paper considers the problem of filter design with secrecy constraints, where two legitimate parties (Alice and Bob) communicate in the presence of an eavesdropper (Eve), over a Gaussian multiple-input-multiple-output (MIMO) wiretap channel. This problem involves designing, subject to a power constraint, the transmit and the receive filters which minimize the mean-squared error (MSE) between the legitimate parties whilst assuring that the eavesdropper MSE remains above a certain threshold. We consider a general MIMO Gaussian wiretap scenario, where the legitimate receiver uses a linear Zero-Forcing (ZF) filter and the eavesdropper receiver uses either a ZF or an optimal linear Wiener filter. We provide a characterization of the optimal filter designs by demonstrating the convexity of the optimization problems. We also provide generalizations of the filter designs from the scenario where the channel state is known to all the parties to the scenario where there is uncertainty in the channel state. A set of numerical results illustrates the performance of the novel filter designs, including the robustness to channel modeling errors. In particular, we assess the efficacy of the designs in guaranteeing not only a certain MSE level at the eavesdropper, but also in limiting the error probability at the eavesdropper. We also assess the impact of the filter designs on the achievable secrecy rates. The penalty induced by the fact that the eavesdropper may use the optimal non-linear receive filter rather than the optimal linear one is also explored in the paper.
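
    As an illustration of the two linear receive filters involved (a generic sketch, not the paper's secrecy-constrained design; the channel dimensions and noise variance are made up), the zero-forcing and Wiener (LMMSE) filters for a Gaussian MIMO channel y = Hx + n can be computed and compared in MSE:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n_samp = 4, 4, 20000

H = rng.standard_normal((n_y, n_x))   # known MIMO channel matrix
sigma2 = 0.5                          # noise variance, x ~ N(0, I), n ~ N(0, sigma2 I)

# Zero-forcing: invert the channel, ignoring noise.
W_zf = np.linalg.pinv(H)
# Wiener/LMMSE: W = H^T (H H^T + sigma2 I)^{-1}, balancing inversion against noise.
W_mmse = np.linalg.solve(H @ H.T + sigma2 * np.eye(n_y), H).T

x = rng.standard_normal((n_x, n_samp))
y = H @ x + np.sqrt(sigma2) * rng.standard_normal((n_y, n_samp))

mse = lambda W: float(np.mean((W @ y - x) ** 2))
assert mse(W_mmse) <= mse(W_zf)   # the Wiener filter never does worse in MSE
```

    The gap between the two is what the eavesdropper gives up by being restricted to a ZF rather than a Wiener receiver.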

  1. Rigid porous filter

    DOEpatents

    Chiang, Ta-Kuan; Straub, Douglas L.; Dennis, Richard A.

    2000-01-01

    The present invention involves a porous rigid filter comprising a plurality of concentric filtration elements having internal flow passages and forming external flow passages therebetween. The present invention also involves a pressure vessel containing the filter for the removal of particulates from high-pressure particulate-containing gases, and further involves a method for using the filter to remove such particulates. The present filter has the advantage of requiring fewer filter elements due to the high surface area-to-volume ratio provided by the filter, requires a smaller pressure vessel, and exhibits enhanced mechanical design properties, improved cleaning properties, configuration options, modularity, and ease of fabrication.

  2. Robust depth filter sizing for centrate clarification.

    PubMed

    Lutz, Herb; Chefer, Kate; Felo, Michael; Cacace, Benjamin; Hove, Sarah; Wang, Bin; Blanchard, Mark; Oulundsen, George; Piper, Rob; Zhao, Xiaoyang

    2015-01-01

    Cellulosic depth filters embedded with diatomaceous earth are widely used to remove colloidal cell debris from centrate as a secondary clarification step during the harvest of mammalian cell culture fluid. The high cost associated with process failure in a GMP (Good Manufacturing Practice) environment highlights the need for a robust process-scale depth filter sizing that allows for (1) stochastic batch-to-batch variations from filter media, bioreactor feed and operation, and (2) systematic scaling differences in average performance between filter sizes and formats. Matched-lot depth filter media tested at the same conditions with consecutive batches of the same molecule were used to assess the sources and magnitudes of process variability. Depth filter sizing safety factors of 1.2-1.6 allow a filtration process to compensate for random batch-to-batch process variations. Matched-lot depth filter media in four different devices tested simultaneously at the same conditions with a common feed were used to assess scaling effects. All filter devices showed <11% capacity difference, and the Pod format devices showed no statistically significant capacity differences. PMID:26518411
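
    The sizing rule implied above can be written down directly. Only the 1.2-1.6 safety-factor range comes from the study; the batch volume and small-scale capacity below are hypothetical numbers for illustration.

```python
def depth_filter_area(batch_volume_l, capacity_l_per_m2, safety_factor=1.4):
    """Installed filter area (m^2) from small-scale throughput capacity.
    The safety factor absorbs random batch-to-batch variation."""
    if safety_factor < 1.0:
        raise ValueError("safety factor must be >= 1")
    return batch_volume_l * safety_factor / capacity_l_per_m2

# Hypothetical example: a 2000 L harvest, 100 L/m^2 measured capacity.
area = depth_filter_area(2000, 100, safety_factor=1.4)
assert abs(area - 28.0) < 1e-9   # m^2 of filter area to install
```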

  3. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed. PMID:2180633
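
    A minimal sketch of the LMS update described above, applied to a toy system-identification problem (the filter length, step size, and "unknown" system are arbitrary choices, not from the article):

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """LMS adaptive FIR filter: adapt w so that w . x_window tracks d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        window = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ window
        e = d[n] - y[n]                          # instantaneous error
        w += 2 * mu * e * window                 # LMS coefficient update
    return y, w

# Identify an unknown 4-tap FIR system from its input/output data.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h_true)[:len(x)]
y, w = lms_filter(x, d, n_taps=4, mu=0.01)
assert np.max(np.abs(w - h_true)) < 0.05   # weights converge to the true taps
```

    In a biomedical setting, d would be a noisy recording and x a reference channel (e.g., mains interference), with d - y as the cleaned output.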

  4. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, Harry S.; Thompson, Robert C.; Hubbard, Charles W.; Perkins, Richard W.

    1997-01-01

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant.

  5. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, H.S.; Thompson, R.C.; Hubbard, C.W.; Perkins, R.W.

    1997-03-25

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant. 5 figs.

  6. Cordierite silicon nitride filters

    SciTech Connect

    Sawyer, J.; Buchan, B. ); Duiven, R.; Berger, M. ); Cleveland, J.; Ferri, J. )

    1992-02-01

    The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased, with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions, and reporting the results. In Phase I, Cordierite Silicon Nitride (CSN) was developed and tested for permeability and strength. Target values for each of these parameters were established early in the program. The values were met by the material development effort in Phase I. The crossflow filter design effort proceeded by developing a macroscopic design based on required surface area and estimated stresses. Then the thermal and pressure stresses were estimated using finite element analysis. In Phase II of this program, the filter manufacturing technique was developed, and the manufactured filters were tested. The technique developed involved press-bonding extruded tiles to form a filter, producing a monolithic filter after sintering. Filters manufactured using this technique were tested at Acurex and at the Westinghouse Science and Technology Center. The filters did not delaminate during testing and operated with high collection efficiency and good cleanability. Further development in the areas of sintering and filter design is recommended.

  7. Evidence-Based Evaluation of Inferior Vena Cava Filter Complications Based on Filter Type.

    PubMed

    Deso, Steven E; Idakoji, Ibrahim A; Kuo, William T

    2016-06-01

    Many inferior vena cava (IVC) filter types, along with their specific risks and complications, are not recognized. The purpose of this study was to evaluate the various FDA-approved IVC filter types to determine device-specific risks, as a way to help identify patients who may benefit from ongoing follow-up versus prompt filter retrieval. An evidence-based electronic search (FDA Premarket Notification, MEDLINE, FDA MAUDE) was performed to identify all IVC filter types and device-specific complications from 1980 to 2014. Twenty-three IVC filter types (14 retrievable, 9 permanent) were identified. The devices were categorized as follows: conical (n = 14), conical with umbrella (n = 1), conical with cylindrical element (n = 2), biconical with cylindrical element (n = 2), helical (n = 1), spiral (n = 1), and complex (n = 1). Purely conical filters were associated with the highest reported risks of penetration (90-100%). Filters with cylindrical or umbrella elements were associated with the highest reported risk of IVC thrombosis (30-50%). Conical Bard filters were associated with the highest reported risks of fracture (40%). The various FDA-approved IVC filter types were evaluated for device-specific complications based on best current evidence. This information can be used to guide and optimize clinical management in patients with indwelling IVC filters. PMID:27247477

  8. Projection filters for modal parameter estimate for flexible structures

    NASA Technical Reports Server (NTRS)

    Huang, Jen-Kuang; Chen, Chung-Wen

    1987-01-01

    Single-mode projection filters are developed for eigensystem parameter estimates from both analytical results and test data. Explicit formulations of these projection filters are derived using the pseudoinverse matrices of the controllability and observability matrices in general use. A global minimum optimization algorithm is developed to update the filter parameters by using an interval analysis method. Modal parameters can be extracted and updated in the global sense within a specific region by passing the experimental data through the projection filters. For illustration of this method, a numerical example is shown by using a one-dimensional global optimization algorithm to estimate modal frequencies and dampings.

  9. Bag filters for TPP

    SciTech Connect

    L.V. Chekalov; Yu.I. Gromov; V.V. Chekalov

    2007-05-15

    Cleaning of TPP flue gases with bag filters capable of pulsed regeneration is examined. A new filtering element with a three-dimensional filtering material formed from a needle-broached cloth, in which the filtration area is increased by more than two times compared with a conventional smooth bag, is proposed. The design of a new FRMI type of modular filter is also proposed. A standard series of FRMI filters with a filtration area ranging from 800 to 16,000 m{sup 2} is designed for an output of more than 1 million m{sup 3}/h of cleaned gas. The new bag filter permits dry collection of sulfur oxides from waste gases at TPP operating on high-sulfur coals. The design of the filter makes it possible to replace filter elements without taking the entire unit out of service.

  10. HEPA filter monitoring program

    NASA Astrophysics Data System (ADS)

    Kirchner, K. N.; Johnson, C. M.; Aiken, W. F.; Lucerna, J. J.; Barnett, R. L.; Jensen, R. T.

    1986-07-01

    The testing and replacement of HEPA filters, widely used in the nuclear industry to purify process air, are costly and labor-intensive. Current methods of testing filter performance, such as differential pressure measurement and scanning air monitoring, allow determination of overall filter performance but preclude detection of incipient filter failure such as small holes in the filters. Using current technology, a continual in-situ monitoring system was designed which provides three major improvements over current methods of filter testing and replacement. The improvements include: cost savings by reducing the number of intact filters which are currently being replaced unnecessarily; more accurate and quantitative measurement of filter performance; and reduced personnel exposure to a radioactive environment by automatically performing most testing operations.

  11. Novel Backup Filter Device for Candle Filters

    SciTech Connect

    Bishop, B.; Goldsmith, R.; Dunham, G.; Henderson, A.

    2002-09-18

    The currently preferred means of particulate removal from process or combustion gas generated by advanced coal-based power production processes is filtration with candle filters. However, candle filters have not shown the requisite reliability to be commercially viable for hot gas clean up for either integrated gasifier combined cycle (IGCC) or pressurized fluid bed combustion (PFBC) processes. Even a single candle failure can lead to unacceptable ash breakthrough, which can result in (a) damage to highly sensitive and expensive downstream equipment, (b) unacceptably low system on-stream factor, and (c) unplanned outages. The U.S. Department of Energy (DOE) has recognized the need to have fail-safe devices installed within or downstream from candle filters. In addition to CeraMem, DOE has contracted with Siemens-Westinghouse, the Energy & Environmental Research Center (EERC) at the University of North Dakota, and the Southern Research Institute (SRI) to develop novel fail-safe devices. Siemens-Westinghouse is evaluating honeycomb-based filter devices on the clean-side of the candle filter that can operate up to 870 C. The EERC is developing a highly porous ceramic disk with a sticky yet temperature-stable coating that will trap dust in the event of filter failure. SRI is developing the Full-Flow Mechanical Safeguard Device that provides a positive seal for the candle filter. Operation of the SRI device is triggered by the higher-than-normal gas flow from a broken candle. The CeraMem approach is similar to that of Siemens-Westinghouse and involves the development of honeycomb-based filters that operate on the clean-side of a candle filter. The overall objective of this project is to fabricate and test silicon carbide-based honeycomb failsafe filters for protection of downstream equipment in advanced coal conversion processes. The fail-safe filter, installed directly downstream of a candle filter, should have the capability for stopping essentially all particulate

  12. Stepping motor controller

    DOEpatents

    Bourret, S.C.; Swansen, J.E.

    1982-07-02

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  13. Stepping motor controller

    DOEpatents

    Bourret, Steven C.; Swansen, James E.

    1984-01-01

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
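
    The pulse-gating logic common to both patents can be sketched abstractly (the encoder callback and delay schedule below are hypothetical stand-ins for the hardware, offered only to make the control flow concrete):

```python
def run_stepper(n_steps, encoder_confirms_step, delay_ticks):
    """Issue stepping pulses one at a time: a new pulse is generated only
    after the shaft encoder confirms the preceding step AND a fixed delay
    has expired. delay_ticks may vary per step for smooth deceleration."""
    pulses = 0
    for step in range(n_steps):
        pulses += 1                          # issue the stepping pulse
        while not encoder_confirms_step(step):
            pass                             # wait for shaft-encoder feedback
        for _ in range(delay_ticks(step)):
            pass                             # stand-in for the fixed delay
    return pulses

# Simulated encoder that always confirms, and a growing delay ramp
# (longer delays between pulses => controlled deceleration).
assert run_stepper(5, lambda step: True, lambda step: 1 + step) == 5
```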

  14. MST Filterability Tests

    SciTech Connect

    Poirier, M. R.; Burket, P. R.; Duignan, M. R.

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux, and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2 – 4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  15. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.

  16. Birefringent filter design by use of a modified genetic algorithm.

    PubMed

    Wen, Mengtao; Yao, Jianping

    2006-06-10

    A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike the standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with an improved sidelobe suppression ratio is realized. A 4-section birefringent filter designed with the algorithm is experimentally realized. PMID:16761031
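
    A generic evolutionary search over the section angles might look as follows. This is a plain elitist scheme with a placeholder fitness function, not the paper's modified algorithm or its actual sidelobe model:

```python
import numpy as np

rng = np.random.default_rng(2)

def ga_minimize(fitness, n_params, pop=40, gens=150, mut=0.05):
    """Elitist evolutionary search: averaging crossover + Gaussian mutation,
    then truncation selection over the union of parents and children."""
    P = rng.uniform(-np.pi, np.pi, size=(pop, n_params))  # e.g. section angles
    for _ in range(gens):
        partners = P[rng.integers(0, pop, pop)]
        children = 0.5 * (P + partners) + mut * rng.standard_normal(P.shape)
        both = np.vstack([P, children])
        f = np.array([fitness(p) for p in both])
        P = both[np.argsort(f)[:pop]]                     # keep the best half
    return P[0]

# Placeholder fitness standing in for a sidelobe-level computation.
best = ga_minimize(lambda th: float(np.sum(th ** 2)), n_params=4)
assert float(np.sum(best ** 2)) < 0.5
```

    The paper's modification amounts to shrinking the search space itself before running such a loop, which is where the reported speed-up comes from.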

  17. The ribosome filter redux.

    PubMed

    Mauro, Vincent P; Edelman, Gerald M

    2007-09-15

    The ribosome filter hypothesis postulates that ribosomes are not simply translation machines but also function as regulatory elements that differentially affect or filter the translation of particular mRNAs. On the basis of new information, we take the opportunity here to review the ribosome filter hypothesis, suggest specific mechanisms of action, and discuss recent examples from the literature that support it. PMID:17890902

  18. Filter service system

    DOEpatents

    Sellers, Cheryl L.; Nordyke, Daniel S.; Crandell, Richard A.; Tomlins, Gregory; Fei, Dong; Panov, Alexander; Lane, William H.; Habeger, Craig F.

    2008-12-09

    According to an exemplary embodiment of the present disclosure, a system for removing matter from a filtering device includes a gas pressurization assembly. An element of the assembly is removably attachable to a first orifice of the filtering device. The system also includes a vacuum source fluidly connected to a second orifice of the filtering device.

  19. HEPA filter encapsulation

    DOEpatents

    Gates-Anderson, Dianne D.; Kidd, Scott D.; Bowers, John S.; Attebery, Ronald W.

    2003-01-01

    A low viscosity resin is delivered into a spent HEPA filter or other waste. The resin is introduced into the filter or other waste using a vacuum to assist in the mass transfer of the resin through the filter media or other waste.

  20. Practical Active Capacitor Filter

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2005-01-01

    A method and apparatus are described that filter an electrical signal. The filtering uses a capacitor multiplier circuit comprising at least one amplifier circuit and at least one capacitor. A filtered electrical signal results from a direct connection from an output of the at least one amplifier circuit.

  1. Bayesian filtering in electronic surveillance

    NASA Astrophysics Data System (ADS)

    Coraluppi, Stefano; Carthel, Craig

    2012-06-01

    Fusion of passive electronic support measures (ESM) with active radar data enables tracking and identification of platforms in air, ground, and maritime domains. An effective multi-sensor fusion architecture adopts hierarchical real-time multi-stage processing. This paper focuses on the recursive filtering challenges. The first challenge is to achieve effective platform identification based on noisy emitter type measurements; we show that while optimal processing is computationally infeasible, a good suboptimal solution is available via a sequential measurement processing approach. The second challenge is to process waveform feature measurements that enable disambiguation in multi-target scenarios where targets may be using the same emitters. We show that an approach that explicitly considers the Markov jump process outperforms the traditional Kalman filtering solution.

  2. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  3. Parameter estimation with an iterative version of the adaptive Gaussian mixture filter

    NASA Astrophysics Data System (ADS)

    Stordal, A.; Lorentzen, R.

    2012-04-01

    The adaptive Gaussian mixture filter (AGM) was introduced in Stordal et al. (ECMOR 2010) as a robust filter technique for large scale applications and an alternative to the well known ensemble Kalman filter (EnKF). It consists of two analysis steps, one linear update and one weighting/resampling step. The bias of AGM is determined by two parameters, one adaptive weight parameter (forcing the weights to be more uniform to avoid filter collapse) and one pre-determined bandwidth parameter which decides the size of the linear update. It has been shown that if the adaptive parameter approaches one and the bandwidth parameter decreases with increasing sample size, the filter can achieve asymptotic optimality. For large scale applications with a limited sample size the filter solution may be far from optimal as the adaptive parameter gets close to zero depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be selected significantly different from zero in order to make large enough linear updates to match the data, at the expense of bias in the estimates. In the iterative AGM we take advantage of the fact that the history matching problem is usually estimation of parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter that gets close to one. Hence the bias of the filter solution is small. In order to obtain this scenario we iteratively run the AGM throughout the data history with a very small bandwidth to create a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data and the above scenario is achieved. A simple toy problem shows that it is possible to reconstruct the true posterior distribution using the iterative version of

  4. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
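
    A minimal Kalman filter of the kind whose parameters such a tool would tune might look like this (a 1-D constant-velocity model; the q and r values and the ramp scenario are arbitrary illustrations, not ENFAD outputs):

```python
import numpy as np

def kalman_filter(zs, q, r, dt=1.0):
    """Linear Kalman filter for a 1-D constant-velocity model.
    q (process noise) and r (measurement noise) are the tunable parameters."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition [pos, vel]
    H = np.array([[1.0, 0.0]])                # position-only measurement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 100.0     # diffuse initial covariance
    out = []
    for z in zs:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# Track a noisy ramp; the estimated velocity approaches the true slope (2.0).
rng = np.random.default_rng(3)
t = np.arange(100)
zs = 2.0 * t + rng.normal(0, 1.0, t.size)
est = kalman_filter(zs, q=0.01, r=1.0)
assert abs(est[-1, 1] - 2.0) < 0.5
```

    An outer optimization loop like ENFAD's would rerun this filter over Monte Carlo scenarios while varying q and r (and the choice of error states) to minimize estimation error.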

  5. The intractable cigarette ‘filter problem’

    PubMed Central

    2011-01-01

    Background When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the ‘filter problem’. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. Objective This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Methods Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. Results 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the ‘filter problem’. These reveal a period of intense focus on the ‘filter problem’ that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette

  6. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology due to the uniqueness and invariance of fingerprints, making it the most convenient and dependable technique for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and the fingerprint preprocessing algorithm plays an important part in such systems. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key step in fingerprint identification, image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm was demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The results show that the Gabor filter is effective for fingerprint image enhancement.
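
The Gabor kernel at the heart of such enhancement can be sketched directly from its definition: a cosine at a chosen orientation and spatial frequency under a Gaussian window. This is a generic illustration, not the paper's MATLAB code; the parameter values are arbitrary.

```python
import math

def gabor_kernel(size, theta, freq, sigma):
    """Real 2-D Gabor kernel: a cosine at orientation `theta` (radians)
    and spatial frequency `freq` (cycles/pixel), windowed by a Gaussian
    of width `sigma` (pixels)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the sinusoid runs along `theta`.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * freq * xr))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=9, theta=0.0, freq=0.1, sigma=2.0)
print(len(k), len(k[0]))  # 9 9
print(k[4][4])            # 1.0 at the centre: exp(0) * cos(0)
```

In fingerprint enhancement the orientation and frequency are chosen per image block from the local ridge direction and ridge spacing, so the filter reinforces ridges while smoothing noise.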

  7. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  8. Regenerative particulate filter development

    NASA Technical Reports Server (NTRS)

    Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

    1972-01-01

    Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

  9. Stepped frequency ground penetrating radar

    DOEpatents

    Vadnais, Kenneth G.; Bashforth, Michael B.; Lewallen, Tricia S.; Nammath, Sharyn R.

    1994-01-01

    A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
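
The range-recovery step described above can be illustrated with a small inverse DFT: a point target at a fixed delay puts a linear phase ramp across the stepped frequencies, and the transform concentrates the energy at the matching range bin. This is a simulation sketch, not the patented signal chain.

```python
import cmath

def range_profile(returns):
    """Inverse DFT of stepped-frequency returns: the magnitude peaks at
    the bin matching the target's round-trip delay (i.e. its range)."""
    n = len(returns)
    return [abs(sum(returns[k] * cmath.exp(2j * cmath.pi * k * m / n)
                    for k in range(n)) / n)
            for m in range(n)]

# Simulated point target at delay bin 5: the phase advances linearly
# with each frequency step (illustrative values, not hardware data).
N = 32
returns = [cmath.exp(-2j * cmath.pi * k * 5 / N) for k in range(N)]
profile = range_profile(returns)
print(profile.index(max(profile)))  # 5
```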

  10. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
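
A fixed-parameter stand-in for the wavelet filtering step can be sketched as convolution with a truncated real Morlet kernel; the centre frequency and bandwidth below are illustrative constants, whereas the paper selects them adaptively.

```python
import math

def morlet(t, fc, fb):
    """Real Morlet wavelet: cosine at centre frequency fc (Hz) inside a
    Gaussian envelope whose width is set by the bandwidth parameter fb."""
    return math.exp(-t * t / fb) * math.cos(2 * math.pi * fc * t)

def bandpass(signal, fs, fc, fb, half_width=64):
    """Convolve with a truncated, amplitude-normalised Morlet kernel."""
    kernel = [morlet(n / fs, fc, fb) for n in range(-half_width, half_width + 1)]
    scale = sum(abs(c) for c in kernel)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, c in enumerate(kernel):
            k = i + j - half_width
            if 0 <= k < len(signal):
                acc += c * signal[k]
        out.append(acc / scale)
    return out

fs = 1000.0                                   # sampling rate, Hz
tone = lambda f: [math.sin(2 * math.pi * f * n / fs) for n in range(512)]
rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
# A 100 Hz tone inside the passband survives; a 10 Hz tone is rejected.
print(rms(bandpass(tone(100), fs, fc=100, fb=0.001)) >
      rms(bandpass(tone(10), fs, fc=100, fb=0.001)))   # True
```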

  11. Bilateral step length estimation using a single inertial measurement unit attached to the pelvis

    PubMed Central

    2012-01-01

    Background The estimation of spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric (SP) system as a gold standard on nine subjects walking ten laps along a closed-loop track of about 25 m, varying their speed. For each loop, only the IMU data recorded in a 4 m long portion of the track included in the calibrated volume of the SP system were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot contact instants. A combination of a Kalman filter and of an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results The step length was estimated for all subjects with less than 3% error. Traversed distance was assessed with less than 2% error. Conclusions The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements obtained from a single IMU that can be found in the literature. In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily monitoring
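
The drift-suppressing integration at the core of such methods can be sketched as follows: acceleration is integrated to velocity over one step, the residual velocity at the next foot contact (which should be zero) is removed as a linear trend, and the corrected velocity is integrated to displacement. This is a simplified linear-detrend stand-in for the published optimally filtered direct and reverse integration, with made-up numbers.

```python
def step_displacement(acc, dt):
    """Displacement between two foot contacts: integrate acceleration to
    velocity, remove the residual end velocity (which should be zero at
    foot contact) as a linear trend, then integrate the corrected
    velocity. A constant sensor bias is cancelled exactly."""
    n = len(acc)
    v = [0.0]
    for a in acc[:-1]:
        v.append(v[-1] + a * dt)
    drift = v[-1] / (n - 1)          # residual velocity spread over the step
    v = [vi - drift * i for i, vi in enumerate(v)]
    return sum(vi * dt for vi in v)

# Symmetric accelerate-then-decelerate profile plus a constant bias.
dt, bias = 0.01, 0.05
acc = [1.0 + bias] * 50 + [-1.0 + bias] * 50
print(step_displacement(acc, dt))    # close to the true 0.25 m despite the bias
```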

  12. Ceramic fiber filter technology

    SciTech Connect

    Holmes, B.L.; Janney, M.A.

    1996-06-01

    Fibrous filters have been used for centuries to protect individuals from dust, disease, smoke, and other gases or particulates. In the 1970s and 1980s ceramic filters were developed for filtration of hot exhaust gases from diesel engines. Tubular, or candle, filters have been made to remove particles from gases in pressurized fluidized-bed combustion and gasification-combined-cycle power plants. Very efficient filtration is necessary in power plants to protect the turbine blades. The limited lifespan of ceramic candle filters has been a major obstacle in their development. The present work is focused on forming fibrous ceramic filters using a papermaking technique. These filters are highly porous and therefore very lightweight. The papermaking process consists of filtering a slurry of ceramic fibers through a steel screen to form paper. Papermaking and the selection of materials will be discussed, as well as preliminary results describing the geometry of papers and relative strengths.

  13. VSP wave separation by adaptive masking filters

    NASA Astrophysics Data System (ADS)

    Rao, Ying; Wang, Yanghua

    2016-06-01

    In vertical seismic profiling (VSP) data processing, the first step is often to separate the down-going wavefield from the up-going wavefield. When using a masking filter for VSP wave separation, difficulties arise at the two termination ends of the up-going waves. A critical challenge is how the masking filter can restore the energy tails; this edge effect associated with the terminations exists uniquely in VSP data. An effective strategy is to implement masking filters in both the τ-p and f-k domains sequentially. Meanwhile, a median filter produces a clean but smooth version of the down-going wavefield, which is used as a reference data set for designing the masking filter. The masking filter is applied adaptively and iteratively, gradually restoring the energy tails cut out by any surgical mute. Because the τ-p and f-k domain masking filters target different depth ranges of the VSP, this combination strategy can accurately separate the wavefields in field VSP data.
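
The median-filter step that builds the down-going reference can be sketched as a generic 1-D sliding-window operation (window width illustrative):

```python
def median_filter(trace, width=5):
    """Sliding-window median: removes spiky outliers while keeping the
    smooth trend (windows shrink at the trace ends)."""
    half = width // 2
    out = []
    for i in range(len(trace)):
        window = sorted(trace[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])
    return out

print(median_filter([0, 0, 0, 9, 0, 0, 0]))  # [0, 0, 0, 0, 0, 0, 0]
```

Unlike a mean, the median discards the spike entirely instead of smearing it, which is why it yields a clean but smooth reference wavefield.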

  14. Use of astronomy filters in fluorescence microscopy.

    PubMed

    Piper, Jörg

    2012-02-01

    Monochrome astronomy filters are well suited for use as excitation or suppression filters in fluorescence microscopy. Because of their particular optical design, such filters can be combined with standard halogen light sources for excitation in many fluorescent probes. In this "low energy excitation," photobleaching (fading) or other irritations of native specimens are avoided. Photomicrographs can be taken from living motile fluorescent specimens also with a flash so that fluorescence images can be created free from indistinctness caused by movement. Special filter cubes or dichroic mirrors are not needed for our method. By use of suitable astronomy filters, fluorescence microscopy can be carried out with standard laboratory microscopes equipped with condensers for bright-field (BF) and dark-field (DF) illumination in transmitted light. In BF excitation, the background brightness can be modulated in tiny steps up to dark or black. Moreover, standard industry microscopes fitted with a vertical illuminator for examinations of opaque probes in DF or BF illumination based on incident light (wafer inspections, for instance) can also be used for excitation in epi-illumination when adequate astronomy filters are inserted as excitatory and suppression filters in the illuminating and imaging light path. In all variants, transmission bands can be modulated by transmission shift. PMID:22225991

  15. A superior edge preserving filter with a systematic analysis

    NASA Technical Reports Server (NTRS)

    Holladay, Kenneth W.; Rickman, Doug

    1991-01-01

    A new, adaptive, edge preserving filter for use in image processing is presented. It showed superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of that cluster. The process is iterated until K pixels are accumulated. Rather than simply comparing the visual results of this operator with those of other filters, approaches were developed that allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
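
A minimal sketch of the contiguous K-average rule as described, assuming 4-connected neighbours and arbitrary tie-breaking (details the abstract does not specify):

```python
def contiguous_k_average(image, row, col, k):
    """Grow a cluster from (row, col): repeatedly add the 4-connected
    neighbour whose value is closest to the current cluster mean, until
    the cluster holds k pixels; the filtered value is that mean."""
    rows, cols = len(image), len(image[0])
    cluster = {(row, col)}
    total = image[row][col]
    while len(cluster) < k:
        mean = total / len(cluster)
        frontier = set()
        for r, c in cluster:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in cluster:
                    frontier.add((nr, nc))
        best = min(frontier, key=lambda p: abs(image[p[0]][p[1]] - mean))
        cluster.add(best)
        total += image[best[0]][best[1]]
    return total / len(cluster)

# A sharp vertical edge: left half 10, right half 50. The cluster grows
# on its own side of the edge, so the edge is preserved.
img = [[10, 10, 50, 50] for _ in range(4)]
print(contiguous_k_average(img, 1, 1, 4))  # 10.0
```

Because the cluster only absorbs pixels close to its own mean, averaging never straddles the edge, which is the filter's edge-preserving property.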

  16. 2-Step IMAT and 2-Step IMRT in three dimensions

    SciTech Connect

    Bratengeier, Klaus

    2005-12-15

    In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can adapt arbitrary diameters with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary, and the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the root of the integral of the squared difference of the dose in the target volume and a reference dose. For the problem, two local minima exist. Therefore, as secondary criteria, the magnitudes of hot and cold spots were taken into account. As a result, the solution with a larger segment width was recommended. From plane to plane, for varying radii of PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight per segment shall be used for all planes (respectively, leaf pairs), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of different radii of an OAR along the patient axis. The method is discussed for some classes of tumor/organ-at-risk combinations. Noncylindrically symmetric cases were treated exemplarily.
The product of width and weight of
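
In discretised form, the objective function f described above is the root of the mean squared deviation of the target-volume dose from the reference dose (a sketch over voxel samples, not the authors' planning code):

```python
import math

def dose_objective(dose_samples, reference_dose):
    """f = square root of the mean squared deviation of the target-volume
    dose from the reference dose (discrete form of the integral above)."""
    n = len(dose_samples)
    return math.sqrt(sum((d - reference_dose) ** 2 for d in dose_samples) / n)

# A uniform dose equal to the reference gives f = 0; deviations raise it.
print(dose_objective([60.0, 60.0, 60.0], 60.0))            # 0.0
print(round(dose_objective([58.0, 60.0, 62.0], 60.0), 3))  # 1.633
```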

  17. Compact planar microwave blocking filters

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop (Inventor); Wollack, Edward J. (Inventor)

    2012-01-01

    A compact planar microwave blocking filter includes a dielectric substrate and a plurality of filter unit elements disposed on the substrate. The filter unit elements are interconnected in a symmetrical series cascade with filter unit elements being organized in the series based on physical size. In the filter, a first filter unit element of the plurality of filter unit elements includes a low impedance open-ended line configured to reduce the shunt capacitance of the filter.

  18. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
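
The Kalman filter machinery underlying such estimators can be illustrated with a toy scalar predict/update cycle (purely illustrative; the paper's engine models are multivariate, and the tuning-vector selection is the actual contribution):

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter:
    x, p = state estimate and its variance; z = measurement;
    q, r = process and measurement noise variances."""
    # Predict (identity dynamics for this toy model).
    p = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    gain = p / (p + r)
    x = x + gain * (z - x)
    p = (1 - gain) * p
    return x, p

x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:       # noisy measurements of a true value 1.0
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
print(abs(x - 1.0) < 0.1, p < 0.1)   # True True
```

The estimate converges toward the true value while its variance p shrinks; in the paper, the analogous steady-state error is what the tuner selection minimises.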

  19. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at the pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
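
The Gaussian filtering step can be sketched as a 1-D weighted average across bands, with the weights renormalised at the edges (a generic version, not the authors' pipeline):

```python
import math

def gaussian_smooth(band, sigma):
    """1-D Gaussian filter: each sample becomes a Gaussian-weighted
    average of its neighbours, suppressing interband noise."""
    half = max(1, int(3 * sigma))
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-half, half + 1)]
    out = []
    for i in range(len(band)):
        acc = w = 0.0
        for j, g in enumerate(weights):
            k = i + j - half
            if 0 <= k < len(band):
                acc += g * band[k]
                w += g           # renormalise at the edges
        out.append(acc / w)
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7]
smooth = gaussian_smooth(noisy, sigma=1.0)
spread = lambda s: max(s) - min(s)
print(spread(smooth) < spread(noisy))  # True: the noise excursions shrink
```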

  20. Canonical Signed Digit Study. Part 2; FIR Digital Filter Simulation Results

    NASA Technical Reports Server (NTRS)

    Kim, Heechul

    1996-01-01

    A Finite Impulse Response (FIR) digital filter using Canonical Signed-Digit (CSD) number representation for the coefficients has been studied, and its computer simulation results are presented here. A Minimum Mean Square Error (MMSE) criterion is employed to optimize the filter coefficients into the corresponding CSD numbers. To further improve the coefficient optimization process, an extra non-zero bit is added for any filter coefficient exceeding 1/2. This technique improves the frequency response of the filter with almost no increase in filter complexity. The simulation results show outstanding bit-error-rate (BER) performance for all CSD-implemented digital filters included in this presentation material.
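
CSD recoding itself can be sketched with the standard non-adjacent-form algorithm: each coefficient becomes digits in {-1, 0, +1} with no two adjacent non-zeros, so each non-zero digit costs one adder/subtractor in a multiplier-less tap (generic sketch, not the simulation code):

```python
def csd(value, frac_bits):
    """Canonical signed-digit (non-adjacent form) digits of a fixed-point
    coefficient: digits in {-1, 0, +1}, least-significant first, with no
    two adjacent non-zeros."""
    n = round(value * (1 << frac_bits))
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 0.4375 = 7/16 = binary 0.0111: three non-zero bits in plain binary,
# but only two non-zero CSD digits: 1/2 - 1/16.
d = csd(0.4375, frac_bits=4)
print(d)  # [-1, 0, 0, 1]
print(sum(x * 2 ** i for i, x in enumerate(d)))  # 7  (i.e. 7/16)
```

Fewer non-zero digits means fewer shift-and-add terms per tap, which is the hardware motivation for CSD coefficients.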

  1. Steps in Behavior Modification.

    ERIC Educational Resources Information Center

    Straughan, James H.; And Others

    James H. Straughan lists five steps for modifying target behavior and four steps for working with teachers using behavior modification. Grant Martin and Harold Kunzelmann then outline an instructional program for pinpointing and recording classroom behaviors. (JD)

  2. Spin-filtering at COSY

    NASA Astrophysics Data System (ADS)

    Weidemann, Christian; PAX Collaboration

    2011-05-01

    The Spin Filtering experiments at COSY and at the AD at CERN, within the framework of the Polarized Antiproton EXperiments (PAX), are proposed to determine the spin-dependent cross sections in p̄p scattering by observation of the buildup of polarization of an initially unpolarized stored antiproton beam after multiple passages through an internal polarized gas target. In order to commission the experimental setup for the AD and to understand the relevant machine parameters, spin-filtering will first be done with protons at COSY. A first major step toward this goal was achieved with the installation of the required mini-β section in summer 2009 and its commissioning in January 2010. The target chamber, together with the atomic beam source and the so-called Breit-Rabi polarimeter, was installed and commissioned in summer 2010. In addition, an openable storage cell has been used. It provides a target thickness of 5·10^13 atoms/cm^2. We report on the status of spin-filtering experiments at COSY and the outcome of a recent beam time, including studies on beam lifetime limitations such as intra-beam scattering and the electron-cooling performance, as well as machine acceptance studies.

  3. Anti-resonance mixing filter

    NASA Technical Reports Server (NTRS)

    Evans, Paul S. (Inventor)

    2001-01-01

    In a closed-loop control system that governs the movement of an actuator, a filter is provided that attenuates the oscillations generated by the actuator when the actuator is at a resonant frequency. The filter is preferably coded into the control system and includes the following steps: sensing the position of the actuator with an LVDT, and sensing the position of the motor that drives the actuator through a gear train. When the actuator is at a resonant frequency, a lag is applied to the LVDT signal, which is then combined with the motor position signal to form a combined signal in which the oscillations generated by the actuator are attenuated. The control system then controls on this combined signal. This arrangement prevents the amplified resonance present on the LVDT signal from causing control instability, while retaining the steady-state accuracy associated with the LVDT signal. It is also a characteristic of this arrangement that the signal attenuation will always coincide with the load resonance frequency of the system, so that variations in the resonance frequency will not reduce the effectiveness of the filter.
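
The lag applied to the LVDT signal can be illustrated with a discrete first-order lag filter (hypothetical sample values; the patent does not give numeric parameters):

```python
import math

def first_order_lag(samples, alpha):
    """Discrete first-order lag y[n] = y[n-1] + alpha * (x[n] - y[n-1]):
    passes the steady-state level while attenuating fast ripple."""
    y, out = samples[0], []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# Position signal with a resonant oscillation riding on a 1.0 steady state.
lvdt = [1.0 + 0.2 * math.sin(2 * math.pi * 0.3 * n) for n in range(200)]
smoothed = first_order_lag(lvdt, alpha=0.1)
ripple = max(smoothed[50:]) - min(smoothed[50:])
print(ripple < 0.1)  # True: far below the raw 0.4 peak-to-peak
```

In the patented arrangement, the lagged LVDT signal is then blended with the motor position signal so the loop keeps the LVDT's steady-state accuracy without feeding back the resonance.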

  4. Optical filtering for star trackers

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.

    1973-01-01

    The optimization of optical filtering was investigated for tracking faint stars, down to the fifth magnitude. The effective wavelength and bandwidth for tracking pre-selected guide stars are discussed along with the results of an all-electronic tracker with a star tracking photomultiplier, which was tested with a simulated second magnitude star. Tables which give the sum of zodiacal light and galactic background light over the entire sky for intervals of five degrees in declination, and twenty minutes in right ascension are included.

  5. Fabric filter blinding mechanisms

    SciTech Connect

    Notestein, J.E.; Shang, J.Y.

    1982-08-01

    This discussion of various bag/cloth filter degradation mechanisms is mostly common sense. However, this information is occasionally lost in the subtleties of real-system operation. Although this paper is written with reference to fluidized-bed combustion (FBC) applications, the insights are generally applicable. For enumeration of particular filter fabric and baghouse experiences in FBC applications, the reader is referred to a report by Davy McKee Corporation (no date). A fabric filter is a composite matrix of fibers oriented to retain the dust particles from dust-laden gas. The cleaned gas passes through the fabric filter; the retained dust particles are deposited on the surface of (and within) the fiber matrix. The retained dust can later be removed through mechanical means. The fabric may be made of any fibrous material, spun into yarn, and then woven, impacted, needled, or bonded into a felt. Deep penetration of aggregated fine particles, lack of dust removal during filter cleaning, and chars or condensed aerosols may contribute to an increase in pressure drop across the filter. This increases the filter's operating power consumption and, consequently, reduces the filtration capacity. The phenomenon of a high pressure drop building up in spite of filter-cleaning provisions is known as blinding. In order to maintain an acceptable gas throughput, blinding problems must be addressed. Recommendations are given: maintain temperature above the dew point, use filter aids, bypass the filter during start-up or operational upsets, etc.

  6. Stepped Hydraulic Geometry in Stepped Channels

    NASA Astrophysics Data System (ADS)

    Comiti, F.; Cadol, D. D.; Wohl, E.

    2007-12-01

    Steep mountain streams typically present a stepped longitudinal profile. Such stepped channels feature tumbling flow, where hydraulic jumps represent an important source of channel roughness (spill resistance). However, the extent to which spill resistance persists up to high flows has not yet been ascertained; a faster, skimming flow has been envisaged to begin under those conditions. In order to analyze the relationship between flow resistance and bed morphology, a mobile-bed physical model was developed at Colorado State University (Fort Collins, USA). An 8 m-long, 0.6 m-wide flume tilted at a constant 14% slope was used, testing 2 grain-size mixtures differing only in the largest fraction. Experiments were conducted under clear-water conditions. Reach-averaged flow velocity was measured using salt tracers, bed morphology and flow depth by a point gage, and surface grain size using commercial image-analysis software. Starting from an initial plane bed, progressively higher flow rates were used to create different bed structures. After each bed morphology was stable with its forming discharge, lower-than-forming flows were run to build a hydraulic geometry curve. Results show that even though equilibrium slopes ranged from 8.5% to 14%, the reach-averaged flow was always sub-critical. Steps formed through a variety of mechanisms, with immobile clasts playing a dominant role by causing local scouring and/or trapping moving smaller particles. Overall, step height, step-pool steepness, and relative pool area and volume increased with discharge up to the threshold at which the bed approached fully-mobilized conditions. For bed morphologies surpassing a minimum profile roughness, a stepped velocity-discharge relationship is evident, with sharp rises in velocity correlated with the disappearance of rollers in pools at flows approaching the formative discharge for each morphology. 
Flow resistance exhibits an opposite pattern, with drops in resistance being a function

  7. Designing LC filters for AC-motor drives

    SciTech Connect

    Gath, P.A.; Lucas, M.

    1995-12-31

    This paper presents practical guidelines for designing LC filters for AC-motor drive applications. A DC choke and an electrolytic capacitor bank on the DC bus filter the voltage and current ripples and improve the input power factor. Capacitor and choke values are derived to optimize overall filter performance. Costs associated with the respective component values can then be obtained to analyze cost trade-offs between selected values. Helpful hints are also given.
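
A starting point for such trade-off studies is the filter's corner frequency, set by the choke and capacitor values (the component values below are hypothetical, not from the paper):

```python
import math

def lc_cutoff(l_henries, c_farads):
    """Corner (resonant) frequency of an LC filter: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Hypothetical DC-bus values: a 1 mH choke with a 4700 uF capacitor bank.
print(round(lc_cutoff(1e-3, 4700e-6)))  # 73 (Hz)
```

Placing this corner well below the rectifier ripple frequency attenuates the ripple; larger L and C push it lower at higher component cost, which is the trade-off the paper analyzes.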

  8. Permanent versus Retrievable Inferior Vena Cava Filters: Rethinking the "One-Filter-for-All" Approach to Mechanical Thromboembolic Prophylaxis.

    PubMed

    Ghatan, Christine E; Ryu, Robert K

    2016-06-01

    Inferior vena cava (IVC) filtration for thromboembolic protection is not without risks, and there are important differences among commercially available IVC filters. While retrievable filters are approved for permanent implantation, they may be associated with higher device-related complications in the long term when compared with permanent filters. Prospective patient selection in determining which patients might be better served by permanent or retrievable filter devices is central to resource optimization, in addition to improved clinical follow-up and a concerted effort to retrieve filters when no longer needed. This article highlights the differences between permanent and retrievable devices, describes the interplay between these differences and the clinical indications for IVC filtration, advises against a "one-filter-for-all" approach to mechanical thromboembolic prophylaxis, and discusses strategies for optimizing personalized device selection. PMID:27247474

  9. Filtering separators having filter cleaning apparatus

    SciTech Connect

    Margraf, A.

    1984-08-28

    This invention relates to filtering separators of the kind having a housing which is subdivided by a partition, provided with parallel rows of holes or slots, into a dust-laden gas space for receiving filter elements positioned in parallel rows and being impinged upon by dust-laden gas from the outside towards the inside, and a clean gas space. In addition, the housing is provided with a chamber for cleansing the filter element surfaces of a row by counterflow action while covering at the same time the partition holes or slots leading to the adjacent rows of filter elements. The chamber is arranged for the supply of compressed air to at least one injector arranged to feed compressed air and secondary air to the row of filter elements to be cleansed. The chamber is also reciprocatingly displaceable along the partition in periodic and intermittent manner. According to the invention, a surface of the chamber facing towards the partition covers at least two of the rows of holes or slots of the partition, and the chamber is closed upon itself with respect to the clean gas space, and is connected to a compressed air reservoir via a distributor pipe and a control valve. At least one of the rows of holes or slots of the partition and the respective row of filter elements in flow communication therewith are in flow communication with the discharge side of at least one injector acted upon with compressed air. At least one other row of the rows of holes or slots of the partition and the respective row of filter elements is in flow communication with the suction side of the injector.

  10. One Step to Learning.

    ERIC Educational Resources Information Center

    Thornton, Carol A.; And Others

    1980-01-01

    Described are activities and games incorporating a technique of "one step" which is used with children with learning difficulties. The purpose of "one step" is twofold, to minimize difficulties with typical trouble spots and to keep the step size of the instruction small. (Author/TG)

  11. A Step Circuit Program.

    ERIC Educational Resources Information Center

    Herman, Susan

    1995-01-01

    Aerobics instructors can use step aerobics to motivate students. One creative method is to add the step to the circuit workout. By incorporating the step, aerobic instructors can accommodate various fitness levels. The article explains necessary equipment and procedures, describing sample stations for cardiorespiratory fitness, muscular strength,…

  12. Optimization of Aperiodic Waveguide Mode Converters

    SciTech Connect

    Burke, G J; White, D A; Thompson, C A

    2004-12-09

    Previous studies by Haq, Webb and others have demonstrated the design of aperiodic waveguide structures to act as filters and mode converters. These aperiodic structures have been shown to yield high efficiency mode conversion or filtering in lengths considerably shorter than structures using gradual transitions and periodic perturbations. The design method developed by Haq and others has used mode-matching models for the irregular, stepped waveguides coupled with computer optimization to achieve the design goal using a Matlab optimization routine. Similar designs are described here, using a mode matching code written in Fortran and with optimization accomplished with the downhill simplex method with simulated annealing using an algorithm from the book Numerical Recipes in Fortran. Where Haq et al. looked mainly for waveguide shapes with relatively wide cavities, we have sought lower profile designs. It is found that lower profiles can meet the design goals and result in a structure with lower Q. In any case, there appear to be very many possible configurations for a given mode conversion goal, to the point that it is unlikely to find the same design twice. Tolerance analysis was carried out for the designs to show edge sensitivity and Monte Carlo degradation rate. The mode matching code and mode conversion designs were validated by comparison with FDTD solutions for the discontinuous waveguides.
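
    The optimization loop described above, downhill simplex with simulated annealing, can be sketched in its annealing part (a generic Metropolis-acceptance minimizer with a toy two-parameter objective standing in for the mode-matching figure of merit; all parameter values are illustrative, not the authors' settings):

```python
import math
import random

def anneal(objective, x0, step=0.1, t0=1.0, cooling=0.95,
           iters_per_temp=50, t_min=1e-3, seed=1):
    """Minimal simulated-annealing minimizer: Metropolis acceptance
    with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            # Perturb one randomly chosen parameter (e.g. one step width).
            cand = list(x)
            i = rng.randrange(len(cand))
            cand[i] += rng.gauss(0.0, step)
            fc = objective(cand)
            # Always accept downhill moves; accept uphill moves with
            # Boltzmann probability exp(-(fc - fx)/t).
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = list(x), fx
        t *= cooling  # geometric cooling
    return best_x, best_f

# Toy stand-in for a mode-conversion figure of merit: minimum at (0.5, -0.3).
toy = lambda p: (p[0] - 0.5) ** 2 + (p[1] + 0.3) ** 2
sol, val = anneal(toy, [0.0, 0.0])
```

In the actual design problem the objective would be the mode-matching model's conversion efficiency rather than this quadratic toy.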

  13. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains code for a generic Kalman-filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions, and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data.
The generic Kalman-filter function uses the aforementioned data structures and five implementation-specific subfunctions, which have been developed by the user on
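
    The predict/update cycle that such a generic linear Kalman-filter function implements can be sketched in scalar form (a minimal illustration, not the GKF's ANSI C interface; the constant-value tracking example and all noise parameters are invented for demonstration):

```python
import random

def kf_predict(x, p, f, q):
    """Propagate the scalar state estimate x and its variance p
    through the model x' = f*x with process-noise variance q."""
    return f * x, f * p * f + q

def kf_update(x, p, z, h, r):
    """Fuse a measurement z = h*x + noise (variance r) into the estimate."""
    s = h * p * h + r          # innovation variance
    k = p * h / s              # Kalman gain
    x_new = x + k * (z - h * x)
    p_new = (1.0 - k * h) * p
    return x_new, p_new

# Track a constant value 5.0 from noisy measurements.
rng = random.Random(0)
x, p = 0.0, 1.0
for _ in range(200):
    x, p = kf_predict(x, p, f=1.0, q=1e-4)
    x, p = kf_update(x, p, z=5.0 + rng.gauss(0, 0.5), h=1.0, r=0.25)
```

The matrix generalization replaces the scalars with the state vector, covariance matrix, and gain matrix, which is what the generic function's data structures carry.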

  14. Robust Frequency Domain Acoustic Echo Cancellation Filter Employing Normalized Residual Echo Enhancement

    NASA Astrophysics Data System (ADS)

    Shimauchi, Suehiro; Haneda, Yoichi; Kataoka, Akitoshi

    We propose a new robust frequency domain acoustic echo cancellation filter that employs normalized residual echo enhancement. By interpreting the conventional robust step-size control approaches as a statistical-model-based residual echo enhancement problem, the optimal step-size introduced in most conventional approaches is seen to be optimal only under the assumption that both the residual echo and the outlier in the error output signal are described by Gaussian distributions. However, the Gaussian-Gaussian mixture assumption does not always hold well, especially when both the residual echo and the outlier are speech signals (known as a double-talk situation). The proposed filtering scheme is based on a Gaussian-Laplacian mixture assumption for the signals normalized by the reference input signal amplitude. By comparing the performances of the proposed and conventional approaches through simulations, we show that the Gaussian-Laplacian mixture assumption for the normalized signals provides a better control scheme for acoustic echo cancellation.
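
    The idea of normalizing the adaptation step by the reference signal can be illustrated with a plain time-domain NLMS adaptive filter (a simplified stand-in, not the authors' frequency-domain Gaussian-Laplacian scheme; the echo path h and all signal parameters are invented):

```python
import random

def nlms_echo_cancel(x, d, taps=8, mu=0.5, eps=1e-8):
    """Time-domain NLMS: adapt weights w so that w . x_window tracks
    the echo contained in the microphone signal d."""
    w = [0.0] * taps
    buf = [0.0] * taps
    errors = []
    for n in range(len(x)):
        buf = [x[n]] + buf[:-1]                       # most-recent-first window
        y = sum(wi * xi for wi, xi in zip(w, buf))    # echo estimate
        e = d[n] - y                                  # residual (error output)
        norm = sum(xi * xi for xi in buf) + eps
        # Normalized step: high reference power -> smaller effective step.
        for i in range(taps):
            w[i] += mu * e * buf[i] / norm
        errors.append(e)
    return w, errors

# Synthetic echo path: d is x filtered by a short FIR.
rng = random.Random(0)
h = [0.6, -0.3, 0.1]
x = [rng.gauss(0, 1) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w, errors = nlms_echo_cancel(x, d)
```

Robust variants such as the one proposed here additionally scale this step by a statistical weighting of the error, which is where the Gaussian versus Laplacian modeling enters.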

  15. The J-PAS filter system

    NASA Astrophysics Data System (ADS)

    Marin-Franch, Antonio; Taylor, Keith; Cenarro, Javier; Cristobal-Hornillos, David; Moles, Mariano

    2015-08-01

    J-PAS (Javalambre-PAU Astrophysical Survey) is a Spanish-Brazilian collaboration to conduct a narrow-band photometric survey of 8500 square degrees of northern sky using an innovative filter system of 59 filters: 56 relatively narrow-band (FWHM = 14.5 nm) filters continuously populating the spectrum between 350 and 1000 nm in 10 nm steps, plus 3 broad-band filters. This filter system will be able to produce photometric redshifts with a precision of 0.003(1 + z) for Luminous Red Galaxies, allowing J-PAS to measure the radial scale of the Baryonic Acoustic Oscillations. The J-PAS survey will be carried out using JPCam, a 14-CCD mosaic camera using the new e2v 9k-by-9k, 10 μm pixel CCDs mounted on the JST/T250, a dedicated 2.55 m wide-field telescope at the Observatorio Astrofísico de Javalambre (OAJ) near Teruel, Spain. The filters will operate in a fast (f/3.6) converging beam. The requirements for average transmissions greater than 85% in the passband, <10⁻⁵ blocking from 250 to 1050 nm, steep bandpass edges, and high image quality impose significant challenges for the production of the J-PAS filters that have demanded the development of new design solutions. This talk presents the J-PAS filter system and describes the most challenging requirements and adopted design strategies. Measurements and tests of the first manufactured filters are also presented.

  16. An image-dependent Metz filter for nuclear medicine images.

    PubMed

    King, M A; Penney, B C; Glick, S J

    1988-12-01

    To provide optimal image quality, digital filters should account for both the count level and the object imaged. That is, they should be image-dependent. By using the constraint equation of constrained least-squares (CLS) restoration to determine one parameter of the Metz filter, a filter which adapts to the image has been developed. This filter has been named the Constrained Least-Squares Metz filter. The filter makes use of a regression relation to convert the Metz filter parameter determined using the CLS criterion to the value which would minimize the normalized mean square error (NMSE). The regression relation and the parameters which specify the general form of the Metz filter were determined using images of the Alderson liver and spleen phantoms. The designed filter was tested for its ability to adapt to other objects with images from each of three different test objects. When the values of the Metz filter parameters for these images determined by the CLS-Metz filter were compared by a regression analysis to those which minimized the NMSE for each image, a correlation coefficient of 0.98, a slope of 0.95, and a y-intercept of 0.1 were obtained. With clinical images, the CLS-Metz filter has been shown to provide consistently good image quality with images as diverse as heart perfusion images and bone studies. PMID:3264021
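
    The Metz filter referred to above has the closed form M(f) = [1 − (1 − MTF(f)²)^X] / MTF(f), where the order parameter X is the quantity the CLS criterion selects. A minimal sketch with an assumed Gaussian MTF model (the phantom-derived MTF and the regression relation from the paper are not reproduced here):

```python
import math

def metz(mtf, order):
    """Metz filter gain for one MTF sample and order parameter X:
    M = (1 - (1 - MTF^2)^X) / MTF.
    Gain ~1 at low frequency, >1 at mid frequencies (restoration),
    and rolls off toward 0 where the MTF is small (noise suppression)."""
    return (1.0 - (1.0 - mtf * mtf) ** order) / mtf

# Assumed Gaussian MTF model; f is spatial frequency in cycles/pixel.
mtf = lambda f, sigma=0.5: math.exp(-2.0 * (math.pi * sigma * f) ** 2)
gains = [metz(mtf(f / 64.0), order=3.0) for f in range(1, 33)]
```

Larger X pushes the filter toward pure restoration (more amplification before rolloff); the image-dependent step in the paper is the automatic choice of X from the image itself.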

  17. INEEL HEPA Filter Leach System: A Mixed Waste Solution

    SciTech Connect

    K. Archibald; K. Brewer; K. Kline; K. Pierson; K. Shackelford; M. Argyle; R. Demmer

    1999-02-01

    Calciner operations and the fuel dissolution process at the Idaho National Engineering and Environmental Laboratory have generated many mixed waste high-efficiency particulate air (HEPA) filters. The HEPA Filter Leach System located at the Idaho Nuclear Technology and Engineering Center lowers radiation contamination levels and reduces cadmium, chromium, and mercury concentrations on spent HEPA filter media to below disposal limits set by the Resource Conservation and Recovery Act (RCRA). The treated HEPA filters are disposed as low-level radioactive waste. The technical basis for the existing system was established and optimized in initial studies using simulants in 1992. The treatment concept was validated for EPA approval in 1994 by leaching six New Waste Calcining Facility spent HEPA filters. Post-leach filter media sampling results for all six filters showed that both hazardous and radiological constituent levels were reduced so the filters could be disposed of as low-level radioactive waste. Since the validation tests, the HEPA Filter Leach System has processed 78 filters in 1997 and 1998. The Idaho National Engineering and Environmental Laboratory HEPA Filter Leach System is the only mixed waste HEPA treatment system in the DOE complex. This process is of interest to many of the other DOE facilities and commercial companies that have generated mixed waste HEPA filters but currently do not have a treatment option available.

  18. INEEL HEPA Filter Leach System: A Mixed Waste Solution

    SciTech Connect

    Argyle, Mark Don; Demmer, Ricky Lynn; Archibald, Kip Ernest; Brewer, Ken Neal; Pierson, Kenneth Alan; Shackelford, Kimberlee Rene; Kline, Kelli Suzanne

    1999-03-01

    Calciner operations and the fuel dissolution process at the Idaho National Engineering and Environmental Laboratory have generated many mixed waste high-efficiency particulate air (HEPA) filters. The HEPA Filter Leach System located at the Idaho Nuclear Technology and Engineering Center lowers radiation contamination levels and reduces cadmium, chromium, and mercury concentrations on spent HEPA filter media to below disposal limits set by the Resource Conservation and Recovery Act (RCRA). The treated HEPA filters are disposed as low-level radioactive waste. The technical basis for the existing system was established and optimized in initial studies using simulants in 1992. The treatment concept was validated for EPA approval in 1994 by leaching six New Waste Calcining Facility spent HEPA filters. Post-leach filter media sampling results for all six filters showed that both hazardous and radiological constituent levels were reduced so the filters could be disposed of as low-level radioactive waste. Since the validation tests, the HEPA Filter Leach System has processed 78 filters in 1997 and 1998. The Idaho National Engineering and Environmental Laboratory HEPA Filter Leach System is the only mixed waste HEPA treatment system in the DOE complex. This process is of interest to many of the other DOE facilities and commercial companies that have generated mixed waste HEPA filters but currently do not have a treatment option available.

  19. A novel temporal filtering strategy for functional MRI using UNFOLD.

    PubMed

    Domsch, S; Lemke, A; Weingärtner, S; Schad, L R

    2012-08-01

    A major challenge for fMRI at high spatial resolution is the limited temporal resolution. The UNFOLD method increases image acquisition speed and potentially enables high acceleration factors in fMRI. Spatial aliasing artifacts due to interleaved k-space sampling are to be removed from the image time series by temporal filtering before statistical mapping in the time domain can be carried out. So far, low-pass filtering and multi-band filtering have been proposed. Particularly at high UNFOLD factors both methods are non-optimal. Low-pass filtering severely degrades temporal resolution and multi-band filtering leads to temporal autocorrelations affecting statistical modelling of activation. In this work, we present a novel temporal filtering strategy that significantly reduces temporal autocorrelations compared to multi-band filtering. Two datasets (finger-tapping and resting state) were post-processed using the proposed and the multi-band filter with varying set-ups (i.e. transition bands). When the proposed filtering strategy was used, a linear regression analysis revealed that the number of false positives was significantly decreased up to 34% whereas the number of activated voxels was not significantly affected for most filter parameters. In total, this led to an effective increase in the number of activated voxels per false positive for each filter set-up. At a significance level of 5%, the number of activated voxels was increased up to 41% by using the proposed filtering strategy. PMID:22484204

  20. Analysis of characteristic of microwave regeneration for diesel particulate filter

    SciTech Connect

    Ning Zhi; Zhang Guanglong; Lu Yong; Liu Junmin; Gao Xiyan; Liang Iunhui; Chen Jiahua

    1995-12-31

    A mathematical model for the microwave regeneration of a diesel particulate filter is proposed according to the characteristics of the microwave regeneration process. The model is used to calculate the temperature field, the distribution of particulate, and the density field of oxygen in the filter during regeneration, with typical ceramic foam particulate filter data. The parametric study demonstrates how the main parameters, such as the microwave attenuation constant of the filter, filter particulate loading, and the power and distribution of microwave energy, affect the efficiency of regeneration, the maximum filter temperature, and the regeneration duration. The results show that it is possible to regenerate diesel particulate filters under certain conditions by using microwave energy. This paper gives a comprehensive understanding of the main factors affecting the microwave regeneration process and provides a theoretical basis for the optimal design of the microwave regeneration system.

  1. Highly tunable microwave and millimeter wave filtering using photonic technology

    NASA Astrophysics Data System (ADS)

    Seregelyi, Joe; Lu, Ping; Paquet, Stéphane; Celo, Dritan; Mihailov, Stephen J.

    2015-05-01

    The design for a photonic microwave filter tunable in both bandwidth and operating frequency is proposed and experimentally demonstrated. The circuit is based on a single sideband modulator used in conjunction with two or more transmission fiber Bragg gratings (FBGs) cascaded in series. It is demonstrated that the optical filtering characteristics of the FBGs are instrumental in defining the shape of the microwave filter, and numerical modeling was used to optimize these characteristics. A multiphase-shift transmission FBG design is used to increase the dynamic range of the filter, control the filter ripple, and maximize the slope of the filter skirts. Initial measurements confirmed the design theory and demonstrated a working microwave filter with a bandwidth tunable from approximately 2 to 3.5 GHz and an 18 GHz operating frequency tuning range. Further work is required to refine the FBG manufacturing process and reduce the impact of fabrication errors.

  2. Concentric Split Flow Filter

    NASA Technical Reports Server (NTRS)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to effectively fill the volume that was previously considered wasted surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through the second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 as compared to the conventional design.

  3. Optically tunable optical filter

    NASA Astrophysics Data System (ADS)

    James, Robert T. B.; Wah, Christopher; Iizuka, Keigo; Shimotahira, Hiroshi

    1995-12-01

    We experimentally demonstrate an optically tunable optical filter that uses photorefractive barium titanate. With our filter we implement a spectrum analyzer at 632.8 nm with a resolution of 1.2 nm. We simulate a wavelength-division multiplexing system by separating two semiconductor laser diodes, at 1560 nm and 1578 nm, with the same filter. The filter has a bandwidth of 6.9 nm. We also use the same filter to take 2.5-nm-wide slices out of a 20-nm-wide superluminescent diode centered at 840 nm. As a result, we experimentally demonstrate a phenomenal tuning range from 632.8 to 1578 nm with a single filtering device.

  4. Contactor/filter improvements

    DOEpatents

    Stelman, D.

    1988-06-30

    A contactor/filter arrangement for removing particulate contaminants from a gaseous stream is described. The filter includes a housing having a substantially vertically oriented granular material retention member with upstream and downstream faces, a substantially vertically oriented microporous gas filter element, wherein the retention member and the filter element are spaced apart to provide a zone for the passage of granular material therethrough. A gaseous stream containing particulate contaminants passes through the gas inlet means as well as through the upstream face of the granular material retention member, passing through the retention member, the body of granular material, the microporous gas filter element, exiting out of the gas outlet means. A cover screen isolates the filter element from contact with the moving granular bed. In one embodiment, the granular material is comprised of porous alumina impregnated with CuO, with the cover screen cleaned by the action of the moving granular material as well as by backflow pressure pulses. 6 figs.

  5. STEP: A Futurevision, Today

    NASA Technical Reports Server (NTRS)

    1994-01-01

    STEP (STandard for the Exchange of Product Model Data) is an innovative software tool that allows data to be exchanged between different programming systems and helps speed up design in various process industries. This exchange occurs easily between those companies that have STEP, and many industries and government agencies are requiring that their vendors utilize STEP in their computer-aided design projects, such as in the areas of mechanical, aeronautical, and electrical engineering. STEP enables concurrent engineering and increases the quality of the designed product. One example of the STEP program is the Boeing 777, the first paperless airplane.

  6. Real time inverse filter focusing through iterative time reversal.

    PubMed

    Montaldo, Gabriel; Tanter, Mickaël; Fink, Mathias

    2004-02-01

    In order to achieve optimal focusing through heterogeneous media we need to build the inverse filter of the propagation operator. Time reversal is an easy and robust way to achieve such an inverse filter in nondissipative media. However, as soon as losses appear in the medium, time reversal is no longer equivalent to the inverse filter. Consequently, it does not produce the optimal focusing, and beam degradations may appear. In such cases, we showed in previous works that the optimal focusing can be recovered by using the so-called spatiotemporal inverse filter technique. This process requires the presence of a complete set of receivers inside the medium. It allows one to reach the optimal focusing even in extreme situations such as ultrasonic focusing through the human skull or audible sound focusing in strongly reverberant rooms. However, this technique is time consuming and involves tedious numerical calculations. In this paper we propose a new way to perform this inverse filter focusing technique in real time and without any calculation. The new process is based on an iterative time reversal process. Contrary to the classical inverse filter technique, this iteration does not require any computation and achieves the inverse filter experimentally, using wave propagation instead of computational power. The convergence from time reversal to inverse filter during the iterative process is theoretically explained. Finally, the feasibility of this iterative technique is experimentally demonstrated for ultrasound applications. PMID:15000188

  7. Design-Filter Selection for H2 Control of Microgravity Isolation Systems: A Single-Degree-of-Freedom Case Study

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Whorton, Mark S.

    2000-01-01

    Many microgravity space-science experiments require active vibration isolation to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choices of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations that were noted during the design process are presented, along with explanations based on the existing theory for such problems.

  8. Hybrid Filter Membrane

    NASA Technical Reports Server (NTRS)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with discreet particle size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter consists of a thin design intended to facilitate filter regeneration by localized air pulsing. The two main features of this invention are the micro-engineered straight-pore membrane and the nanofiber coating. The micro-engineered straight pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles.
Additionally, the thin nanofiber coating is designed to promote capture of

  9. Filter vapor trap

    DOEpatents

    Guon, Jerold

    1976-04-13

    A sintered filter trap is adapted for insertion in a gas stream of sodium vapor to condense and deposit sodium thereon. The filter is heated and operated above the melting temperature of sodium, resulting in a more efficient means to remove sodium particulates from the effluent inert gas emanating from the surface of a liquid sodium pool. Preferably the filter leaves are precoated with a natrophobic coating such as tetracosane.

  10. Thermal control design of the Lightning Mapper Sensor narrow-band spectral filter

    NASA Technical Reports Server (NTRS)

    Flannery, Martin R.; Potter, John; Raab, Jeff R.; Manlief, Scott K.

    1992-01-01

    The performance of the Lightning Mapper Sensor is dependent on the temperature shifts of its narrowband spectral filter. To perform over a 10 degree FOV with an 0.8 nm bandwidth, the filter must be 15 cm in diameter and mounted externally to the telescope optics. The filter thermal control required a filter design optimized for minimum bandpass shift with temperature, a thermal analysis of substrate materials for maximum temperature uniformity, and a thermal radiation analysis to determine the parameter sensitivity of the radiation shield for the filter, the filter thermal recovery time after occultation, and heater power to maintain filter performance in the earth-staring geosynchronous environment.

  11. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
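
    The RA/RAW modification described above can be demonstrated on the test problem dx/dt = iωx, whose exact solution has constant amplitude. The filter displacement d = (ν/2)(x_{n−1} − 2x_n + x_{n+1}) is applied entirely to the middle time level in the classic RA filter (α = 1), and split between the two newest levels in the RAW filter (α ≈ 0.53). A minimal sketch; the parameter values ν, ωΔt, and step count are illustrative:

```python
import math

def leapfrog_amplitude(alpha, omega=1.0, dt=0.2, nu=0.1, steps=500):
    """Leapfrog integration of dx/dt = i*omega*x with RA/RAW filtering.
    alpha = 1.0 gives the classic Robert-Asselin filter;
    alpha ~ 0.53 gives the RAW filter. The exact solution keeps |x| = 1."""
    xm = complex(1.0, 0.0)                                    # x at step n-1
    x = complex(math.cos(omega * dt), math.sin(omega * dt))   # exact x at step n
    for _ in range(steps):
        xp = xm + 2.0 * dt * 1j * omega * x   # unfiltered leapfrog step to n+1
        d = 0.5 * nu * (xm - 2.0 * x + xp)    # filter displacement
        xm = x + alpha * d                    # RAW: correct level n by alpha*d ...
        x = xp + (alpha - 1.0) * d            # ... and level n+1 by (alpha-1)*d
    return abs(x)

ra_amp = leapfrog_amplitude(alpha=1.0)    # classic RA: spurious damping
raw_amp = leapfrog_amplitude(alpha=0.53)  # RAW: damping largely eliminated
```

With α = 1 the second correction vanishes and the scheme reduces to the standard leapfrog-RA method, which visibly damps the oscillation; with α ≈ 0.53 the amplitude stays close to the exact value.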

  12. Westinghouse filter update

    SciTech Connect

    Bruck, G.J.; Smeltzer, E.E.; Newby, R.A.; Bachovchin, D.M.

    1993-06-01

    The Department of Energy, Morgantown Energy Technology Center (DOE/METC), together with Westinghouse, is developing high temperature particulate filters for application in integrated coal gasification combined cycle (IGCC) and pressurized fluidized bed combustion (PFBC) power generation systems. Development of these IGCC and PFBC advanced power cycles using subpilot and pilot scale facilities includes the integrated operation of a high temperature particulate filter. This testing provides the basis for evaluating filter design, performance, and operation characteristics in the actual process gas environment. This operating data is essential for the specification of components and materials and for successful scaleup of the filter systems for demonstration and commercial application.

  13. Independent task Fourier filters

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    2001-11-01

    Since the early 1960s, a major part of optical computing systems has been Fourier pattern recognition, which takes advantage of high speed filter changes to enable powerful nonlinear discrimination in `real time.' Because each filter has a task quite independent of the tasks of the other filters, the filters can be applied and evaluated in parallel or, in a simple approach I describe, in sequence very rapidly. Thus I use the name ITFF (independent task Fourier filter). These filters can also break very complex discrimination tasks into easily handled parts, so the wonderful space invariance properties of Fourier filtering need not be sacrificed to achieve high discrimination and good generalizability even for ultracomplex discrimination problems. The training procedure proceeds sequentially, as the task for a given filter is defined a posteriori by declaring it to be the discrimination of particular members of set A from all members of set B with sufficient margin. That is, we set the threshold to achieve the desired margin and note the A members discriminated by that threshold. Discriminating those A members from all members of B becomes the task of that filter. Those A members are then removed from set A, so no other filter will be asked to perform that already accomplished task.
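
    The sequential training procedure described above can be sketched abstractly (a toy illustration in which "filters" are plain scoring functions on feature vectors; the actual ITFF operates on Fourier-plane correlation outputs):

```python
def train_itff(filters, set_a, set_b, margin=0.1):
    """Sequential a-posteriori task assignment for a bank of filters.
    Each filter's threshold is set just above its strongest response to
    any B member (plus the required margin); the A members it thereby
    discriminates become its task and are removed from the pool, so no
    later filter repeats work already accomplished."""
    remaining = list(set_a)
    tasks = []
    for f in filters:
        if not remaining:
            break
        thr = max(f(b) for b in set_b) + margin   # margin above all B responses
        claimed = [a for a in remaining if f(a) >= thr]
        tasks.append((thr, claimed))
        remaining = [a for a in remaining if f(a) < thr]
    return tasks, remaining

# Toy example: 2-component "correlation scores", one filter per feature axis.
filters = [lambda v: v[0], lambda v: v[1]]
set_a = [(1.0, 0.0), (0.0, 1.0)]
set_b = [(0.1, 0.1)]
tasks, leftover = train_itff(filters, set_a, set_b)
```

An empty leftover list means the filter bank covers all of set A with the desired margin against set B.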

  14. Nanofiber Filters Eliminate Contaminants

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With support from Phase I and II SBIR funding from Johnson Space Center, Argonide Corporation of Sanford, Florida tested and developed its proprietary nanofiber water filter media. Capable of removing more than 99.99 percent of dangerous particles like bacteria, viruses, and parasites, the media was incorporated into the company's commercial NanoCeram water filter, an inductee into the Space Foundation's Space Technology Hall of Fame. In addition to its drinking water filters, Argonide now produces large-scale nanofiber filters used as part of the reverse osmosis process for industrial water purification.

  15. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.

  17. Frequency weighting filter design for automotive ride comfort evaluation

    NASA Astrophysics Data System (ADS)

    Du, Feng

    2016-04-01

    Few studies give guidance on designing weighting filters from the frequency weighting factors, and the supplementary evaluation method for automotive ride comfort is underused in some countries. Based on the regularities of the weighting factors, a method is proposed and vertical and horizontal weighting filters are developed. The whole frequency range is divided, in several successive ways, into two parts, each part following its own regularity. For each division, a parallel filter consisting of a low-pass and a high-pass filter with the same cutoff frequency and quality factor is used to realize the section weighting factors; cascading these parallel filters yields the overall factors. The resulting filters are of high order, but low-order filters are preferred in some applications. The bilinear transformation method and the least P-norm optimal infinite impulse response (IIR) filter design method are therefore employed to develop low-order filters that approximate the weightings in the standard. In addition, a linear-phase finite impulse response (FIR) filter is designed with the window method to avoid signal distortion and to obtain the staircase weighting. For the same case, the traditional method produces a weighted root mean square (r.m.s.) acceleration of 0.3307 m·s⁻², the filtering method gives 0.3119 m·s⁻², and the fourth-order filter approximating the vertical weighting gives 0.3139 m·s⁻². Crest factors of the acceleration signal weighted by the weighting filter and by the fourth-order filter are 3.0027 and 3.0111, respectively. This paper proposes several methods to design frequency weighting filters for automotive ride comfort evaluation, and the developed weighting filters are effective.
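    The parallel low-/high-pass construction can be illustrated with second-order analog sections sharing a cutoff frequency and quality factor, with gains g_lo and g_hi setting the weighting factor on either side of the cutoff. A sketch under these assumptions (the `section_weight` helper and its parameter values are illustrative, not the paper's coefficients):

```python
import cmath
import math

def section_weight(f_hz, f_cut_hz, q, g_lo, g_hi):
    """Magnitude of g_lo*H_lp + g_hi*H_hp at frequency f_hz, where H_lp and
    H_hp are second-order low-/high-pass sections sharing cutoff and Q."""
    s = 1j * 2.0 * math.pi * f_hz
    w0 = 2.0 * math.pi * f_cut_hz
    denom = s * s + (w0 / q) * s + w0 * w0
    h_lp = (w0 * w0) / denom   # -> 1 well below cutoff, -> 0 well above
    h_hp = (s * s) / denom     # -> 0 well below cutoff, -> 1 well above
    return abs(g_lo * h_lp + g_hi * h_hp)

# Well below the cutoff the weight approaches g_lo; well above it, g_hi.
low = section_weight(1.0, 100.0, 0.707, g_lo=0.5, g_hi=1.0)
high = section_weight(10_000.0, 100.0, 0.707, g_lo=0.5, g_hi=1.0)
```

    Cascading several such sections with different cutoffs then stitches these two-level weights into the multi-segment weighting curve described above.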

  18. Development and evaluation of a cleanable high efficiency steel filter

    SciTech Connect

    Bergman, W.; Larsen, G.; Weber, F.; Wilson, P.; Lopez, R.; Valha, G.; Conner, J.; Garr, J.; Williams, K.; Biermann, A.; Wilson, K.; Moore, P.; Gellner, C.; Rapchun, D.; Simon, K.; Turley, J.; Frye, L.; Monroe, D.

    1993-01-01

    We have developed a high efficiency steel filter that can be cleaned in-situ by reverse air pulses. The filter consists of 64 pleated cylindrical filter elements packaged into a 610 × 610 × 292 mm aluminum frame and has 13.5 m² of filter area. The filter media consists of a sintered steel fiber mat using 2 µm diameter fibers. We conducted an optimization study for filter efficiency and pressure drop to determine the filter design parameters of pleat width, pleat depth, outside diameter of the cylinder, and the total number of cylinders. Several prototype cylinders were then built and evaluated in terms of filter cleaning by reverse air pulses. The results of these studies were used to build the high efficiency steel filter. We evaluated the prototype filter for efficiency and cleanability. The DOP filter certification test showed the filter has a passing efficiency of 99.99% but a failing pressure drop of 0.80 kPa at 1,700 m³/hr. Since we were not able to achieve a pressure drop less than 0.25 kPa, the steel filter does not meet all the criteria for a HEPA filter. Filter loading and cleaning tests using AC Fine dust showed the filter could be repeatedly cleaned by reverse air pulses. The next phase of the prototype evaluation consisted of installing the unit and support housing in the exhaust duct work of a uranium grit blaster for a field evaluation at the Y-12 Plant in Oak Ridge, TN. The grit blaster is used to clean the surface of uranium parts and generates a cloud of UO₂ aerosols. We used a 1,700 m³/hr slip stream from the 10,200 m³/hr exhaust system.

  20. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high; when that confidence is low, heuristic knowledge is used to constrain the state estimates. The confidence measure is based on the agreement of the measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
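    The residual-based blending idea can be sketched for a scalar state. This is a minimal illustration assuming a Gaussian-likelihood confidence measure and an interval constraint; the `constrained_update` helper and its weighting rule are hypothetical, not the paper's tuning algorithm:

```python
import math

def constrained_update(x_pred, p_pred, z, r, lo, hi):
    """One scalar Kalman measurement update with a confidence-weighted
    interval constraint (sketch only).

    Confidence in the unconstrained estimate comes from how well the
    measurement residual agrees with its theoretical variance.
    """
    s = p_pred + r                      # innovation (residual) variance
    k = p_pred / s                      # Kalman gain
    resid = z - x_pred                  # measurement residual
    x_unc = x_pred + k * resid          # unconstrained update
    # A residual consistent with its variance -> trust the unconstrained
    # (theoretically optimal) filter; a large residual -> low confidence.
    confidence = math.exp(-0.5 * resid * resid / s)
    x_con = min(max(x_unc, lo), hi)     # projected (constrained) estimate
    x = confidence * x_unc + (1.0 - confidence) * x_con
    p = (1.0 - k) * p_pred
    return x, p
```

    When the residual matches its prediction exactly, the confidence weight is 1 and the update reduces to the standard unconstrained Kalman step; a residual far outside its expected spread drives the estimate toward the constrained projection.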