Science.gov

Sample records for optimized filtering step

  1. STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2013-03-01

    For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses; the approach is easily adaptable to any search algorithm. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
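
    A minimal sketch of the grid-search idea, assuming hypothetical PSM records scored by two illustrative metrics ("xcorr", "dcn") and a 1% decoy-estimated FDR cap; none of the field names or thresholds come from the paper:

    ```python
    import itertools

    def steps_grid_search(psms, score_ranges, max_fdr=0.01):
        """Try every combination of score thresholds and keep the one that
        maximizes confident (target) identifications at an acceptable
        decoy-estimated false discovery rate."""
        best_params, best_targets = None, 0
        for thresholds in itertools.product(*score_ranges.values()):
            params = dict(zip(score_ranges, thresholds))
            kept = [p for p in psms
                    if all(p[name] >= t for name, t in params.items())]
            decoys = sum(1 for p in kept if p["is_decoy"])
            targets = len(kept) - decoys
            if decoys / max(targets, 1) <= max_fdr and targets > best_targets:
                best_params, best_targets = params, targets
        return best_params, best_targets

    # toy usage: peptide-spectrum matches scored by two metrics
    psms = [{"xcorr": 2.5, "dcn": 0.12, "is_decoy": False},
            {"xcorr": 1.2, "dcn": 0.05, "is_decoy": True}]
    print(steps_grid_search(psms, {"xcorr": [1.0, 2.0], "dcn": [0.05, 0.10]}))
    ```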

  2. Nonlinear optimal semirecursive filtering

    NASA Astrophysics Data System (ADS)

    Daum, Frederick E.

    1996-05-01

    This paper describes a new hybrid approach to filtering, in which part of the filter is recursive but another part is non-recursive. The practical utility of this notion is to reduce computational complexity. In particular, if the non-recursive part of the filter is sufficiently small, then such a filter might be cost-effective to run in real-time with computer technology available now or in the future.

  3. Optimization of integrated polarization filters.

    PubMed

    Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J

    2014-10-01

    This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980

  4. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

    Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has brought the issues into focus and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. A plan for the Option I, bench-scale test program has also been developed based

  5. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system are expected to be superior to those of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. Two types of sheet filter elements were procured and assessed in the program

  6. Optimal rate filters for biomedical point processes.

    PubMed

    McNames, James

    2005-01-01

    Rate filters are used to estimate the mean event rate of many biomedical signals that can be modeled as point processes. Historically these filters have been designed using principles from two distinct fields. Signal processing principles are used to optimize the filter's frequency response. Kernel estimation principles are typically used to optimize the asymptotic statistical properties. This paper describes a design methodology that combines these principles from both fields to optimize the frequency response subject to constraints on the filter's order, symmetry, time-domain ripple, DC gain, and minimum impulse response. Initial results suggest that time-domain ripple and a negative impulse response are necessary to design a filter with a reasonable frequency response. This suggests that some of the common assumptions about the properties of rate filters should be reconsidered. PMID:17282132

  7. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

    This work extends the idea of spectral optimization for the design of Weighted Median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSP's) are as close as possible to an SSP set predetermined by Mallows. Sample selection probabilities have been used as a basis for designing stack smoothers as they give a measure of the filter's detail preserving ability and give non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike the linear filters, however, the weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
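
    As a point of reference, a basic positive-weight weighted median filter can be sketched as follows; the paper's adaptive FIR-derived weights and negative-weight extensions are not reproduced here:

    ```python
    import numpy as np

    def weighted_median(samples, weights):
        """Weighted median: sort the samples, then return the value at which
        the cumulative weight first reaches half of the total weight."""
        order = np.argsort(samples)
        s, w = np.asarray(samples, float)[order], np.asarray(weights, float)[order]
        cum = np.cumsum(w)
        return s[np.searchsorted(cum, 0.5 * cum[-1])]

    def wm_filter(x, weights):
        """Slide a weighted-median window over x (edges left unfiltered)."""
        k = len(weights) // 2
        y = np.array(x, dtype=float)
        for i in range(k, len(x) - k):
            y[i] = weighted_median(x[i - k:i + k + 1], weights)
        return y

    # an impulse is rejected while the step edge is preserved
    x = np.r_[np.zeros(10), np.ones(10)]
    x[5] = 8.0
    print(wm_filter(x, weights=[1, 2, 3, 2, 1]))
    ```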

  8. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108

  9. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

    This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. This assumption is reasonable; a

  10. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance, with the true positive rate increasing for the same average number of false positives per image.
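
    The adaptive-step search can be pictured as a finite-difference gradient ascent over (alpha, beta, gamma); the `score` callable standing in for the composite of correlation peak height and peak-to-sidelobe ratio is purely illustrative, not JPL's code:

    ```python
    import numpy as np

    def adaptive_step_ascent(score, p0, step=0.1, iters=50, h=1e-3):
        """Maximize score(p) over p = (alpha, beta, gamma) with a
        finite-difference gradient and an adaptive step size: grow the
        step after an accepted move, shrink it after a rejected one."""
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            grad = np.zeros_like(p)
            for i in range(len(p)):
                dp = np.zeros_like(p)
                dp[i] = h
                grad[i] = (score(p + dp) - score(p - dp)) / (2 * h)
            candidate = p + step * grad
            if score(candidate) > score(p):
                p, step = candidate, step * 1.2
            else:
                step *= 0.5
        return p

    # toy objective with a known optimum at (1, 2, 3)
    f = lambda p: -np.sum((p - np.array([1.0, 2.0, 3.0])) ** 2)
    print(adaptive_step_ascent(f, [0.0, 0.0, 0.0]))
    ```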

  11. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.

  12. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes filters for these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the

  13. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second variant adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for the different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but it is effective only when particle impoverishment is limited. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
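
    A toy resample-move cycle for a scalar state, assuming a flat prior so the Metropolis acceptance ratio reduces to a likelihood ratio; this sketches the mechanism only, not the authors' hydrological implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def resample_move(particles, weights, loglik, move_std=0.1):
        """Systematic resampling followed by one random-walk
        Metropolis move, which restores particle diversity."""
        n = len(particles)
        positions = (rng.random() + np.arange(n)) / n
        particles = particles[np.searchsorted(np.cumsum(weights), positions)]
        proposal = particles + rng.normal(0.0, move_std, n)
        accept = np.log(rng.random(n)) < loglik(proposal) - loglik(particles)
        particles = np.where(accept, proposal, particles)
        return particles, np.full(n, 1.0 / n)

    loglik = lambda x: -0.5 * (x - 1.0) ** 2    # observation centered at 1.0
    particles = rng.normal(0.0, 1.0, 500)
    weights = np.exp(loglik(particles))
    weights /= weights.sum()
    particles, weights = resample_move(particles, weights, loglik)
    print(particles.mean())                      # pulled toward the observation
    ```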

  14. GNSS data filtering optimization for ionospheric observation

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.

    2015-12-01

    In the last years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS receivers, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are typically filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim of tackling the non-ionospheric effects induced by multipath at ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aiming at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained from the GNSS signal at a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy

  15. Constrained filter optimization for subsurface landmine detection

    NASA Astrophysics Data System (ADS)

    Torrione, Peter A.; Collins, Leslie; Clodfelter, Fred; Lulich, Dan; Patrikar, Ajay; Howard, Peter; Weaver, Richard; Rosen, Erik

    2006-05-01

    Previous large-scale blind tests of anti-tank landmine detection utilizing the NIITEK ground penetrating radar indicated the potential for very high anti-tank landmine detection probabilities at very low false alarm rates for algorithms based on adaptive background cancellation schemes. Recent data collections under more heterogeneous, multi-layered road scenarios seem to indicate that although adaptive solutions to background cancellation are effective, the solutions obtained under different road conditions can differ significantly, and misapplication of these adaptive solutions can reduce landmine detection performance in terms of PD/FAR. In this work we present a framework for the constrained optimization of background-estimation filters that specifically seeks to optimize PD/FAR performance as measured by the area under the ROC curve between two FARs. We also consider the application of genetic algorithms to the problem of filter optimization for landmine detection. Results indicate robust performance for both static and adaptive background cancellation schemes, and possible real-world advantages and disadvantages of static and adaptive approaches are discussed.
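
    The objective named above, area under the ROC curve between two false alarm rates, can be computed along these lines (a sketch; the fielded system's per-area FAR normalization would differ):

    ```python
    import numpy as np

    def partial_auc(confidence, labels, far_lo, far_hi):
        """Area under the ROC curve restricted to [far_lo, far_hi]."""
        order = np.argsort(confidence)[::-1]
        y = np.asarray(labels)[order]
        tpr = np.cumsum(y) / max(y.sum(), 1)
        far = np.cumsum(1 - y) / max((1 - y).sum(), 1)
        band = (far >= far_lo) & (far <= far_hi)
        t, f = tpr[band], far[band]
        return np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2)   # trapezoid rule

    labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # 1 = mine, 0 = clutter
    confidence = np.array([0.9, 0.8, 0.75, 0.6, 0.5, 0.4, 0.3, 0.1])
    print(partial_auc(confidence, labels, 0.0, 0.5))       # 0.25 for this toy set
    ```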

  16. On optimal filtering of measured Mueller matrices

    NASA Astrophysics Data System (ADS)

    Gil, José J.

    2016-07-01

    While any two-dimensional mixed state of polarization of light can be represented by a combination of a pure state and a fully random state, any Mueller matrix can be represented by a convex combination of a pure component and three additional components whose randomness is scaled in a proper and objective way. Such characteristic decomposition constitutes the appropriate framework for the characterization of the polarimetric randomness of the system represented by a given Mueller matrix, and provides criteria for the optimal filtering of noise in experimental polarimetry.

  17. Optimal edge filters explain human blur detection.

    PubMed

    McIlhagga, William H; May, Keith A

    2012-01-01

    Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N1, and N3+) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222

  18. Optimization of phononic filters via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Hussein, M. I.; El-Beltagy, M. A.

    2007-12-01

    A phononic crystal is commonly characterized by its dispersive frequency spectrum. With appropriate spatial distribution of the constituent material phases, spectral stop bands could be generated. Moreover, it is possible to control the number, the width, and the location of these bands within a frequency range of interest. This study aims at exploring the relationship between unit cell configuration and frequency spectrum characteristics. Focusing on 1D layered phononic crystals, and longitudinal wave propagation in the direction normal to the layering, the unit cell features of interest are the number of layers and the material phase and relative thickness of each layer. An evolutionary search for binary- and ternary-phase cell designs exhibiting a series of stop bands at predetermined frequencies is conducted. A specially formulated representation and set of genetic operators that break the symmetries in the problem are developed for this purpose. An array of optimal designs for a range of ratios in Young's modulus and density are obtained and the corresponding objective values (the degrees to which the resulting bands match the predetermined targets) are examined as a function of these ratios. It is shown that a rather complex filtering objective could be met with a high degree of success. Structures composed of the designed phononic crystals are excellent candidates for use in a wide range of applications including sound and vibration filtering.

  19. Metal finishing wastewater pressure filter optimization

    SciTech Connect

    Norford, S.W.; Diener, G.A.; Martin, H.L.

    1992-12-31

    The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated from the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal; 85% of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.

  1. An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers

    SciTech Connect

    Gelb, Anne; Archibald, Richard K

    2015-01-01

    Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high-order filtering still requires cost-prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, while also taking advantage of the accuracy of a high-order filter.
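
    As a fixed (non-adaptive) baseline, spectral filtering of this kind often uses an exponential filter applied to the Fourier modes; the paper's adaptive scheme, which varies the effective filter strength in space and time, is not reproduced in this sketch:

    ```python
    import numpy as np

    def exponential_filter(u, alpha=36.0, p=8):
        """Damp the high Fourier modes of u with sigma(eta) = exp(-alpha*eta^(2p)),
        eta being the mode number normalized to the Nyquist mode."""
        N = len(u)
        k = np.abs(np.fft.fftfreq(N) * N)
        sigma = np.exp(-alpha * (k / (N / 2)) ** (2 * p))
        return np.real(np.fft.ifft(np.fft.fft(u) * sigma))

    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    u = np.sign(np.sin(x))              # piecewise smooth field with jumps
    u_f = exponential_filter(u)
    print(round(u_f[32], 4))            # ~1.0 away from the discontinuities
    ```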

  2. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.

  3. Illumination system design with multi-step optimization

    NASA Astrophysics Data System (ADS)

    Magarill, Simon; Cassarly, William J.

    2015-08-01

    Automatic optimization algorithms can be used when designing illumination systems. For systems with many design variables, optimization using an adjustable set of variables at different steps of the process can provide different local minima. We present a few examples of implementing a multi-step optimization method. We have found that this approach can sometimes lead to more efficient solutions. In this paper we illustrate the effectiveness of using a commercially available optimization algorithm with a slightly modified procedure.

  4. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is the exponent applied to the filtering function; depending on its value, areas are filtered more or less strongly. Several variants have been developed to determine alpha adaptively using different indicators, such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also not clear. As a result, the filter almost always under- or over-filters. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
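
    The non-adaptive core that all of these variants share is a patchwise spectral weighting; a bare-bones sketch with a fixed alpha (and the customary smoothing of the spectral magnitude omitted for brevity):

    ```python
    import numpy as np

    def goldstein_filter(ifg, alpha=0.8, win=32):
        """Weight each patch spectrum by |Z|^alpha: strong spectral peaks
        (the fringe signal) are boosted relative to broadband noise."""
        out = np.zeros_like(ifg, dtype=complex)
        for i in range(0, ifg.shape[0] - win + 1, win):
            for j in range(0, ifg.shape[1] - win + 1, win):
                spec = np.fft.fft2(ifg[i:i + win, j:j + win])
                out[i:i + win, j:j + win] = np.fft.ifft2(spec * np.abs(spec) ** alpha)
        return out

    # toy interferogram: a smooth phase ramp plus phase noise
    rng = np.random.default_rng(1)
    ramp = np.add.outer(np.linspace(0, 3, 64), np.linspace(0, 2, 64))
    ifg = np.exp(1j * (ramp + 0.5 * rng.normal(size=(64, 64))))
    filtered = goldstein_filter(ifg)
    ```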

  5. Optimal filter bandwidth for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
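
    The measurement idea can be mimicked in a few lines: low-pass a photoplethysmogram at a variable corner frequency and compute the AC/DC ratio per channel (the R ratio is then the red-channel value over the infrared one). The synthetic pulse wave below is purely illustrative:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def ac_dc_ratio(ppg, fs, corner_hz):
        """Low-pass the channel at corner_hz, then return its AC/DC ratio."""
        b, a = butter(2, corner_hz / (fs / 2), btype="low")
        y = filtfilt(b, a, ppg)
        return (y.max() - y.min()) / y.mean()

    fs = 100
    t = np.arange(0, 10, 1 / fs)
    # 1 Hz fundamental (60 bpm) plus a second harmonic
    ppg = 1.0 + 0.05 * np.sin(2 * np.pi * t) + 0.02 * np.sin(4 * np.pi * t)
    for corner in (15.0, 5.0, 1.2):        # progressively narrower filters
        print(corner, round(ac_dc_ratio(ppg, fs, corner), 4))
    ```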

  6. Novel Compact Ultra-Wideband Bandpass Filter by Application of Short-Circuited Stubs and Stepped-Impedance-Resonator

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Ping; Ma, Zhewang; Anada, Tetsuo

    To realize compact ultra-wideband (UWB) bandpass filters, a novel filter prototype with two short-circuited stubs loaded at both sides of a stepped-impedance resonator (SIR) via parallel coupled lines is proposed based on a distributed filter synthesis theory. The equivalent circuit of this filter is established, and the corresponding 7-pole Chebyshev-type transfer function is derived for filter synthesis. A distributed-circuit-based technique is then presented to synthesize the element values of this filter. As an example, an FCC UWB filter with a fractional bandwidth (FBW) at -10 dB of up to 110% was designed using the proposed prototype and then re-modeled in a commercial microwave circuit simulator to verify the correctness and accuracy of the synthesis theory. Furthermore, using an EM simulator, the filter was further optimized and experimentally realized in microstrip line. Good agreement between the measured and theoretical results validates the effectiveness of our technique. In addition, compared with the conventional SIR-type UWB filter without short-circuited stubs, the new one significantly improves the selectivity and out-of-band characteristics (especially in the lower band, -45 dB at 1-2 GHz) to satisfy the FCC's spectrum mask. The designed filter also exhibits very compact size, quite low insertion loss, steep skirts, flat group delay and an easily fabricated structure (the coupling gap dimension in this filter is 0.15 mm). Moreover, in terms of the presented design technique, the proposed filter prototype can also be used to easily realize UWB filters with other FBWs even greater than 110%.

  7. Optimal Gain Filter Design for Perceptual Acoustic Echo Suppressor

    NASA Astrophysics Data System (ADS)

    Kim, Kihyeon; Ko, Hanseok

    This Letter proposes an optimal gain filter for the perceptual acoustic echo suppressor. We designed an optimally-modified log-spectral amplitude estimation algorithm for the gain filter in order to achieve robust suppression of echo and noise. A new parameter including information about interferences (echo and noise) of single-talk duration is statistically analyzed, and then the speech absence probability and the a posteriori SNR are judiciously estimated to determine the optimal solution. The experiments show that the proposed gain filter attains a significantly improved reduction of echo and noise with less speech distortion.

  8. Entropy-based optimization of wavelet spatial filters.

    PubMed

    Farina, Dario; Kamavuako, Ernest Nlandu; Wu, Jian; Naddeo, Francesco

    2008-03-01

    A new class of spatial filters for surface electromyographic (EMG) signal detection is proposed. These filters are based on the 2-D spatial wavelet decomposition of the surface EMG recorded with a grid of electrodes and inverse transformation after zeroing a subset of the transformation coefficients. The filter transfer function depends on the selected mother wavelet in the two spatial directions. Wavelet parameterization is proposed with the aim of signal-based optimization of the transfer function of the spatial filter. The optimization criterion was the minimization of the entropy of the time samples of the output signal. The optimized spatial filter is linear and space invariant. In simulated and experimental recordings, the optimized wavelet filter showed increased selectivity with respect to previously proposed filters. For example, in simulation, the ratio between the peak-to-peak amplitude of action potentials generated by motor units 20 degrees apart in the transversal direction was 8.58% (with monopolar recording), 2.47% (double differential), 2.59% (normal double differential), and 0.47% (optimized wavelet filter). In experimental recordings, the duration of the detected action potentials decreased from (mean +/- SD) 6.9 +/- 0.3 ms (monopolar recording) to 4.5 +/- 0.2 ms (normal double differential), 3.7 +/- 0.2 ms (double differential), and 3.0 +/- 0.1 ms (optimized wavelet filter). In conclusion, the new class of spatial filters with the proposed signal-based optimization of the transfer function allows better discrimination of individual motor unit activities in surface EMG recordings than was previously possible. PMID:18334382

  9. Optimization-based tuning of LPV fault detection filters for civil transport aircraft

    NASA Astrophysics Data System (ADS)

    Ossmann, D.; Varga, A.

    2013-12-01

    In this paper, a two-step optimal synthesis approach of robust fault detection (FD) filters for the model based diagnosis of sensor faults for an augmented civil aircraft is suggested. In the first step, a direct analytic synthesis of a linear parameter varying (LPV) FD filter is performed for the open-loop aircraft using an extension of the nullspace based synthesis method to LPV systems. In the second step, a multiobjective optimization problem is solved for the optimal tuning of the LPV detector parameters to ensure satisfactory FD performance for the augmented nonlinear closed-loop aircraft. Worst-case global search has been employed to assess the robustness of the fault detection system in the presence of aerodynamics uncertainties and estimation errors in the aircraft parameters. An application of the proposed method is presented for the detection of failures in the angle-of-attack sensor.

  10. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
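
    The additive bookkeeping that makes an information filter attractive for combining mini-batch models can be sketched as follows (a generic scalar fusion example, not the geomagnetic code):

    ```python
    import numpy as np

    def information_update(Y, y, H, R, z):
        """Measurement update in information form: the information matrix
        Y = P^-1 and vector y = P^-1 x simply accumulate, so independent
        mini-batch models can be fused one after another."""
        Ri = np.linalg.inv(R)
        return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

    # fuse two independent estimates of the same scalar state
    Y, y = np.zeros((1, 1)), np.zeros(1)
    for estimate, variance in [(10.2, 0.5), (9.8, 0.25)]:
        Y, y = information_update(Y, y, np.eye(1), np.array([[variance]]),
                                  np.array([estimate]))
    print(np.linalg.solve(Y, y))   # information-weighted combination
    ```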

  11. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
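
    A bare affine projection update with a scalar step size illustrates the structure; the paper's contribution, an MSD-minimizing step-size vector derived from channel statistics, is not reproduced here, and the toy identification loop is illustrative:

    ```python
    import numpy as np

    def apa_update(w, X, d, mu=0.5, eps=1e-6):
        """One affine projection step: X holds the K most recent input
        regressors (rows) and d the matching desired samples."""
        e = d - X @ w
        return w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(len(d)), e), e

    rng = np.random.default_rng(2)
    h_true = np.array([0.5, -0.3, 0.1])            # unknown channel
    x = rng.normal(size=2000)
    w = np.zeros(3)
    for n in range(4, len(x)):
        X = np.array([x[n - k - np.arange(3)] for k in range(2)])   # K = 2
        d = X @ h_true + 0.01 * rng.normal(size=2)
        w, _ = apa_update(w, X, d)
    print(np.round(w, 3))                          # close to h_true
    ```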

  12. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
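
    For context, the classic Kaiser-Hamming sharpening applies the polynomial p(H) = 3H^2 - 2H^3 to the comb response, whereas the paper derives lower-degree polynomials by optimization after a simple compensation. A small numeric check of the classic scheme:

    ```python
    import numpy as np

    def comb_response(f, M=16, K=2):
        """Magnitude response of a K-stage comb filter for decimation by M."""
        return (np.sin(np.pi * f * M) / (M * np.sin(np.pi * f))) ** K

    f = np.linspace(1e-4, 0.01, 200)        # passband frequencies (cycles/sample)
    H = comb_response(f)
    H_sharp = 3 * H**2 - 2 * H**3           # Kaiser-Hamming sharpening
    print(round(20 * np.log10(H[-1]), 2))         # about -0.73 dB droop
    print(round(20 * np.log10(H_sharp[-1]), 2))   # about -0.16 dB droop
    ```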

  13. Bayes optimal template matching for spike sorting - combining fisher discriminant analysis with optimal filtering.

    PubMed

    Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus

    2015-06-01

    Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
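
    The filter described, w proportional to C^-1 t for template t and noise covariance C, together with its analytic threshold, can be sketched directly; the toy template and noise level below are illustrative:

    ```python
    import numpy as np

    def matched_filter(template, noise_cov):
        """Optimal linear filter for a known template in colored Gaussian
        noise, with an analytic threshold halfway between the expected
        filter output on noise alone (0) and on a spike (t' C^-1 t)."""
        w = np.linalg.solve(noise_cov, template)
        return w, 0.5 * template @ w

    rng = np.random.default_rng(3)
    t = np.exp(-0.5 * (np.arange(20) - 8) ** 2 / 4.0)   # toy spike waveform
    w, thr = matched_filter(t, 0.05 * np.eye(20))
    trace = rng.normal(0.0, np.sqrt(0.05), 500)
    trace[100:120] += t                                  # embed one spike
    out = np.convolve(trace, w[::-1], mode="valid")      # filter output
    print(np.flatnonzero(out > thr))                     # detections cluster near 100
    ```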

  14. Laboratory experiment of a coronagraph based on step-transmission filters

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Zhu, Yongtian; Ren, Deqing; Zhang, Xi

    2008-07-01

    This paper presents the first results of a step-transmission-filter based coronagraph in the visible wavelengths. The primary goal of this work is to demonstrate the feasibility of the coronagraph that employs step-transmission filters, with a required contrast on the order of 10^-5 or better at an angular distance larger than 4λ/D. Two 13-step transmission filters were manufactured with 5% transmission accuracy. The precision of the transmitted wave distortion and the coating surface quality were not strictly controlled at this time. Although in the perfect case the coronagraph can achieve a theoretical contrast of 10^-10, it only delivers 10^-5 contrast because of the transmission error, poor surface quality and wave-front aberration stated above, consistent with our estimates. Based on current techniques, step-transmission filters with better coating surface quality and high-precision transmission can be made. As a follow-up effort, high-quality step-transmission filters are being manufactured, which should deliver better performance. The step-transmission-filter based coronagraph has potential applications for future high-contrast direct imaging of earth-like planets.

  15. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
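
    The selection idea can be illustrated with SciPy's generalized symmetric eigensolver: directions where prior uncertainty is large relative to measurement noise carry the most information. The matrices H, P and R below are arbitrary stand-ins, and this is only one plausible reading of the method:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def informative_subspace(H, P, R, k):
        """Rank measurement combinations by the generalized eigenvalues of
        (H P H^T, R) and keep the k most informative ones."""
        vals, vecs = eigh(H @ P @ H.T, R)
        return vecs[:, np.argsort(vals)[::-1][:k]]

    rng = np.random.default_rng(4)
    H = rng.normal(size=(4, 3))             # 4 measurements of a 3-state system
    P = np.diag([1.0, 0.5, 0.1])            # prior state covariance
    R = np.diag([0.1, 0.2, 0.1, 0.4])       # measurement noise covariance
    T = informative_subspace(H, P, R, k=2)  # project 4 measurements down to 2
    print(T.shape)                          # (4, 2)
    ```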

  16. Optimal filtering methods to structural damage estimation under ground excitation.

    PubMed

    Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

    2013-01-01

    This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

  17. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
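
    Parameterizing directly in the z-plane might look like the following sketch: pole and zero radii are restricted to less than one, so every candidate is a stable, minimum-phase recursive filter by construction; the numeric values are arbitrary:

    ```python
    import numpy as np
    from scipy.signal import zpk2tf, freqz

    def candidate_filter(zero_r, zero_th, pole_r, pole_th, fs):
        """Second-order recursive filter from conjugate zero/pole pairs;
        radii < 1 enforce stability and minimum phase by construction."""
        z = zero_r * np.exp(1j * np.array([zero_th, -zero_th]))
        p = pole_r * np.exp(1j * np.array([pole_th, -pole_th]))
        b, a = zpk2tf(z, p, k=1.0)
        return freqz(b.real, a.real, worN=512, fs=fs)

    # evaluate gain and phase near a notional 2 Hz bending frequency
    w, h = candidate_filter(0.95, 0.8, 0.9, 0.3, fs=50.0)
    i = np.argmin(np.abs(w - 2.0))
    print(round(abs(h[i]), 3), round(np.angle(h[i], deg=True), 1))
    ```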

  18. Single-channel noise reduction using optimal rectangular filtering matrices.

    PubMed

    Long, Tao; Chen, Jingdong; Benesty, Jacob; Zhang, Zhenxi

    2013-02-01

    This paper studies the problem of single-channel noise reduction in the time domain and presents a block-based approach where a vector of the desired speech signal is recovered by filtering a frame of the noisy signal with a rectangular filtering matrix. With this formulation, the noise reduction problem becomes one of estimating an optimal filtering matrix. To achieve such estimation, a method is introduced to decompose a frame of the clean speech signal into two orthogonal components: One correlated and the other uncorrelated with the current desired speech vector to be estimated. Different optimization cost functions are then formulated from which non-causal optimal filtering matrices are derived. The relationships among these optimal filtering matrices are discussed. In comparison with the classical sample-based technique that uses only forward prediction, the block-based method presented in this paper exploits both the forward and backward prediction as well as the temporal interpolation and, therefore, can improve the noise reduction performance by fully taking advantage of the speech property of self correlation. There is also a side advantage of this block-based method as compared to the sample-based technique, i.e., it is computationally more efficient and, as a result, more suitable for practical implementation. PMID:23363124

  19. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied at the initial level to the original input images and afterwards, the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we identify the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
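
    The proposed pre-processing reduces to Gaussian smoothing with a sigma tied to image intensity; the linear mapping below is a hypothetical stand-in for the intensity/standard-deviation correlation fitted in the paper (its actual coefficients are not given here):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def prefilter_for_lk(frame):
        """Smooth a frame before pyramidal Lucas-Kanade, choosing sigma
        from the frame's mean intensity (illustrative mapping)."""
        sigma = 0.5 + 0.01 * float(frame.mean())
        return gaussian_filter(frame.astype(float), sigma=sigma), sigma

    rng = np.random.default_rng(5)
    frame = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
    smoothed, sigma = prefilter_for_lk(frame)
    print(round(sigma, 3))
    ```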

  20. Sub-Optimal Ensemble Filters and distributed hydrologic modeling: a new challenge in flood forecasting

    NASA Astrophysics Data System (ADS)

    Baroncini, F.; Castelli, F.

    2009-09-01

    Data assimilation techniques based on ensemble filtering are widely regarded as the best approach for solving forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas used in each time step of the model for updating the error covariance matrix. Therefore the sub-optimal approach seems to be a more suitable choice. Various sub-optimal techniques have been tested in atmospheric and oceanographic models; some of them are based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different sub-optimal techniques. Those aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables and high parametric uncertainty. This research study focuses on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both a water mass balance and a surface energy balance, so it is able to assimilate a wide variety of datasets, from traditional hydrometric "on ground" measurements to land surface temperature retrievals from satellite. The experiments we present concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low spatial resolution version of the hydrologic model (3.2 km). A new Kalman-filter-based algorithm is presented: this filter tries to address the main challenges of hydrologic modeling uncertainty. In the forecast step, the proposed filter uses a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach with propagation of both deterministic and stochastic ensembles to improve robustness and convergence.

  1. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  2. Optimization of the development process for air sampling filter standards

    NASA Astrophysics Data System (ADS)

    Mena, RaJah Marie

    Air monitoring is an important analysis technique in health physics. However, creating standards that can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood; this includes understanding how the location of activity within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters that most affect uncertainty in an air filter standard and to optimize those parameters so that calibrations made with the standards most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to best approximate uniform deposition of material across the filter. Sample sets were created varying the radionuclide, the amount of activity (high activity at 6.4-306 Bq/filter and low activity at 0.05-6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors, especially for alpha contaminants, to ensure this is a viable procedure.

  3. Optimal Correlation Filters for Images with Signal-Dependent Noise

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Walkup, John F.

    1994-01-01

    We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
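
    The second approach is simple to sketch in one dimension: a logarithm turns multiplicative, speckle-like noise into approximately additive noise, after which a classical matched filter applies. The toy signal and noise statistics below are our own assumptions, not the film-grain or speckle models of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1-D scene: a rectangular pulse observed under multiplicative
    # (speckle-like) noise, i.e. signal-dependent noise
    signal = np.zeros(256)
    signal[100:120] = 5.0
    speckle = rng.exponential(scale=1.0, size=signal.size)
    observed = (1.0 + signal) * speckle

    # Approach (2): log transform makes the noise (approximately) additive,
    # then a classical matched filter is applied to the transformed signal
    template = np.log(1.0 + signal[95:125])      # local template around the pulse
    template = template - template.mean()
    corr = np.correlate(np.log(observed), template, mode="same")
    print("estimated pulse centre:", int(np.argmax(corr)))  # near sample 110
    ```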

  4. Optimization of narrow optical spectral filters for nonparallel monochromatic radiation.

    PubMed

    Linder, S L

    1967-07-01

    This paper delineates a method of determining the design criteria for narrow optical passband filters used in the reception of nonparallel modulated monochromatic radiation. The analysis results in straightforward mathematical expressions for calculating the filter width and design center wavelength which maximize the signal-to-noise ratio. Two cases are considered: (a) the filter is designed to have a maximum transmission (for normal incidence) at the incident wavelength, but with the spectral width optimized, and (b) both the design wavelength and the spectral width are optimized. It is shown that the voltage signal-to-noise ratio for case (b) is 2^(1/2) times that of case (a). Numerical examples are calculated. PMID:20062163

  5. OPDIC (Optimized Peak, Distortion and Clutter) Detection Filter.

    NASA Astrophysics Data System (ADS)

    House, Gregory Philip

    1995-01-01

    Detection is considered. This involves determining regions of interest (ROIs) in a scene: the locations of multiple object classes in a scene in clutter when object distortions and contrast differences are present. A high probability of detection P_D is essential and a low probability of false alarm P_FA is desirable, since subsequent stages in the full system can only decrease P_FA and cannot increase P_D. Low-resolution blob objects and objects with more internal detail are considered, with both 3-D aspect-view and depression-angle distortions present. Extensive tests were conducted on 56 scenes with object classes not present in the training set. A modified MINACE (Minimum Noise and Correlation Energy) distortion-invariant filter was used. This minimizes correlation-plane energy due to distortions and clutter while satisfying correlation-peak constraint values for various object-aspect views. The filter was modified with a new object model (to give predictable output peak values) and a new correlated-noise clutter model; a white Gaussian noise model of distortion was used; and new techniques were developed to increase the number of training-set images (N_T) included in the filter. Excellent results were obtained. However, the correlation-plane distortion and clutter energy functions were found to become worse as N_T was increased, and no rigorous method exists to select the best N_T (when to stop filter synthesis). A new OPDIC (Optimized Peak, Distortion, and Clutter) filter was thus devised. This filter retained the new object, clutter and distortion models noted. It minimizes the variance of the correlation-peak values for all training-set images (not just the N_T images). As N_T increases, the peak variance and the objective functions (correlation-plane distortion and clutter energy) are all minimized. Thus, this new filter optimizes the desired functions and provides an easy way to stop filter synthesis (when the objective function is minimized). Tests show

  6. Improved step-by-step chromaticity compensation method for chromatic sextupole optimization

    NASA Astrophysics Data System (ADS)

    Gang-Wen, Liu; Zheng-He, Bai; Qi-Ka, Jia; Wei-Min, Li; Lin, Wang

    2016-05-01

    The step-by-step chromaticity compensation method for chromatic sextupole optimization and dynamic aperture increase was proposed by E. Levichev and P. Piminov (2006). Although this method can be used to enlarge the dynamic aperture of a storage ring, it has some drawbacks. In this paper, we combine this method with evolutionary computation algorithms and propose an improved version of the method. In the improved method, the drawbacks are avoided, and thus better optimization results can be obtained. Supported by the National Natural Science Foundation of China (11175182, 11175180)

  7. Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

    NASA Astrophysics Data System (ADS)

    Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

    In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image-processing techniques. Edge enhancement is an image-processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of discontinuities in the image intensity function. For processing these discontinuities in an image, a good edge-enhancement technique is essential. The proposed work uses a new idea for edge enhancement based on hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image-processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
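
    As a sketch of the swarm search, the following uses plain PSO with continuous positions rounded to discrete filter indices (one common way to apply PSO to sequence selection; the paper's ABC/ACO variants and its exact filter set are not reproduced). The fitness function, filters and parameters are our own illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)

    # Candidate building-block filters (simple, representative smoothers)
    FILTERS = [
        lambda im: im,                                   # identity
        lambda im: ndimage.gaussian_filter(im, 1.0),
        lambda im: ndimage.median_filter(im, 3),
        lambda im: ndimage.uniform_filter(im, 3),
    ]

    def apply_sequence(seq, image):
        for idx in seq:
            image = FILTERS[idx](image)
        return image

    # Noisy test image: a bright square (strong edges) plus Gaussian noise
    clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
    noisy = clean + rng.normal(0, 0.3, clean.shape)

    def fitness(p):
        seq = np.clip(np.rint(p), 0, len(FILTERS) - 1).astype(int)
        smoothed = apply_sequence(seq, noisy)
        gx, gy = np.gradient(smoothed)
        # Reward edge energy, penalize residual noise in a flat corner region
        return float(np.mean(gx**2 + gy**2)) - 4.0 * smoothed[:10, :10].var()

    # Plain PSO over a length-3 filter sequence
    n_particles, length, iters = 20, 3, 40
    pos = rng.uniform(0, len(FILTERS) - 1, (n_particles, length))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, len(FILTERS) - 1)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    print("best filter sequence:", np.rint(gbest).astype(int))
    ```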

  8. Degeneracy, frequency response and filtering in IMRT optimization

    NASA Astrophysics Data System (ADS)

    Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus

    2004-07-01

    This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.

  9. Optimal color image restoration: Wiener filter and quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, we consider the model of quaternion signal degradation in which the signal is convolved and additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative, and the quaternion Fourier transform does not map convolution to multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise, and the optimal filtering of signals and images in the frequency domain and in the quaternion space.
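
    For reference, the commutative (real/complex) case that the quaternion analysis generalizes is compact enough to sketch. The toy 1-D signal, blur kernel, and the assumption of known signal and noise power spectra are illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Degradation model: y = h * x + n (circular convolution + additive noise)
    n = 256
    x = np.sin(2 * np.pi * np.arange(n) / 32.0)          # original signal
    h = np.zeros(n); h[:9] = 1.0 / 9.0                   # blur kernel
    noise_var = 0.01
    y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
    y += rng.normal(0.0, np.sqrt(noise_var), n)

    # Classical Wiener filter in the frequency domain:
    #   W = conj(H) Pxx / (|H|^2 Pxx + Pnn), optimal in the mean-square sense
    H = np.fft.fft(h)
    Pxx = np.abs(np.fft.fft(x))**2 / n       # signal power spectrum (assumed known)
    Pnn = noise_var * np.ones(n)             # white-noise power spectrum
    W = np.conj(H) * Pxx / (np.abs(H)**2 * Pxx + Pnn)
    x_hat = np.real(np.fft.ifft(W * np.fft.fft(y)))
    print("restoration MSE:", float(np.mean((x_hat - x)**2)))
    ```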

  10. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
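
    For a single baseline, the geometric picture above reduces to weighting the visibility time series in fringe-rate (temporal Fourier) space. Below is a minimal sketch with an invented two-source toy signal; the rates, band and sampling are illustrative assumptions.

    ```python
    import numpy as np

    # Toy visibility time series: two "sources" with different fringe rates
    dt = 10.0                                    # seconds between samples
    t = np.arange(512) * dt
    vis = (np.exp(2j * np.pi * 1.2e-3 * t)       # source at fringe rate 1.2 mHz
           + np.exp(2j * np.pi * 0.2e-3 * t))    # source at 0.2 mHz

    # Fringe-rate filtering: re-weight the data in fringe-rate space
    rates = np.fft.fftfreq(t.size, d=dt)         # fringe rates in Hz
    spectrum = np.fft.fft(vis)
    keep = (rates > 0.8e-3) & (rates < 1.6e-3)   # band selecting the first source
    filtered = np.fft.ifft(spectrum * keep)
    print("fraction of power kept:",
          float(np.mean(np.abs(filtered)**2) / np.mean(np.abs(vis)**2)))
    ```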

  11. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling, and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time t_{k+1} that depend on the state at t_k and the observations at time t_{k+1}. I show that, beginning with particles drawn randomly from the conditional distribution of the state at t_k given observations through t_k, the optimal proposal (the distribution of the state at t_{k+1} given the state at t_k and the observations at t_{k+1}) minimizes the variance of the importance weights for particles at t_k over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
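
    In the linear-Gaussian case the optimal proposal is available in closed form, which makes it easy to illustrate. The sketch below (our own toy scalar model, not the author's code) draws particles from p(x_{k+1} | x_k, y_{k+1}) and weights them by p(y_{k+1} | x_k).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    a, q, r = 0.9, 0.5, 0.25        # transition coefficient, process/obs. variances
    n_steps, n_particles = 50, 200

    # Simulate a scalar linear-Gaussian system: x' = a x + w,  y = x' + v
    x, xs, ys = 0.0, [], []
    for _ in range(n_steps):
        x = a * x + rng.normal(0, np.sqrt(q))
        xs.append(x); ys.append(x + rng.normal(0, np.sqrt(r)))

    particles, estimates = np.zeros(n_particles), []
    for y in ys:
        # Optimal proposal p(x_{k+1} | x_k, y_{k+1}): Gaussian in this model
        var = 1.0 / (1.0 / q + 1.0 / r)
        mean = var * (a * particles / q + y / r)
        # Importance weights under the optimal proposal: p(y_{k+1} | x_k)
        w = np.exp(-0.5 * (y - a * particles) ** 2 / (q + r))
        w /= w.sum()
        particles = mean + rng.normal(0, np.sqrt(var), n_particles)
        # Resample (offspring keep their parent's weight) to fight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())
    err = np.array(estimates) - np.array(xs)
    print("filter RMSE:", float(np.sqrt(np.mean(err**2))))
    ```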

  12. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data. PMID:26849867

  13. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
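
    The iterative two-step procedure is easy to sketch for the two-echo case: for each candidate pair of delays the amplitudes follow from an overdetermined least-squares fit, and the delay grid is scanned for the global residual minimum. Frequencies, delays and the noise level below are illustrative assumptions, not the measured HP-8510 data.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)

    # Frequency-stepped CW measurements: y(f) = sum_m a_m exp(-j 2 pi f tau_m)
    freqs = 2e9 + 10e6 * np.arange(64)             # 64 steps of 10 MHz from 2 GHz
    true_tau, true_amp = np.array([10e-9, 11.5e-9]), np.array([1.0, 0.7])
    E = np.exp(-2j * np.pi * np.outer(freqs, true_tau))
    y = E @ true_amp + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))

    # Step 1 (inner): least-squares amplitudes for fixed delays
    # Step 2 (outer): organized scan of the delay grid for the global minimum
    grid = np.arange(5e-9, 20e-9, 0.1e-9)
    best = (np.inf, None, None)
    for taus in combinations(grid, 2):
        A = np.exp(-2j * np.pi * np.outer(freqs, taus))
        amp, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = np.linalg.norm(y - A @ amp)
        if resid < best[0]:
            best = (resid, taus, amp)
    print("delays (ns):", np.array(best[1]) * 1e9, "amplitudes:", np.abs(best[2]))
    ```

    Note that the true delay separation here (1.5 ns) is below the IFFT resolution of the 640 MHz sweep (about 1.6 ns), which is the regime where the optimization approach pays off.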

  14. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.

  15. System-level optimization of baseband filters for communication applications

    NASA Astrophysics Data System (ADS)

    Delgado-Restituto, Manuel; Fernandez-Bootello, Juan F.; Rodriguez-Vazquez, Angel

    2003-04-01

    In this paper, we present a design approach for the high-level synthesis of programmable continuous-time Gm-C and active-RC filters with an optimum trade-off among dynamic range, distortion-product generation, area consumption and power dissipation, thus meeting the needs of more demanding baseband filter realizations. Further, the proposed technique guarantees that under all programming configurations, transconductors (in Gm-C filters) and resistors (in active-RC filters), as well as capacitors, are related by integer ratios in order to reduce the sensitivity to mismatch of the monolithic implementation. In order to solve the aforementioned trade-off, the filter must be properly scaled at each configuration. This means that filter node impedances must be conveniently altered so that the noise contribution of each node to the filter output is as low as possible, while avoiding peak amplitudes at such nodes high enough to drive the active circuits into saturation. Additionally, in order not to degrade the distortion performance of the filter (in particular, if it is implemented using Gm-C techniques), node impedances cannot be scaled independently of each other; restrictions must be imposed according to the principle of nonlinear cancellation. Altogether, the high-level synthesis can be seen as a constrained optimization problem where some of the variables, namely the ratios among similar components, are restricted to discrete values. The proposed approach to accomplishing optimum filter scaling under all programming configurations relies on matrix methods for network representation, which allow an easy estimation of performance features such as dynamic range and power dissipation, as well as other network properties such as sensitivity to parameter variations and non-ideal effects of integrator blocks; and on the use of a simulated annealing algorithm to explore the design space defined by the transfer and group delay specifications. It must be noted that such

  16. A high-contrast imaging polarimeter with a stepped-transmission filter based coronagraph

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Chao; Ren, De-Qing; Zhu, Yong-Tian; Dou, Jiang-Pei; Guo, Jing

    2016-05-01

    The light reflected from planets is polarized, mainly due to Rayleigh scattering, whereas starlight is normally unpolarized. This provides an approach to enhancing imaging contrast through imaging polarimetry. In this paper, we propose a high-contrast imaging polarimeter that is optimized for the direct imaging of exoplanets, combined with our recently developed stepped-transmission filter based coronagraph. Here we present the design and calibration method of the polarimetry system and the associated test of its high-contrast performance. In this polarimetry system, two liquid crystal variable retarders (LCVRs) act as a polarization modulator, which can extract the polarized signal. We show that our polarimeter can achieve a measurement accuracy of about 0.2% at a visible wavelength (632.8 nm) with linearly polarized light. Finally, the whole system demonstrates that a contrast of 10^-9 at 5λ/D is achievable, which can be used for direct imaging of Jupiter-like planets with a space telescope.

  17. Laboratory experiment of a high-contrast imaging coronagraph with new step-transmission filters

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi

    2009-08-01

    We present the latest results of our laboratory experiment on the coronagraph with step-transmission filters. The primary goal of this work is to test the stability of the coronagraph and identify the main factors that limit its performance. At present, a series of step-transmission filters has been designed. These filters were manufactured with a Cr film on a glass substrate with high surface quality. During the experiments with each filter, we identified several contrast-limiting factors, which include the non-symmetry of the coating film, transmission error, scattered light and the optical aberration caused by the thickness difference of the coating film. To eliminate these factors, we developed a procedure for the correct test of the coronagraph, and it finally delivered a contrast on the order of 10^-6 to 10^-7 at an angular distance of 4λ/D, which is well consistent with the theoretical design. As a follow-up effort, a deformable mirror has been manufactured to correct the wave-front error of the optical system, which should deliver better performance with an extra contrast improvement on the order of 10^-2 to 10^-3. It is shown that the step-transmission filter based coronagraph is promising for the high-contrast imaging of earth-like planets.

  18. A Neural Network-Based Optimal Spatial Filter Design Method for Motor Imagery Classification

    PubMed Central

    Yuksel, Ayhan; Olmez, Tamer

    2015-01-01

    In this study, a novel spatial filter design method is introduced. Spatial filtering is an important processing step for feature extraction in motor imagery-based brain-computer interfaces. This paper introduces a new motor imagery signal classification method combined with spatial filter optimization. We simultaneously train the spatial filter and the classifier using a neural network approach. The proposed spatial filter network (SFN) is composed of two layers: a spatial filtering layer and a classifier layer. These two layers are linked to each other with non-linear mapping functions. The proposed method addresses two shortcomings of the common spatial patterns (CSP) algorithm. First, CSP aims to maximize the between-classes variance while ignoring the minimization of within-classes variances. Consequently, the features obtained using the CSP method may have large within-classes variances. Second, the maximizing optimization function of CSP increases the classification accuracy indirectly because an independent classifier is used after the CSP method. With SFN, we aimed to maximize the between-classes variance while minimizing within-classes variances and simultaneously optimizing the spatial filter and the classifier. To classify motor imagery EEG signals, we modified the well-known feed-forward structure and derived forward and backward equations that correspond to the proposed structure. We tested our algorithm on simple toy data. Then, we compared the SFN with conventional CSP and its multi-class version, called one-versus-rest CSP, on two data sets from BCI competition III. The evaluation results demonstrate that SFN is a good alternative for classifying motor imagery EEG signals with increased classification accuracy. PMID:25933101
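
    A minimal sketch of the two-layer idea follows, assuming a log-variance non-linear mapping between the spatial-filtering layer and the classifier layer (a common choice for motor-imagery features; the actual SFN mapping, network structure and training details may differ) and synthetic two-class trials in place of EEG data.

    ```python
    import torch

    torch.manual_seed(0)

    # Toy two-class "EEG": class 1 has higher variance on source 0 before mixing
    n_trials, n_channels, n_samples = 100, 4, 128
    sources = torch.randn(n_trials, n_channels, n_samples)
    labels = (torch.arange(n_trials) % 2).float()
    sources[labels == 1, 0] *= 2.0                    # class-dependent variance
    mixing = torch.randn(n_channels, n_channels)
    trials = torch.einsum('ij,bjt->bit', mixing, sources)

    # SFN-style model: spatial filtering layer + classifier layer, linked by a
    # non-linear (log-variance) mapping and trained jointly
    spatial = torch.nn.Parameter(torch.randn(2, n_channels) * 0.1)
    classifier = torch.nn.Linear(2, 1)
    opt = torch.optim.Adam([spatial, *classifier.parameters()], lr=0.05)

    for epoch in range(200):
        filtered = torch.einsum('fc,bct->bft', spatial, trials)  # spatial filters
        feats = torch.log(filtered.var(dim=2) + 1e-6)            # non-linear map
        logits = classifier(feats).squeeze(1)
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()

    acc = ((logits > 0).float() == labels).float().mean()
    print(f"training accuracy: {acc.item():.2f}")
    ```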

  1. An optimization-based parallel particle filter for multitarget tracking

    NASA Astrophysics Data System (ADS)

    Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.

    2005-09-01

    Particle filter based estimation is becoming more popular because it has the capability to effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements, and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, typically an augmented state vector of target dynamics is used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is needed to achieve real-time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for a bus-connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication-intensive parallel implementation of the particle filter without sacrificing tracking accuracy, using an efficient load-balancing technique in which optimal particle migration is ensured. In this paper, we present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved using parallelization.

  2. Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2008-01-01

    The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained, including available documentation, potential partner collaborations, and next steps in FY09.

  3. Novel two-step filtering scheme for a logging-while-drilling system

    NASA Astrophysics Data System (ADS)

    Zhao, Qingjie; Zhang, Baojun; Hu, Huosheng

    2009-09-01

    A logging-while-drilling (LWD) system is usually deployed in the oil drilling process in order to provide real-time monitoring of the position and orientation of a hole. Encoded signals, including the data coming from down-hole sensors, are inevitably contaminated during their collection and transmission to the surface. Before decoding the signals into different physical parameters, the noise should be filtered out to guarantee that correct parameter values can be acquired. In this paper, according to the characteristics of LWD signals, we propose a novel two-step filtering scheme in which a dynamic part-mean filtering algorithm is proposed to separate the direct-current components and a windowed finite impulse response (FIR) algorithm is deployed to filter out the high-frequency noise. The scheme has been integrated into the surface processing software and the whole LWD system for horizontal well drilling. Some experimental results are presented to show the feasibility and good performance of the proposed two-step filtering scheme.
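
    A minimal sketch of such a two-step scheme on synthetic data: a plain moving average stands in for the paper's dynamic part-mean filter, followed by a Hamming-windowed FIR low-pass. The signal model and all parameters are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(6)

    # Synthetic encoded LWD signal: square-wave data pulses on a drifting DC
    # level, plus broadband noise and a high-frequency pump tone
    t = np.arange(0, 60.0, 0.01)                     # 100 Hz sampling
    pulses = signal.square(2 * np.pi * 0.5 * t)      # 0.5 Hz encoded data
    drift = 2.0 + 0.05 * t                           # direct-current component
    noisy = (drift + pulses + 0.4 * rng.normal(size=t.size)
             + 0.5 * np.sin(2 * np.pi * 12.0 * t))   # 12 Hz interference

    # Step 1: estimate and remove the DC component with a running mean
    window = 401                                     # ~4 s moving average
    dc = np.convolve(noisy, np.ones(window) / window, mode='same')
    detrended = noisy - dc

    # Step 2: windowed FIR low-pass to strip the high-frequency noise
    taps = signal.firwin(numtaps=129, cutoff=2.0, fs=100.0, window='hamming')
    clean = signal.lfilter(taps, [1.0], detrended)
    ```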

  4. "The Design of a Compact, Wide Spurious-Suppression Bandwidth Bandpass Filter Using Stepped Impedance Resonators"

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter's length is greater than a multiple of the quarter wavelength at the center passband frequency (λ_g/4), the proposed filter's length is less than (nth order + 1)·λ_g/8. The filter's spurious response and physical dimension are controlled by the step impedance ratio (R) between the two transmission-line sections forming a λ_g/4 resonator. The experimental result shows that, with an R of 0.2, the out-of-band attenuation is greater than 40 dB and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.

  5. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.

  6. Quantum demolition filtering and optimal control of unstable systems.

    PubMed

    Belavkin, V P

    2012-11-28

    A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216

  7. Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.

    PubMed

    McMinn, Brian R

    2013-11-01

    Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to that in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. PMID:23796954

  8. A geometric method for optimal design of color filter arrays.

    PubMed

    Hao, Pengwei; Li, Yan; Lin, Zhouchen; Dubois, Eric

    2011-03-01

    A color filter array (CFA) used in a digital camera is a mosaic of spectrally selective filters, which allows only one color component to be sensed at each pixel. The missing two components of each pixel have to be estimated by methods known as demosaicking. The demosaicking algorithm and the CFA design are crucial for the quality of the output images. In this paper, we present a CFA design methodology in the frequency domain. The frequency structure, which is shown to be just the symbolic DFT of the CFA pattern (one period of the CFA), is introduced to represent images sampled with any rectangular CFAs in the frequency domain. Based on the frequency structure, the CFA design involves the solution of a constrained optimization problem that aims at minimizing the demosaicking error. To decrease the number of parameters and speed up the parameter searching, the optimization problem is reformulated as the selection of geometric points on the boundary of a convex polygon or the surface of a convex polyhedron. Using our methodology, several new CFA patterns are found, which outperform the currently commercialized and published ones. Experiments demonstrate the effectiveness of our CFA design methodology and the superiority of our new CFA patterns. PMID:20858581
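
    The frequency structure referred to here can be illustrated by taking the DFT of one period of a CFA pattern. The sketch below does this numerically for the familiar 2x2 Bayer pattern, which is our own example, not one of the paper's new patterns.

    ```python
    import numpy as np

    # One period of the Bayer CFA as per-channel indicator matrices
    R = np.array([[1, 0], [0, 0]])
    G = np.array([[0, 1], [1, 0]])
    B = np.array([[0, 0], [0, 1]])

    # The "frequency structure" is the (here numeric) DFT of the CFA pattern:
    # it shows which R/G/B combination is modulated onto each spatial carrier
    for name, chan in (("R", R), ("G", G), ("B", B)):
        print(name, np.fft.fft2(chan).real)
    # Reading the coefficients per frequency bin (divided by 4):
    #   (0, 0): (1, 2, 1)/4      -> baseband luma
    #   (0, pi) and (pi, 0): (1, 0, -1)/4 -> chroma carriers
    #   (pi, pi): (1, -2, 1)/4   -> second chroma carrier
    # CFA design then chooses carrier placement to minimize the overlap
    # (aliasing) between luma and chroma, i.e. the demosaicking error.
    ```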

  9. On one-step worst-case optimal trisection in univariate bi-objective Lipschitz optimization

    NASA Astrophysics Data System (ADS)

    Žilinskas, Antanas; Gimbutienė, Gražina

    2016-06-01

    The bi-objective Lipschitz optimization with univariate objectives is considered. The concept of the tolerance of the lower Lipschitz bound over an interval is generalized to arbitrary subintervals of the search region. The one-step worst-case optimality of trisecting an interval with respect to the resulting tolerance is established. The theoretical investigation supports the previous usage of trisection in other algorithms. The trisection-based algorithm is introduced. Some numerical examples illustrating the performance of the algorithm are provided.
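
    As a sketch of trisection driven by Lipschitz lower bounds, the following single-objective simplification (the paper treats the bi-objective case) trisects, in best-first order, the interval whose lower bound is most promising. The test function and Lipschitz constant are our own.

    ```python
    import heapq
    import math

    def lipschitz_trisection(f, a, b, L, n_evals=60):
        """Minimize f on [a, b] assuming |f(x) - f(y)| <= L |x - y|."""
        fa, fb = f(a), f(b)
        best = min((fa, a), (fb, b))
        # Lower bound of f over [lo, hi] given the endpoint values
        bound = lambda lo, flo, hi, fhi: 0.5 * (flo + fhi - L * (hi - lo))
        heap = [(bound(a, fa, b, fb), a, fa, b, fb)]
        evals = 2
        while heap and evals < n_evals:
            _, lo, flo, hi, fhi = heapq.heappop(heap)
            # Trisect the most promising interval into three subintervals
            m1, m2 = lo + (hi - lo) / 3.0, lo + 2.0 * (hi - lo) / 3.0
            fm1, fm2 = f(m1), f(m2)
            evals += 2
            best = min(best, (fm1, m1), (fm2, m2))
            for seg in ((lo, flo, m1, fm1), (m1, fm1, m2, fm2), (m2, fm2, hi, fhi)):
                heapq.heappush(heap, (bound(*seg),) + seg)
        return best  # (best value found, its location)

    print(lipschitz_trisection(lambda x: math.sin(x) + 0.1 * x, 0.0, 10.0, L=1.1))
    ```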

  10. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934

  11. Effects of Rate-Limiting Steps in Transcription Initiation on Genetic Filter Motifs

    PubMed Central

    Häkkinen, Antti; Tran, Huy; Yli-Harja, Olli; Ribeiro, Andre S.

    2013-01-01

    The behavior of genetic motifs is determined not only by the gene-gene interactions, but also by the expression patterns of the constituent genes. Live single-molecule measurements have provided evidence that transcription initiation is a sequential process, whose kinetics plays a key role in the dynamics of mRNA and protein numbers. The extent to which it affects the behavior of cellular motifs is unknown. Here, we examine how the kinetics of transcription initiation affects the behavior of motifs performing filtering in amplitude and frequency domain. We find that the performance of each filter is degraded as transcript levels are lowered. This effect can be reduced by having a transcription process with more steps. In addition, we show that the kinetics of the stepwise transcription initiation process affects features such as filter cutoffs. These results constitute an assessment of the range of behaviors of genetic motifs as a function of the kinetics of transcription initiation, and thus will aid in tuning of synthetic motifs to attain specific characteristics without affecting their protein products. PMID:23940576
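
    The effect of a sequential initiation process is easy to reproduce with a small Gillespie simulation: more sub-steps at the same mean rate make mRNA production more regular, which is what changes the downstream filtering behavior. The rates and step counts below are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def simulate(n_init_steps, k_init, k_deg, t_end=2000.0):
        """Gillespie simulation of transcription with multi-step initiation:
        the promoter passes through n_init_steps sequential steps (rate k_init
        each) before releasing one mRNA; mRNAs degrade at rate k_deg."""
        t, stage, mrna, counts = 0.0, 0, 0, []
        while t < t_end:
            rates = [k_init, k_deg * mrna]        # advance initiation / degrade
            total = sum(rates)
            t += rng.exponential(1.0 / total)
            if rng.random() < rates[0] / total:
                stage += 1
                if stage == n_init_steps:         # initiation completed
                    stage, mrna = 0, mrna + 1
            else:
                mrna -= 1
            counts.append(mrna)
        return np.array(counts)

    # Same mean production rate, different numbers of initiation steps:
    # more steps -> more regular production -> lower mRNA Fano factor
    for n in (1, 5):
        m = simulate(n, k_init=n * 0.1, k_deg=0.01)
        print(n, "step(s): Fano factor ~", round(m.var() / m.mean(), 2))
    ```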

  12. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690

  13. Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter

    NASA Astrophysics Data System (ADS)

    Ito, A.

    2014-12-01

    Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because of poor model constraint by observational data. The development of advanced methodologies for data-model fusion, or data assimilation, is therefore an important task for reducing the uncertainties and improving model predictability. In this study, I apply the particle filter (or sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is one of the data-assimilation methods, in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). It is a computationally intensive method that is applicable to nonlinear systems, which is an advantage in comparison with other techniques such as the Ensemble Kalman filter and variational methods. At several sites, I used flux measurement data of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this method would also be effective for assimilating satellite observational data. In the non-sequential case, on the other hand, the annual or long-term mean budget was adjusted to observations; this method would also be effective for assimilating carbon stock data. Although technical issues remain (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.

  14. Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates

    NASA Astrophysics Data System (ADS)

    Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.

    2015-12-01

    Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized at an optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N; these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.

  15. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
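
    The TOPSIS stage of such a two-stage approach is straightforward to sketch: normalize the Pareto front, weight it, and rank candidates by closeness to the ideal point. The front values and weights below are hypothetical, not design results from the paper.

    ```python
    import numpy as np

    def topsis(front, weights, benefit):
        """Pick the best compromise solution from a Pareto front.
        front: (n_solutions, n_objectives); benefit[j] True if larger is better."""
        Z = front / np.linalg.norm(front, axis=0)          # vector normalization
        Z = Z * weights
        ideal = np.where(benefit, Z.max(axis=0), Z.min(axis=0))
        worst = np.where(benefit, Z.min(axis=0), Z.max(axis=0))
        d_best = np.linalg.norm(Z - ideal, axis=1)
        d_worst = np.linalg.norm(Z - worst, axis=1)
        return int(np.argmax(d_worst / (d_best + d_worst)))

    # Hypothetical front: (max index modulation, mean dispersion error),
    # both objectives to be minimized, equally weighted
    front = np.array([[0.9, 1.0], [1.1, 0.6], [1.4, 0.3], [2.0, 0.2]])
    best = topsis(front, weights=np.array([0.5, 0.5]),
                  benefit=np.array([False, False]))
    print("best compromise solution:", front[best])
    ```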

  16. Optimization of the performances of correlation filters by pre-processing the input plane

    NASA Astrophysics Data System (ADS)

    Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.

    2016-01-01

    We report findings on the optimization of the performance of correlation filters. First, we propose and validate an optimization of ROC curves adapted to the correlation technique. Analysis then suggests that pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face-recognition applications. Our results demonstrate that this method is remarkably efficient at increasing the performance of a VanderLugt correlator.

  17. A Triple-band Bandpass Filter using Tri-section Step-impedance and Capacitively Loaded Step-impedance Resonators for GSM, WiMAX, and WLAN systems

    NASA Astrophysics Data System (ADS)

    Chomtong, P.; Akkaraekthalin, P.

    2014-05-01

    This paper presents a triple-band bandpass filter for applications in GSM, WiMAX, and WLAN systems. The proposed filter comprises tri-section step-impedance and capacitively loaded step-impedance resonators, which are combined using the cross-coupling technique. Additionally, tapered lines are used to connect both ports of the filter in order to enhance matching at the tri-band resonant frequencies. The filter can operate at the resonant frequencies of 1.8 GHz, 3.7 GHz, and 5.5 GHz. At these resonant frequencies, the measured values of S11 are -17.2 dB, -33.6 dB, and -17.9 dB, while the measured values of S21 are -2.23 dB, -2.98 dB, and -3.31 dB, respectively. Moreover, the presented filter has a compact size compared with conventional open-loop cross-coupling triple-band bandpass filters.

  18. Design of SLM-constrained MACE filters using simulated annealing optimization

    NASA Astrophysics Data System (ADS)

    Khan, Ajmal; Rajan, P. Karivaratha

    1993-10-01

    Among the available filters for pattern recognition, the MACE filter produces the sharpest peak with very small sidelobes. However, when these filters are implemented using practical spatial light modulators (SLMs), because of the constrained nature of the amplitude and phase modulation characteristics of the SLM, the implementation is no longer optimal. The resulting filter response does not produce high accuracy in the recognition of the test images. In this paper, this deterioration in response is overcome by designing constrained MACE filters such that the filter is allowed to have only those values of phase-amplitude combination that can be implemented on a specified SLM. The design is carried out using simulated annealing optimization technique. The algorithm developed and the results obtained on computer simulations of the designed filters are presented.
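
    A minimal sketch of the idea follows, assuming a phase-only SLM with 16 discrete phase levels and using peak-to-correlation-energy (PCE) as a stand-in figure of merit; the paper's MACE criterion, constraint handling and annealing schedule are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Reference object: a small rectangle in a 32x32 scene
    ref = np.zeros((32, 32)); ref[12:20, 10:22] = 1.0
    F = np.fft.fft2(ref)

    # SLM constraint: unit amplitude, 16 allowed phase levels per pixel
    levels = np.exp(2j * np.pi * np.arange(16) / 16)
    state = rng.integers(0, 16, ref.shape)            # phase level per pixel

    def pce(state):
        """Peak-to-correlation-energy of the constrained filter."""
        corr = np.fft.ifft2(F * levels[state])
        mag2 = np.abs(corr) ** 2
        return mag2.max() / mag2.mean()

    # Simulated annealing over the discrete phase assignments (maximizing PCE)
    energy, temp = pce(state), 50.0
    for step in range(20000):
        i, j = rng.integers(32), rng.integers(32)
        old = state[i, j]
        state[i, j] = rng.integers(0, 16)             # propose a new phase level
        new = pce(state)
        if new > energy or rng.random() < np.exp((new - energy) / temp):
            energy = new                              # accept the move
        else:
            state[i, j] = old                         # revert
        temp *= 0.9997                                # geometric cooling
    print("final PCE:", float(energy))
    ```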

  19. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    SciTech Connect

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  1. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul

    2015-03-01

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  2. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm has no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is preferable where accuracy matters more than convergence speed.
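
    TLBO itself is compact enough to sketch: a teacher phase that pulls the population toward the best solution and away from the mean, and a learner phase of pairwise interactions, with no algorithm-specific tuning parameters. The second-order plant and all constants below are our own illustration (in Python rather than the paper's MATLAB).

    ```python
    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(7)

    # Unknown IIR "plant" to identify: b = [0.3, -0.4], a = [1, -0.5, 0.2]
    true_b, true_a = [0.3, -0.4], [1.0, -0.5, 0.2]
    x = rng.normal(size=2000)
    y = lfilter(true_b, true_a, x)

    def mse(params):
        b0, b1, a1, a2 = params
        err = np.mean((y - lfilter([b0, b1], [1.0, a1, a2], x)) ** 2)
        return err if np.isfinite(err) else np.inf   # unstable candidates lose

    pop_size, dim, iters = 30, 4, 100
    pop = rng.uniform(-1, 1, (pop_size, dim))
    cost = np.array([mse(p) for p in pop])
    for _ in range(iters):
        # Teacher phase: move learners toward the teacher, away from the mean
        teacher = pop[np.argmin(cost)]
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        new = pop + rng.random((pop_size, dim)) * (teacher - tf * pop.mean(axis=0))
        new_cost = np.array([mse(p) for p in new])
        improved = new_cost < cost
        pop[improved], cost[improved] = new[improved], new_cost[improved]
        # Learner phase: each learner moves toward a better random peer
        # (or away from a worse one)
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = pop[j] - pop[i] if cost[j] < cost[i] else pop[i] - pop[j]
            cand = pop[i] + rng.random(dim) * step
            c = mse(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    print("identified [b0, b1, a1, a2]:", np.round(pop[np.argmin(cost)], 3))
    ```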

  3. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  4. Using a scale selective tendency filter and forward-backward time stepping to calculate consistent semi-Lagrangian trajectories

    NASA Astrophysics Data System (ADS)

    Alerskans, Emy; Kaas, Eigil

    2016-04-01

    In semi-Lagrangian models used for climate and NWP, trajectories are normally determined kinematically. Here we propose a new method for calculating trajectories in a more dynamically consistent way by pre-integrating the governing equations in a pseudo-Lagrangian manner using a short time step. Only non-advective adiabatic terms are included in this calculation, i.e., the Coriolis and pressure-gradient forces plus gravity in the momentum equations, and the divergence term in the continuity equation. This integration is performed with a forward-backward time step. Optionally, the tendencies are filtered with a local space filter, which reduces the phase speed of short-wavelength gravity and sound waves. The filter relaxes the time-step limitation related to high-frequency oscillations without compromising locality of the solution, and can be considered an alternative to less local or global semi-implicit solvers. Once trajectories are estimated over a complete long advective time step, the full set of governing equations is stepped forward using these trajectories in combination with a flux-form semi-Lagrangian formulation of the equations. The methodology is designed to improve consistency and scalability on massively parallel systems, although here it has only been verified that the technique produces realistic results in a shallow water model and a 2D model based on the full Euler equations.

  5. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
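
    The trapezoidal shaper mentioned above is commonly built from simple moving-average and pole-zero stages. The following is a minimal sketch, assuming an exponentially decaying preamplifier pulse; the decay constant, rise length, and flat-top length are illustrative, and the nonlinear ballistic-deficit correction of GEANA is not reproduced here.

```python
# Minimal trapezoidal pulse shaper sketch for HPGe-style traces (assumed setup).
import numpy as np

def trapezoidal_shaper(x, L=100, G=40, tau=5000.0):
    # Pole-zero correction: deconvolve the exponential preamp decay so the
    # pulse becomes a step, whose height the trapezoid then measures.
    d = np.diff(x, prepend=x[0])
    step = np.cumsum(d + x / tau)
    # Convolving two boxcars of lengths L and L+G yields a trapezoid with
    # rise time L and flat top G (where the amplitude is read out).
    box1 = np.ones(L) / L
    box2 = np.ones(L + G) / (L + G)
    return np.convolve(np.convolve(step, box1, mode="same"), box2, mode="same")

# Synthetic pulse: step at sample 2000 decaying with time constant tau.
n = np.arange(10000)
pulse = np.where(n >= 2000, np.exp(-(n - 2000) / 5000.0), 0.0)
shaped = trapezoidal_shaper(pulse)
print("pulse height estimate:", shaped.max())   # ~1.0 for this unit step
```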

  6. Optimization of FIR Digital Filters Using a Real Parameter Parallel Genetic Algorithm and Implementations.

    NASA Astrophysics Data System (ADS)

    Xu, Dexiang

    This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.

  7. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

    This paper uses stochastic dominance principles to construct upper and lower sample-path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X²). A Monte Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
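
    The O(XR) saving comes from keeping the transition matrix in factored form. A minimal sketch, assuming a generic random rank-R factorization rather than the paper's copositive-constrained bounds:

```python
# HMM filter step with a factored transition matrix P = diag(d) U V^T,
# so the prediction step costs O(XR) instead of O(X^2). Random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
X, R = 500, 5
U, V = rng.random((X, R)), rng.random((X, R))
d = 1.0 / (U @ V.sum(axis=0))            # row sums of U V^T, for normalization
B = rng.random((X, 4))                    # observation likelihoods, 4 symbols
B /= B.sum(axis=1, keepdims=True)

def filter_step_lowrank(pi, y):
    pred = V @ (U.T @ (d * pi))           # P^T pi evaluated in O(XR) flops
    post = B[:, y] * pred                 # Bayes update with likelihood
    return post / post.sum()

pi = np.full(X, 1.0 / X)
for y in [0, 2, 1, 3]:
    pi = filter_step_lowrank(pi, y)
print(pi[:5])
```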

  8. Algorithmic and architectural optimizations for computationally efficient particle filtering.

    PubMed

    Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama

    2008-05-01

    In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
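
    For reference, a minimal bootstrap particle filter on a standard scalar nonlinear growth model is sketched below. The paper's contribution is to replace the resampling stage with an Independent Metropolis-Hastings sampler to enable pipelining; this sketch keeps plain multinomial resampling for brevity.

```python
# Bootstrap particle filter on a scalar nonlinear/non-Gaussian-friendly model.
import numpy as np

rng = np.random.default_rng(2)
N, T = 1000, 50
x_true, ys = 0.1, []
for t in range(T):                                   # simulate the system
    x_true = 0.5 * x_true + 25 * x_true / (1 + x_true**2) + rng.normal(0, 1)
    ys.append(0.05 * x_true**2 + rng.normal(0, 1))

particles = rng.normal(0, 2, N)
for y in ys:
    # Propagate through the nonlinear dynamics (prediction).
    particles = (0.5 * particles + 25 * particles / (1 + particles**2)
                 + rng.normal(0, 1, N))
    # Weight by the measurement likelihood (update).
    w = np.exp(-0.5 * (y - 0.05 * particles**2) ** 2) + 1e-300
    w /= w.sum()
    # Multinomial resampling; the IMH variant would go here instead.
    particles = particles[rng.choice(N, N, p=w)]
print("final state estimate:", particles.mean())
```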

  9. Implementation and optimization of an improved morphological filtering algorithm for speckle removal based on DSPs

    NASA Astrophysics Data System (ADS)

    Liu, Qitao; Li, Yingchun; Sun, Huayan; Zhao, Yanzhong

    2008-03-01

    Laser active imaging systems, which offer high resolution, resistance to jamming, and three-dimensional (3-D) imaging capability, have been used widely. Their imagery, however, is usually affected by speckle noise, which makes the grayscale of pixels fluctuate violently, hides subtle details, and greatly degrades the effective imaging resolution. Removing speckle noise is one of the most difficult problems encountered in such systems because of the poor statistical properties of speckle. Based on an analysis of the statistical characteristics of speckle and of morphological filtering algorithms, an improved multistage morphological filtering algorithm is studied in this paper and implemented on a TMS320C6416 DSP. The algorithm applies morphological open-close and close-open transformations using two different linear structuring elements and then takes a weighted average of the transformed results, with the weighting coefficients determined by the statistical characteristics of the speckle. The algorithm was implemented on the TMS320C6416 DSP after simulation on a computer, and the software design procedure is fully presented. Methods for realizing and optimizing the algorithm are illustrated through a study of the structural characteristics of the TMS320C6416 DSP and the features of the algorithm. To fully benefit from such devices and increase the performance of the whole system, a series of steps must be taken to optimize the DSP programs. This paper introduces several effective methods for TMS320C6x C-language optimization, including refining the code structure, eliminating memory dependences, and optimizing assembly code via linear assembly, and then reports the results of the application in a real-time implementation. Processing results for images blurred by speckle noise show that the algorithm not only effectively suppresses speckle noise but also preserves the geometrical features of images. The results of the optimized code running on the DSP platform
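
    A rough sketch of the open-close/close-open averaging idea, using SciPy's grey-scale morphology. The structuring-element sizes and the weighting are assumptions, since the paper ties the weights to the speckle statistics:

```python
# Multistage morphological speckle filter sketch (assumed elements/weights).
import numpy as np
from scipy import ndimage

def multistage_morph_filter(img, w=0.5):
    # Two linear structuring elements: horizontal and vertical segments.
    se_h = np.ones((1, 5))
    se_v = np.ones((5, 1))
    out = np.zeros_like(img, dtype=float)
    for se in (se_h, se_v):
        oc = ndimage.grey_closing(ndimage.grey_opening(img, footprint=se),
                                  footprint=se)   # open-close transformation
        co = ndimage.grey_opening(ndimage.grey_closing(img, footprint=se),
                                  footprint=se)   # close-open transformation
        out += 0.5 * (w * oc + (1 - w) * co)      # weighted average
    return out

rng = np.random.default_rng(3)
speckled = np.clip(rng.gamma(2.0, 0.5, (128, 128)) * 100, 0, 255)
clean = multistage_morph_filter(speckled)
```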

  10. Comparison of older adults' steps per day using NL-1000 pedometer and two GT3X+ accelerometer filters.

    PubMed

    Barreira, Tiago V; Brouillette, Robert M; Foil, Heather C; Keller, Jeffrey N; Tudor-Locke, Catrine

    2013-10-01

    The purpose of this study was to compare the steps/d derived from the ActiGraph GT3X+ using the manufacturer's default filter (DF) and low-frequency-extension filter (LFX) with those from the NL-1000 pedometer in an older adult sample. Fifteen older adults (61-82 yr) wore a GT3X+ (24 hr/day) and an NL-1000 (waking hours) for 7 d. Day was the unit of analysis (n = 86 valid days) comparing (a) GT3X+ DF and NL-1000 steps/d and (b) GT3X+ LFX and NL-1000 steps/d. DF was highly correlated with NL-1000 (r = .80), but there was a significant mean difference (-769 steps/d). LFX and NL-1000 were highly correlated (r = .90), but there also was a significant mean difference (8,140 steps/d). Percent difference and absolute percent difference between DF and NL-1000 were -7.4% and 16.0%, respectively, and for LFX and NL-1000 both were 121.9%. Regardless of filter used, GT3X+ did not provide comparable pedometer estimates of steps/d in this older adult sample. PMID:23170752

  11. Bio-desulfurization of biogas using acidic biotrickling filter with dissolved oxygen in step feed recirculation.

    PubMed

    Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu

    2015-03-01

    Triple-stage and single-stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention times (EBRT 100-180 s) and liquid recirculation velocities (q 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q, but the highest q (7.1 m/h) forced a large amount of liquid through the media, reducing bed porosity in the S-BTF and hence H2S removal. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 gH2S/m(3)/h. In the subsequent continuous operation test, T-BTF maintained higher H2S elimination capacity and removal efficiency (175.6±41.6 gH2S/m(3)/h and 89.0±6.8%) than S-BTF (159.9±42.8 gH2S/m(3)/h and 80.1±10.2%). Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031

  12. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline to set the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  13. Optimized filtering of regional and teleseismic seismograms: results of maximizing SNR measurements from the wavelet transform and filter banks

    SciTech Connect

    Leach, R.R.; Schultz, C.; Dowla, F.

    1997-07-15

    Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas that have not been well studied or have few available recordings. Development and testing of detection and discrimination algorithms requires a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratios (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands vary strongly as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content differs from that of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time-series signals, or scales, each representing a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low-SNR events. Our method provides a straightforward, optimized filter that can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required. The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking, bearing estimation
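
    The scale-selection idea can be sketched with ordinary band-pass filters standing in for the constant-Q wavelet scales. Everything below (sampling rate, window positions, SNR threshold) is an assumed setup for illustration, and at least one band is assumed to pass the threshold:

```python
# Band-wise SNR screening to build an optimized band-pass filter (assumed setup).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 40.0                                    # sampling rate (Hz), assumed
rng = np.random.default_rng(4)
trace = rng.normal(0, 1, 4000)               # pre-event noise + arrival below
trace[2000:2400] += 5 * np.sin(2 * np.pi * 2.0 * np.arange(400) / fs)

bands = [(0.5, 1), (1, 2), (2, 4), (4, 8), (8, 16)]   # constant-Q octaves
keep = []
for lo, hi in bands:
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    y = sosfiltfilt(sos, trace)
    # rms SNR: signal window vs pre-event noise window.
    snr = np.sqrt(np.mean(y[2000:2400] ** 2) / np.mean(y[:1600] ** 2))
    if snr > 2.0:                            # threshold, assumed
        keep.append((lo, hi))

f_lo, f_hi = keep[0][0], keep[-1][1]         # limits of the optimized filter
sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trace)
print("optimized pass band:", f_lo, "-", f_hi, "Hz")
```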

  14. An optimal numerical filter for wide-field-of-view measurements of earth-emitted radiation

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; House, F. B.

    1981-01-01

    A technique is described in which all data points along an arc of the orbit may be used in an optimal numerical filter for wide-field-of-view measurements of earth-emitted radiation. The statistical filter design is derived by requiring the filter to give a minimum-variance estimate of the radiant exitance at discrete points along the ground track of the satellite. An equation for the optimal numerical filter is obtained by minimizing the estimate error variance with respect to the filter weights, resulting in a discrete form of the Wiener-Hopf equation. Finally, variances of the errors in the radiant exitance can be computed along the ground track and in the cross-track direction.
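
    In symbols, with measurements m_i along the arc and a linear estimate of the radiant exitance W at a ground-track point, setting the derivative of the error variance to zero gives the discrete Wiener-Hopf (normal) equations for the weights; this is the standard minimum-variance derivation the abstract alludes to:

```latex
\hat{W} = \sum_{i=1}^{N} a_i m_i , \qquad
\frac{\partial}{\partial a_j}\, E\!\left[(\hat{W} - W)^2\right] = 0
\;\Longrightarrow\;
\sum_{i=1}^{N} a_i \, E[m_i m_j] = E[W m_j], \quad j = 1, \dots, N .
```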

  15. Particle filter with one-step randomly delayed measurements and unknown latency probability

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Huang, Yulong; Li, Ning; Zhao, Lin

    2016-01-01

    In this paper, a new particle filter is proposed to solve the nonlinear and non-Gaussian filtering problem when measurements are randomly delayed by one sampling time and the latency probability of the delay is unknown. In the proposed method, particles and their weights are updated in Bayesian filtering framework by considering the randomly delayed measurement model, and the latency probability is identified by maximum likelihood criterion. The superior performance of the proposed particle filter as compared with existing methods and the effectiveness of the proposed identification method of latency probability are both illustrated in two numerical examples concerning univariate non-stationary growth model and bearing only tracking.

  16. Optimization of continuous tube motion and step-and-shoot motion in digital breast tomosynthesis systems with patient motion

    NASA Astrophysics Data System (ADS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2012-03-01

    In digital breast tomosynthesis (DBT), a reconstruction of the breast is generated from projections acquired over a limited range of x-ray tube angles. There are two principal schemes for acquiring projections, continuous tube motion and step-and-shoot motion. Although continuous tube motion has the benefit of reducing patient motion by lowering scan time, it has the drawback of introducing blurring artifacts due to focal spot motion. The purpose of this work is to determine the optimal scan time which minimizes this trade-off. To this end, the filtered backprojection reconstruction of a sinusoidal input is calculated. At various frequencies, the optimal scan time is determined by the value which maximizes the modulation of the reconstruction. Although prior authors have studied the dependency of the modulation on focal spot motion, this work is unique in also modeling patient motion. It is shown that because continuous tube motion and patient motion have competing influences on whether scan time should be long or short, the modulation is maximized by an intermediate scan time. This optimal scan time decreases with object velocity and increases with exposure time. To optimize step-and-shoot motion, we calculate the scan time for which the modulation attains the maximum value achievable in a comparable system with continuous tube motion. This scan time provides a threshold below which the benefits of step-and-shoot motion are justified. In conclusion, this work optimizes scan time in DBT systems with patient motion and either continuous tube motion or step-and-shoot motion by maximizing the modulation of the reconstruction.

  17. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    SciTech Connect

    Sun, W Y

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  18. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

    An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

  19. Design of an optimal-weighted MACE filter realizable with arbitrary SLM constraints

    NASA Astrophysics Data System (ADS)

    Ge, Jin; Rajan, P. Karivaratha

    1996-03-01

    A realizable optimal-weighted minimum average correlation energy (MACE) filter with arbitrary spatial light modulator (SLM) constraints is presented. The MACE filter can be considered as the cascade of two separate stages. The first stage is the prewhitener, which essentially converts colored noise to white noise. The second stage is the conventional synthetic discriminant function (SDF), which is optimal for white noise but uses training vectors subjected to the prewhitening transformation; the energy spectrum matrix is therefore central to the filter design. The new weight function we introduce adjusts the correlation energy to improve the performance of the MACE filter on current SLMs. The weight function emphasizes the signal energy at some frequencies and de-emphasizes it at others so as to improve the correlation-plane structure; its choice, aimed at enhancing noise tolerance and reducing sidelobes, draws on a priori pattern recognition knowledge. An algorithm that combines an iterative optimization technique with Juday's minimum Euclidean distance (MED) method is developed for the design of the realizable optimal-weighted MACE filter. The performance of the designed filter is evaluated with numerical experiments.
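
    For context, the unweighted MACE filter has the well-known frequency-domain closed form below; how the weight function enters the paper's formulation is not spelled out in the abstract, so the weighted line is only a plausible reading:

```latex
% X: matrix whose columns are the training-image spectra,
% D: diagonal average power spectrum of the training set,
% u: prescribed correlation-peak values, \dagger: conjugate transpose.
h = D^{-1} X \left( X^{\dagger} D^{-1} X \right)^{-1} u
% A weighted variant replaces D by W D for a diagonal weight function W
% (an assumed form, consistent with the emphasis/de-emphasis described above).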

  20. On the application of optimal wavelet filter banks for ECG signal classification

    NASA Astrophysics Data System (ADS)

    Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.

    2014-03-01

    This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect-reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as training inputs to a neural network classifier.
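
    As a toy illustration of wavelet-domain parametrization, the sketch below extracts normalized subband energies with a stock Daubechies filter bank via PyWavelets; the paper instead optimizes the filter coefficients themselves, which this sketch does not do:

```python
# Wavelet-domain feature vector for an ECG segment (stock db4 basis, assumed).
import numpy as np
import pywt

rng = np.random.default_rng(5)
beat = rng.standard_normal(512)            # placeholder for one ECG segment

coeffs = pywt.wavedec(beat, "db4", level=5)            # 5-level decomposition
features = np.array([np.sum(c ** 2) for c in coeffs])  # subband energies
features /= features.sum()                 # normalized energy distribution
print("feature vector for the classifier:", features)
```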

  1. Biologic efficacy optimization-a step towards personalized medicine.

    PubMed

    Kiely, Patrick D W

    2016-05-01

    The following is a review of the factors that influence the outcome of biologic agents in the treatment of adult RA and that, when synthesized into the clinical decision-making process, enhance optimization. Adiposity can exacerbate inflammatory diseases; patients with a high BMI have worse outcomes from RA treatments, including TNF inhibitors (TNFis), whereas the efficacy of abatacept and tocilizumab is unaffected. Smoking adversely affects TNFi outcomes but has less or no effect on the efficacy of rituximab and tocilizumab, and its effect on abatacept is unknown. Patients who are positive for ACPA and RF have better efficacy with rituximab and abatacept than those who are seronegative, whereas the influence of serotype is less significant for tocilizumab and more complex for TNFis. All biologics seem to do better when co-prescribed with MTX, whereas in monotherapy, tocilizumab is superior to adalimumab and prescription of a non-MTX DMARD has advantages over no DMARD for rituximab and adalimumab. Monitoring of TNFi drug levels is an exciting new field, correlating closely with efficacy in RA and PsA, and is influenced by BMI, adherence, co-prescribed DMARDs and anti-drug antibodies. The measurement of trough levels provides a potential tool for patients who are not doing well, to determine early whether to switch within the TNFi class (if levels are low) or to a biologic with an alternative mode of action (if levels are normal or high). Conversely, the finding of supratherapeutic levels has the potential to enable individual patient selection for dose reduction without the risk of flare. PMID:26424837

  2. Empirical Determination of Optimal Parameters for Sodium Double-Edge Magneto-Optic Filters

    NASA Astrophysics Data System (ADS)

    Barry, Ian F.; Huang, Wentao; Smith, John A.; Chu, Xinzhao

    2016-06-01

    A method is proposed for determining the optimal temperature and magnetic field strength used to condition a sodium vapor cell for use in a sodium Double-Edge Magneto-Optic Filter (Na-DEMOF). The desirable characteristics of these filters are first defined and then analyzed over a range of temperatures and magnetic field strengths, using an IDL Faraday filter simulation adapted for the Na-DEMOF. This simulation is then compared to real behavior of a Na-DEMOF constructed for use with the Chu Research Group's STAR Na Doppler resonance-fluorescence lidar for lower atmospheric observations.

  3. Optimization of primer specific filter metrics for the assessment of mitochondrial DNA sequence data

    PubMed Central

    CURTIS, PAMELA C.; THOMAS, JENNIFER L.; PHILLIPS, NICOLE R.; ROBY, RHONDA K.

    2011-01-01

    Filter metrics are used as a quick assessment of sequence trace files in order to sort data into different categories, i.e. High Quality, Review, and Low Quality, without human intervention. The filter metrics consist of two numerical parameters for sequence quality assessment: trace score (TS) and contiguous read length (CRL). Primer specific settings for the TS and CRL were established using a calibration dataset of 2817 traces and validated using a concordance dataset of 5617 traces. Prior to optimization, 57% of the traces required manual review before import into a sequence analysis program, whereas after optimization only 28% of the traces required manual review. After optimization of primer specific filter metrics for mitochondrial DNA sequence data, an overall reduction of review of trace files translates into increased throughput of data analysis and decreased time required for manual review. PMID:21171863

  4. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet ideal frequency response characteristics. Since FD-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently by conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in convergence rate but also in optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486

  5. Multiple Model Adaptive Two-Step Filter and Motion Tracking Sliding-Mode Guidance for Missiles with Time Lag in Acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Di; Zhang, Yong-An; Duan, Guang-Ren

    The two-step filter has been combined with a modified Sage-Husa time-varying measurement noise statistical estimator, which is able to estimate the covariance of measurement noise on line, to generate an adaptive two-step filter. In many practical applications such as bearings-only guidance, some model parameters and the process noise covariance are also unknown a priori. Based on the adaptive two-step filter, we utilize multiple models in the first-step filtering as well as in the time update of the second-step filtering to handle the uncertainties of model parameters and process noise covariance. In each time step of the multiple model filtering, probabilistic weights penalizing the estimates of the first-step state from different models, and their associated covariance matrices, are acquired according to Bayes' rule. The weighted sum of the estimates of the first-step state and that of the associated covariance matrices are extracted as the ultimate estimate and covariance of the first-step state, and are used as measurement information for the measurement update of the second-step state. Thus there is still only one iteration process and no appreciable increase in computational burden. A motion tracking sliding-mode guidance law is presented for missiles with non-negligible delays in actual acceleration. This guidance law guarantees guidance accuracy and is able to enhance observability in bearings-only tracking. In bearings-only cases, the multiple model adaptive two-step filter is applied to the motion tracking sliding-mode guidance law, supplying relative range, relative velocity, and target acceleration information. In simulation experiments satisfactory filtering and guidance results are obtained, even when the filter runs into unknown target maneuvers and unknown time-varying measurement noise covariance, and the guidance law has to deal with a large time lag in acceleration.

  6. Two-stage hybrid optimization of fiber Bragg gratings for design of linear phase filters.

    PubMed

    Zheng, Rui Tao; Ngo, Nam Quoc; Le Binh, Nguyen; Tjin, Swee Chuan

    2004-12-01

    We present a new hybrid optimization method for the synthesis of fiber Bragg gratings (FBGs) with complex characteristics. The hybrid optimization method is a two-tier search that employs a global optimization algorithm [i.e., the tabu search (TS) algorithm] and a local optimization method (i.e., the quasi-Newton method). First the TS global optimization algorithm is used to find a "promising" FBG structure that has a spectral response as close as possible to the targeted spectral response. Then the quasi-Newton local optimization method is applied to further optimize the FBG structure obtained from the TS algorithm to arrive at the targeted spectral response. A dynamic mechanism for weighting the different requirements of the spectral response is employed to enhance the optimization efficiency. To demonstrate the effectiveness of the method, the synthesis of three linear-phase optical filters based on FBGs with different grating lengths is described. PMID:15603077

  7. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.

  8. Optimal filter parameters for low SNR seismograms as a function of station and event location

    NASA Astrophysics Data System (ADS)

    Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.

    1999-06-01

    Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower-SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low-SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated on the pre-event noise and signal windows. The band-pass signals with high SNR indicate the cutoff limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low-SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low-SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.

  9. A three-step test of phosphate sorption efficiency of potential agricultural drainage filter materials.

    PubMed

    Lyngsie, G; Borggaard, O K; Hansen, H C B

    2014-03-15

    Phosphorus (P) eutrophication of lakes and streams receiving water from drained farmlands is a serious problem in areas with intensive agriculture. Installation of P-sorbing filters at drain outlets may be a solution. Efficient sorbents for such filters must possess high P-bonding affinity to retain ortho-phosphate (Pi) at low concentrations. In addition, high P sorption capacity, fast bonding, and low desorption are necessary. In this study five potential filter materials (Filtralite-P(®), limestone, calcinated diatomaceous earth, shell-sand and iron-oxide based CFH) in four particle-size intervals were investigated under field-relevant P concentrations (0-161 μM) and retention times of 0-24 min. Of the five materials examined, the results from P sorption and desorption studies clearly demonstrate that the iron-based CFH is superior as a filter material to the calcium-based materials when tested against criteria for sorption affinity, capacity and stability. The finest CFH and Filtralite-P(®) fractions (0.05-0.5 mm) performed best, retaining ≥90% of Pi from an initial concentration of 161 μM (corresponding to 14.5 mmol/kg sorbed) within 24 min. They were further capable of retaining ≥90% of Pi from an initially 16 μM solution within 1½ min. However, only the finest CFH fraction was also able to retain ≥90% of the sorbed Pi from the 16 μM solution through four desorption sequences with 6 mM KNO3. Among the materials investigated, the finest CFH fraction is therefore the only suitable filter material when very fast and strong bonding at high Pi concentrations is needed, e.g. in drains under P-rich soils during extreme weather conditions. PMID:24275107

  10. Improved design and optimization of subsurface flow constructed wetlands and sand filters

    NASA Astrophysics Data System (ADS)

    Brovelli, A.; Carranza-Díaz, O.; Rossi, L.; Barry, D. A.

    2010-05-01

    Subsurface flow constructed wetlands and sand filters are engineered systems capable of eliminating a wide range of pollutants from wastewater. These devices are easy to operate, flexible, and have low maintenance costs. For these reasons, they are particularly suitable for small settlements and isolated farms, and their use has increased substantially in the last 15 years. They are also increasingly used as a tertiary polishing step in traditional treatment plants. Recent work has observed, however, that research is still necessary to better understand the biogeochemical processes occurring in the porous substrate and their mutual interactions and feedbacks, and ultimately to identify the optimal conditions for degrading or removing from the wastewater both traditional and anthropogenic recalcitrant pollutants, such as hydrocarbons, pharmaceuticals, and personal care products. Optimal pollutant elimination is achieved if the contact time between microbial biomass and the contaminated water is sufficiently long. The contact time depends on the hydraulic residence time distribution (HRTD) and is controlled by the hydrodynamic properties of the system. Previous reports noted that poor hydrodynamic behaviour is frequent, with water flowing mainly through preferential paths, resulting in a broad HRTD. In such systems the flow rate must be decreased to allow a sufficient proportion of the wastewater to experience the minimum residence time, and the pollutant removal efficiency can therefore be significantly reduced, potentially leading to failure of the system. The aim of this work was to analyse the effect of the heterogeneous distribution of the hydraulic properties of the porous substrate on the HRTD and treatment efficiency, and to develop an improved design methodology to reduce the risk of system failure and to optimize existing systems showing poor hydrodynamics. Numerical modelling was used to evaluate the effect of substrate heterogeneity on the breakthrough curves of

  11. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the nonlinear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the nonlinear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the nonlinear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithm is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.

  12. Optimal matched filter design for ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2016-02-01

    Coarse grain materials are widely used in a variety of key industrial sectors such as energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructure, which is time-invariant and exhibits spectral characteristics similar to those of flaw signals. As a result, reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters of the filter response with the objective of maximising the output signal-to-noise ratio (SNR). Experiments with a 128-element 5 MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB when the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real time with little extra computational cost.
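
    A rough sketch of the search loop, assuming a Gaussian-modulated sinusoid as the parameterized template and a synthetic A-scan; the template model, PSO constants, and noise estimate are all assumptions for illustration:

```python
# PSO tuning of a matched-filter template to maximise output SNR (assumed setup).
import numpy as np

rng = np.random.default_rng(6)
fs = 100e6
t = np.arange(2048) / fs
ascan = rng.normal(0, 0.3, t.size)                       # background noise
ascan += np.exp(-((t - 8e-6) ** 2) / (0.4e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

def out_snr(p):
    f0, bw, t0 = p                                       # template parameters
    tt = np.arange(-3 * bw, 3 * bw, 1 / fs)
    tmpl = np.exp(-(tt / bw) ** 2) * np.sin(2 * np.pi * f0 * tt)
    y = np.convolve(ascan, tmpl[::-1], mode="same")      # matched filtering
    peak = np.abs(y[int(t0 * fs)])
    noise = np.std(y[: int(4e-6 * fs)])                  # pre-arrival region
    return peak / (noise + 1e-12)

# Plain global-best PSO over (center frequency, bandwidth, delay).
lo = np.array([2e6, 0.1e-6, 6e-6]); hi = np.array([8e6, 1e-6, 10e-6])
pos = rng.uniform(lo, hi, (20, 3)); vel = np.zeros_like(pos)
pbest = pos.copy(); pcost = np.array([out_snr(p) for p in pos])
for _ in range(50):
    gbest = pbest[pcost.argmax()]
    vel = (0.7 * vel + 1.5 * rng.random((20, 3)) * (pbest - pos)
                 + 1.5 * rng.random((20, 3)) * (gbest - pos))
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([out_snr(p) for p in pos])
    better = c > pcost
    pbest[better], pcost[better] = pos[better], c[better]
print("optimized template parameters:", pbest[pcost.argmax()])
```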

  13. Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin

    2012-06-01

    This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. Comparisons of the Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing, and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product are provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis data are used for a realistic comparison. Superior results from a genetic-algorithm-based optimization of pipeline registers and non-output fundamental coefficients are shown. Benchmark FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.

  14. Preparation and optimization of the laser thin film filter

    NASA Astrophysics Data System (ADS)

    Su, Jun-hong; Wang, Wei; Xu, Jun-qi; Cheng, Yao-jin; Wang, Tao

    2014-08-01

    A co-colored thin-film device for a laser-induced damage threshold test system is presented in this paper, enabling the laser-induced damage threshold tester to operate in the 532 nm and 1064 nm bands. Using TFC simulation software, a film stack with high reflectance, high transmittance, and resistance to laser damage is designed and optimized. The film is deposited by thermal evaporation; the optical properties of the coating and its laser-induced damage performance are tested, and the reflectance, transmittance, and damage threshold are measured. The results show that the measured parameters (reflectance R >= 98% at 532 nm, transmittance T >= 98% at 1064 nm, and laser-induced damage threshold LIDT >= 4.5 J/cm2) meet the design requirements, which lays the foundation for a multifunctional laser-induced damage threshold tester.

  15. Performance optimization of total momentum filtering double-resonance energy selective electron heat pump

    NASA Astrophysics Data System (ADS)

    Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui

    2016-04-01

    A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas the coefficient of performance (COP) is lower than the ESE heat pump when considering a single-resonance filter. The performance characteristics of the ESE heat pumps in the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of the ESE heat pumps considering a conventional filtering mechanism are vastly different from those of a device with total momentum filtering, which is induced by extra electron momentum in addition to the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.

  16. An optimal target-filter system for electron beam generated x-ray spectra

    SciTech Connect

    Hsu, Hsiao-Hua; Vasilik, D.G.; Chen, J.

    1994-04-01

    An electron beam generated x-ray spectrum consists of characteristic x rays of the target and continuous bremsstrahlung. The percentage of characteristic x rays over the entire energy spectrum depends on the beam energy and the filter thickness. To determine the optimal electron beam energy and filter thickness, one can either conduct many experimental measurements, or perform a series of Monte Carlo simulations. Monte Carlo simulations are shown to be an efficient tool for determining the optimal target-filter system for electron beam generated x-ray spectra. Three of the most commonly used low-energy x-ray metal targets (Cu, Zn and Mo) are chosen for this study to illustrate the power of Monte Carlo simulations.

  17. Plate/shell topological optimization subjected to linear buckling constraints by adopting composite exponential filtering function

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang

    2016-08-01

    In this paper, a model of topology optimization with linear buckling constraints is established based on an independent and continuous mapping method to minimize plate/shell structure weight. A composite exponential function (CEF) is selected as the filtering function for the element weight, the element stiffness matrix, and the element geometric stiffness matrix; it identifies the design variables and implements the mapping of the design variables from "discrete" to "continuous" and back to "discrete". The buckling constraints are approximated as explicit formulations based on the Taylor expansion and the filtering function. The optimization model is transformed to dual programming and solved by the dual sequence quadratic programming algorithm. Finally, three numerical examples with the power function and the CEF as filter functions are analyzed and discussed to demonstrate the feasibility and efficiency of the proposed method.

  18. Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings

    SciTech Connect

    Toroker, Zeev; Horowitz, Moshe

    2008-03-15

    We present an optimized split-step method for solving nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time duration without significantly affecting the result accuracy. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
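
    The paper's solver applies split-step integration to the grating's coupled-mode equations; the same idea is easiest to show on the scalar nonlinear Schrödinger equation, where the linear (dispersive) half-steps are done in the Fourier domain and the nonlinear step in the time domain. A minimal symmetric split-step sketch with an assumed soliton test input:

```python
# Symmetric split-step Fourier method on the scalar NLSE (illustrative analogue).
import numpy as np

nt, T = 1024, 40.0                          # temporal grid (assumed units)
t = np.linspace(-T / 2, T / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=T / nt)
u = 1.0 / np.cosh(t)                        # fundamental soliton input

beta2, gamma = -1.0, 1.0                    # anomalous dispersion, Kerr term
dz, nz = 1e-2, 500                          # spatial step and step count
half_lin = np.exp(1j * (beta2 / 2) * w**2 * dz / 2)  # half-step dispersion
for _ in range(nz):
    u = np.fft.ifft(half_lin * np.fft.fft(u))        # linear half step
    u *= np.exp(1j * gamma * np.abs(u) ** 2 * dz)    # full nonlinear step
    u = np.fft.ifft(half_lin * np.fft.fft(u))        # linear half step
print("peak preserved:", np.abs(u).max())            # ~1 for the soliton
```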

  19. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy, KB filter, and optimal interpolation (OI) filters are examined for effectiveness in performance as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.

  20. Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design

    NASA Astrophysics Data System (ADS)

    Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa

    2013-09-01

    This article presents a control strategy for a three-phase grid-interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics injected into the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. Startup transients can also be controlled by the proposed control strategy; in addition, the optimal LCL filter exhibits lower conduction and switching copper losses as well as lower core losses. A trade-off has been made between the total losses in the LCL filter and the Total Harmonic Distortion (THD%) of the grid current, and the filter inductors have been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment and the results are analyzed.
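
    One concrete step in LCL design is checking that the filter's resonance falls in a safe band between the grid frequency and the switching frequency (a common rule of thumb: 10*f_grid < f_res < f_sw/2). A back-of-the-envelope sketch with assumed component values, not the article's design:

```python
# LCL resonance-frequency sanity check (illustrative component values).
import math

L1, L2, Cf = 2.0e-3, 0.5e-3, 10e-6      # inverter-side L, grid-side L, C
f_res = 1 / (2 * math.pi) * math.sqrt((L1 + L2) / (L1 * L2 * Cf))
f_grid, f_sw = 50.0, 10e3               # grid and switching frequencies (Hz)
print(f"f_res = {f_res:.0f} Hz")
assert 10 * f_grid < f_res < f_sw / 2, "resonance outside recommended band"
```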

  1. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    SciTech Connect

    Singer, M A; Wang, S L; Diachin, D P

    2009-12-03

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  2. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.

  3. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

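    The flavor of the saving can be illustrated by keeping the transition matrix sparse during the covariance time update; the 15-state structure and couplings below are a generic INS-style assumption, not the paper's derived model:

```python
# Covariance time update F P F^T + Q with a sparse transition matrix,
# avoiding dense O(n^3) work when F is mostly zeros (assumed structure).
import numpy as np
import scipy.sparse as sp

n, dt = 15, 0.01
F = sp.eye(n, format="lil")
F[0:3, 3:6] = dt * np.eye(3)            # position <- velocity coupling
F[3:6, 6:9] = dt * np.eye(3)            # velocity <- attitude coupling
F = F.tocsr()                            # compressed sparse row for products

P = np.eye(n)
Q = 1e-6 * np.eye(n)
for _ in range(1000):                    # time-update loop only
    P = (F @ (F @ P).T).T + Q            # F P F^T using sparse-dense products
print("trace of propagated covariance:", np.trace(P))
```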

  5. An optimal linear filter for the reduction of noise superimposed to the EEG signal.

    PubMed

    Bartoli, F; Cerutti, S

    1983-10-01

    In the present paper a procedure for the reduction of superimposed noise on EEG tracings is described, which makes use of linear digital filtering and identification methods. In particular, an optimal filter (a Kalman filter) has been developed to capture the disturbances of electromyographic noise, on the basis of an a priori model that treats the noise-generating mechanism as a series of impulses whose temporal occurrence follows a Poisson distribution. The experimental results refer to EEG tracings recorded from 20 patients in normal resting conditions: the procedure consists of a preprocessing phase (which also uses a low-pass FIR digital filter), followed by the implementation of the identification and the Kalman filter. The performance of the filters is satisfactory also from the clinical standpoint: a marked reduction of noise is obtained without distorting the useful information contained in the signal. Furthermore, with the introduced method the EEG signal-generating mechanism is parametrized as AR/ARMA models, yielding an extremely sensitive feature extraction with interesting and not yet completely studied pathophysiological meanings. The above procedure may find general application in noise reduction and in the enhancement of the information contained in a wide range of biological signals. PMID:6632838

  6. Optimal design of 2D digital filters based on neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-hua; He, Yi-gang; Zheng, Zhe-zhao; Zhang, Xu-hong

    2005-02-01

    Two-dimensional (2-D) digital filters are widely used in image processing and other 2-D digital signal processing fields, but designing 2-D filters is much more difficult than designing one-dimensional (1-D) ones. In this paper, a new approach for designing linear-phase 2-D digital filters is described, which is based on a new neural networks algorithm (NNA). By using the symmetry of the given 2-D magnitude specification, a compact expression for the magnitude response of a linear-phase 2-D finite impulse response (FIR) filter is derived. Consequently, the problem of designing an optimal linear-phase 2-D FIR digital filter is turned into approximating the desired 2-D magnitude response by this compact expression. To solve the problem, a new NNA is presented based on minimizing the mean-squared error, and a convergence theorem is presented and proved to ensure that the designed 2-D filter is stable. Three design examples are also given to illustrate the effectiveness of the NNA-based design approach.
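
    A small sketch of the design problem the abstract sets up, assuming quadrantal symmetry so the magnitude response reduces to a cosine expansion; the paper minimizes the same mean-squared error with a neural-network algorithm, whereas this illustration uses plain gradient descent, and the grid size, filter orders, and learning rate are hypothetical.

```python
import numpy as np

# Desired magnitude response: circularly symmetric lowpass on a coarse grid.
M = N = 4                                   # half-orders of the 2-D FIR filter
w = np.linspace(0.0, np.pi, 32)
W1, W2 = np.meshgrid(w, w)
D = (np.sqrt(W1**2 + W2**2) <= 0.5 * np.pi).astype(float)

# Quadrantal symmetry: H(w1, w2) = sum_{m,n} a[m,n] cos(m*w1) cos(n*w2)
basis = np.stack([np.cos(m * W1) * np.cos(n * W2)
                  for m in range(M + 1) for n in range(N + 1)])

a = np.zeros(basis.shape[0])
for _ in range(3000):                       # gradient descent on the MSE
    err = np.tensordot(a, basis, axes=1) - D
    grad = 2.0 * np.tensordot(basis, err, axes=([1, 2], [0, 1])) / err.size
    a -= 0.1 * grad
print("final MSE:", np.mean((np.tensordot(a, basis, axes=1) - D) ** 2))
```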

  7. Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization.

    PubMed

    Mirzaalian, Hengameh; Lee, Tim K; Hamarneh, Ghassan

    2014-12-01

    Hair occlusion is one of the main challenges facing automatic lesion segmentation and feature extraction for skin cancer applications. We propose a novel method for simultaneously enhancing both light and dark hairs with variable widths, from dermoscopic images, without the prior knowledge of the hair color. We measure hair tubularness using a quaternion color curvature filter. We extract optimal hair features (tubularness, scale, and orientation) using Markov random field theory and multilabel optimization. We also develop a novel dual-channel matched filter to enhance hair pixels in the dermoscopic images while suppressing irrelevant skin pixels. We evaluate the hair enhancement capabilities of our method on hair-occluded images generated via our new hair simulation algorithm. Since hair enhancement is an intermediate step in a computer-aided diagnosis system for analyzing dermoscopic images, we validate our method and compare it to other methods by studying its effect on: 1) hair segmentation accuracy; 2) image inpainting quality; and 3) image classification accuracy. The validation results on 40 real clinical dermoscopic images and 94 synthetic data demonstrate that our approach outperforms competing hair enhancement methods. PMID:25312927

  8. Global localization of 3D anatomical structures by pre-filtered Hough forests and discrete optimization.

    PubMed

    Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg

    2013-12-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450

  9. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method.

    PubMed

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-12-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8-2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75-150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362
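
    A toy sketch of the step-wise filtering idea on a regularly sampled dive profile; the rate and variance thresholds are hypothetical, and the paper's actual algorithm works on compressed, abstracted profiles with knowledge-based criteria.

```python
import numpy as np

def find_drift_segments(t, depth, max_rate=0.6, max_std=0.05, min_len=8):
    """Flag dive fragments whose vertical rate is slow and nearly constant,
    consistent with passive drifting (t in s, depth in m, positive down)."""
    rate = np.gradient(depth, t)                   # vertical speed, m/s
    candidate = np.abs(rate) < max_rate            # step 1: slow movement
    segments, start = [], None
    for i, ok in enumerate(np.append(candidate, False)):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            # step 2: keep only long, low-variance fragments
            if i - start >= min_len and np.std(rate[start:i]) < max_std:
                segments.append((start, i, float(np.median(rate[start:i]))))
            start = None
    return segments    # (start_idx, end_idx, drift rate in m/s) per fragment
```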

  10. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method

    PubMed Central

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-01-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e. when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8–2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75–150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  11. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments, where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate-change ratios, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter done in polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR are explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of a multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated, and its implementation results in several targeted FPGA devices are summarized in terms of functional (bit width, fixed-point error) and performance (timing closure, resource usage, and power estimation) parameters.
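
    A scalar-code sketch of the thread-decomposition view for a decimate-by-M FIR: each output is one independent finite convolution ("thread"), in contrast to a polyphase restructuring of the filter. The names and the correctness check are illustrative, not SMAP flight code.

```python
import numpy as np

def decimating_fir_threads(x, h, M):
    """Each y[k] is an independent dot product over its own slice of x,
    so the k-loop maps directly onto parallel hardware threads."""
    L = len(h)
    n_out = (len(x) - L) // M + 1
    y = np.empty(n_out)
    h_rev = h[::-1]
    for k in range(n_out):                 # one "thread" per output sample
        y[k] = np.dot(h_rev, x[k * M : k * M + L])
    return y

x, h, M = np.random.randn(64), np.random.randn(8), 4
ref = np.convolve(x, h, mode="valid")[::M]     # conventional reference
assert np.allclose(decimating_fir_threads(x, h, M), ref)
```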

  12. Two-step fringe pattern analysis with a Gabor filter bank

    NASA Astrophysics Data System (ADS)

    Rivera, Mariano; Dalmau, Oscar; Gonzalez, Adonai; Hernandez-Lopez, Francisco

    2016-10-01

    We propose a two-shot fringe analysis method for fringe patterns (FPs) with random phase shifts and changes in illumination components. These conditions reduce the acquisition time and simplify the experimental setup. Our method builds upon a Gabor filter (GF) bank that eliminates noise and estimates the phase from the FPs. The GF bank allows us to obtain two phase maps with a sign ambiguity between them. Because the random sign map is common to both computed phases, we can correct the sign ambiguity. We estimate a local phase shift from the absolute wrapped residual between the estimated phases, and then robustly compute the global phase shift. In order to unwrap the phase, we propose a robust procedure that interpolates unreliable phase regions obtained after applying the GF bank. We present numerical experiments that demonstrate the performance of our method.

  13. A two-step crushed lava rock filter unit for grey water treatment at household level in an urban slum.

    PubMed

    Katukiza, A Y; Ronteltap, M; Niwagaba, C B; Kansiime, F; Lens, P N L

    2014-01-15

    Decentralised grey water treatment in urban slums using low-cost and robust technologies offers opportunities to minimise public health risks and to reduce environmental pollution caused by the highly polluted grey water i.e. with a COD and N concentration of 3000-6000 mg L(-1) and 30-40 mg L(-1), respectively. However, there has been very limited action research to reduce the pollution load from uncontrolled grey water discharge by households in urban slums. This study was therefore carried out to investigate the potential of a two-step filtration process to reduce the grey water pollution load in an urban slum using a crushed lava rock filter, to determine the main filter design and operation parameters and the effect of intermittent flow on the grey water effluent quality. A two-step crushed lava rock filter unit was designed and implemented for use by a household in the Bwaise III slum in Kampala city (Uganda). It was monitored at a varying hydraulic loading rate (HLR) of 0.5-1.1 m d(-1) as well as at a constant HLR of 0.39 m d(-1). The removal efficiencies of COD, TP and TKN were, respectively, 85.9%, 58% and 65.5% under a varying HLR and 90.5%, 59.5% and 69%, when operating at a constant HLR regime. In addition, the log removal of Escherichia coli, Salmonella spp. and total coliforms was, respectively, 3.8, 3.2 and 3.9 under the varying HLR and 3.9, 3.5 and 3.9 at a constant HLR. The results show that the use of a two-step filtration process as well as a lower constant HLR increased the pollutant removal efficiencies. Further research is needed to investigate the feasibility of adding a tertiary treatment step to increase the nutrients and microorganisms removal from grey water. PMID:24388927
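
    The reported efficiencies follow from simple influent/effluent arithmetic; a sketch with illustrative numbers chosen only to reproduce figures of the same magnitude as those in the abstract (the actual effluent concentrations are not given there).

```python
import math

def removal_pct(c_in, c_out):
    """Percent removal of a chemical parameter (e.g. COD, TP, TKN)."""
    return 100.0 * (c_in - c_out) / c_in

def log_removal(n_in, n_out):
    """Log10 removal of an organism count (e.g. E. coli per 100 mL)."""
    return math.log10(n_in / n_out)

# A 3000 mg/L COD influent reduced to 285 mg/L is 90.5% removal,
# and 1e6 -> 125 organisms is a ~3.9 log removal.
print(removal_pct(3000, 285))              # 90.5
print(round(log_removal(1e6, 125), 1))     # 3.9
```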

  14. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy.

    PubMed

    Cai, Jiandong; Wang, Michael Yu; Zhang, Li

    2015-12-01

    In multifrequency atomic force microscopy (AFM), a probe's characteristic of assigning resonance frequencies to integer harmonics results in a remarkable improvement of detection sensitivity at specific harmonic components. The selection criterion for the harmonic order is based on the sensitivity of its amplitude to material properties, e.g., elasticity. Previous approaches to designing harmonic probes are unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a step cross section, which has variable widths in the top and bottom steps, while the middle step of the cross section is kept constant. Higher-order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using a focused ion beam milling technique, and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement. PMID:26724066

  15. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy

    SciTech Connect

    Cai, Jiandong; Zhang, Li; Wang, Michael Yu

    2015-12-15

    In multifrequency atomic force microscopy (AFM), a probe’s characteristic of assigning resonance frequencies to integer harmonics results in a remarkable improvement of detection sensitivity at specific harmonic components. The selection criterion for the harmonic order is based on the sensitivity of its amplitude to material properties, e.g., elasticity. Previous approaches to designing harmonic probes are unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a step cross section, which has variable widths in the top and bottom steps, while the middle step of the cross section is kept constant. Higher-order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using a focused ion beam milling technique, and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement.

  16. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit

    NASA Astrophysics Data System (ADS)

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-01

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.

  17. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit.

    PubMed

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-29

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss. PMID:19516623

  18. [Optimization of one-step pelletization technology of Jiuwei Xifeng granules by response surface methodology].

    PubMed

    Wang, Xiu-hai; Yang, Xu-fang; Fan, Ye-wen; Zhang, Yan-jun; Xu, Zhong-kun; Yang, Lin-yong; Wang, Zhen-zhong; Xiao, Wei

    2014-12-01

    Using the qualified rate of particles as the evaluation index, the impact factors of the one-step pelletization technology of Jiuwei Xifeng granules were screened from six factors by the Plackett-Burman experimental design, and the levels of the non-significant factors were identified. Based on the Plackett-Burman results, and choosing the qualified rate of particles and the angle of repose as the evaluation indexes, three levels of the three significant factors were selected for a Box-Behnken central composite design to optimize the process. The best conditions were as follows: the fluid extract was sprayed at a frequency of 29 r·min-1, the inlet air temperature was 90 °C, and the fan frequency was 34 Hz. Under the scheme optimized by response surface methodology, the average experimental results were similar to the predicted values, and response surface methodology could be used in the optimization of one-step pelletization for Chinese materia medica. PMID:25898578
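
    A sketch of the response-surface step, assuming coded factor levels and the full second-order model that is standard for Box-Behnken designs; the design matrix and responses would come from the actual runs, and all names here are placeholders.

```python
import numpy as np
from itertools import combinations

def fit_quadratic_response(X, y):
    """Least-squares fit of y = b0 + sum(bi*xi) + sum(bij*xi*xj)
    + sum(bii*xi^2) for k coded factors; X is (n_runs, k) in {-1, 0, +1}."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                # linear
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]  # 2-way
    cols += [X[:, i] ** 2 for i in range(k)]                           # quadratic
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta   # the fitted surface is then searched for the optimum
```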

  19. Design and optimization of stepped austempered ductile iron using characterization techniques

    SciTech Connect

    Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.

    2013-09-15

    Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. Austenitization and conventional austempering time was selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and high carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering in contrast to conventional austempering, on which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused important reinforcement effect. • Carbide precipitation was absent during stepped treatment.

  20. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated by tips with different scanning radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548

  1. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  2. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  3. Digital restoration of indium-111 and iodine-123 SPECT images with optimized Metz filters

    SciTech Connect

    King, M.A.; Schwinger, R.B.; Penney, B.C.; Doherty, P.W.; Bianco, J.A.

    1986-08-01

    A number of radiopharmaceuticals of great current clinical interest for imaging are labeled with radionuclides that emit medium- to high-energy photons either as their primary radiation, or in low abundance in addition to their primary radiation. The imaging characteristics of these radionuclides result in gamma camera image quality that is inferior to that of 99mTc images. Thus, in this investigation 111In and 123I contaminated with approximately 4% 124I were chosen to test the hypothesis that a dramatic improvement in planar and SPECT images may be obtainable with digital image restoration. The count-dependent Metz filter is shown to be able to deconvolve the rapid drop at low spatial frequencies in the imaging system modulation transfer function (MTF) resulting from the acceptance of septal penetration and scatter in the camera window. Use of the Metz filter was found to result in improved spatial resolution as measured by both the full width at half maximum and full width at tenth maximum for both planar and SPECT studies. Two-dimensional, prereconstruction filtering with optimized Metz filters was also determined to improve image contrast, while decreasing the noise level for SPECT studies. A dramatic improvement in image quality was observed with the clinical application of this filter to SPECT imaging.
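
    A sketch of the Metz restoration filter in the frequency domain, using its standard form M(f) = [1 - (1 - MTF(f)^2)^n] / MTF(f); the Gaussian MTF model and the order value are illustrative, whereas the count-dependent version tunes the order to the image count density.

```python
import numpy as np

def metz_filter(mtf, order):
    """Approximates the inverse filter where the MTF is high and rolls
    off smoothly where it is low, limiting noise amplification."""
    mtf = np.clip(mtf, 1e-6, 1.0)
    return (1.0 - (1.0 - mtf**2) ** order) / mtf

f = np.linspace(0.0, 0.5, 128)            # spatial frequency, cycles/pixel
mtf = np.exp(-(f / 0.15) ** 2)            # toy Gaussian system MTF
m = metz_filter(mtf, order=3.0)
# In use, the 2-D FFT of each image (or projection) is multiplied by the
# radially mapped filter before reconstruction.
```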

  4. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
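
    A sketch of the peak-to-correlation energy metric named above, evaluated on an FFT-based correlation plane; the test images are synthetic, and the BPAF analyzer-angle sweep itself is not reproduced here.

```python
import numpy as np

def pce(corr):
    """Peak-to-correlation energy: squared peak over total plane energy."""
    return np.max(np.abs(corr))**2 / np.sum(np.abs(corr)**2)

img = np.random.rand(64, 64)
ref = np.roll(img, (5, 3), axis=(0, 1))       # in-class, shifted target
corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref)))
print("PCE:", pce(corr))
# Filter design then trades PCE against SNR and discrimination by sweeping
# the analyzer angle and re-evaluating these metrics for each candidate.
```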

  5. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.

  6. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy.

    PubMed

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration. PMID:25950644

  7. Optimal color filter array design: quantitative conditions and an efficient search procedure

    NASA Astrophysics Data System (ADS)

    Lu, Yue M.; Vetterli, Martin

    2009-01-01

    Most digital cameras employ a spatial subsampling process, implemented as a color filter array (CFA), to capture color images. The choice of CFA patterns has a great impact on the performance of subsequent reconstruction (demosaicking) algorithms. In this work, we propose a quantitative theory for optimal CFA design. We view the CFA sampling process as an encoding (low-dimensional approximation) operation and, correspondingly, demosaicking as the best decoding (reconstruction) operation. Finding the optimal CFA is thus equivalent to finding the optimal approximation scheme for the original signals with minimum information loss. We present several quantitative conditions for optimal CFA design, and propose an efficient computational procedure to search for the best CFAs that satisfy these conditions. Numerical experiments show that the optimal CFA patterns designed from the proposed procedure can effectively retain the information of the original full-color images. In particular, with the designed CFA patterns, high quality demosaicking can be achieved by using simple and efficient linear filtering operations in the polyphase domain. The visual qualities of the reconstructed images are competitive to those obtained by the state-of-the-art adaptive demosaicking algorithms based on the Bayer pattern.

  8. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters.

    PubMed

    Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T

    2016-06-01

    Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters. PMID:27247483

  9. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies in the literature, without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for the stomatal conductance formulation. These parameters are optimized through assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome this problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost function, and the coefficient of determination (R2) and root-mean-square deviation (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that

  10. Implicit application of polynomial filters in a k-step Arnoldi method

    NASA Technical Reports Server (NTRS)

    Sorensen, D. C.

    1990-01-01

    The Arnoldi process is a well known technique for approximating a few eigenvalues and corresponding eigenvectors of a general square matrix. Numerical difficulties such as loss of orthogonality and assessment of the numerical quality of the approximations as well as a potential for unbounded growth in storage have limited the applicability of the method. These issues are addressed by fixing the number of steps in the Arnoldi process at a prescribed value k and then treating the residual vector as a function of the initial Arnoldi vector. This starting vector is then updated through an iterative scheme that is designed to force convergence of the residual to zero. The iterative scheme is shown to be a truncation of the standard implicitly shifted QR-iteration for dense problems and it avoids the need to explicitly restart the Arnoldi sequence. The main emphasis of this paper is on the derivation and analysis of this scheme. However, there are obvious ways to exploit parallelism through the matrix-vector operations that comprise the majority of the work in the algorithm. Preliminary computational results are given for a few problems on some parallel and vector computers.
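
    A sketch of the k-step Arnoldi factorization A V_k = V_k H_k + f_k e_k^T that the scheme builds on; the implicit QR filtering of the starting vector is omitted, and the dense test matrix is only for illustration.

```python
import numpy as np

def arnoldi(A, v0, k):
    """k-step Arnoldi with modified Gram-Schmidt orthogonalization."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):               # orthogonalize against V
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # exact invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H    # a restart would now polynomial-filter v0 and repeat

A = np.diag(np.arange(1.0, 101.0))
V, H = arnoldi(A, np.random.rand(100), 20)
ritz = np.sort(np.linalg.eigvals(H[:20, :20]).real)
print(ritz[-3:])    # Ritz values approximating the largest eigenvalues
```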

  11. An optimized item-based collaborative filtering recommendation algorithm based on item genre prediction

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jia

    2009-07-01

    With the fast development of the Internet, many systems have emerged in e-commerce applications to support product recommendation. Collaborative filtering is one of the most promising techniques in recommender systems, providing personalized recommendations to users based on their previously expressed preferences in the form of ratings and those of other similar users. In practice, as the numbers of users and items grow, user-item ratings become extremely sparse, and recommender systems utilizing traditional collaborative filtering face serious challenges. To address this issue, this paper presents an approach that computes item genre similarity by mapping each item to a corresponding descriptive genre and computing the similarity between genres; basic predictions are then made according to those similarities to lower the sparsity of the user-item ratings. After that, item-based collaborative filtering steps are taken to generate predictions. Compared with previous methods, the presented collaborative filtering approach employing item genre similarity can alleviate the sparsity issue in recommender systems and improve the accuracy of recommendation.
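
    A minimal sketch of the two ingredients the abstract combines: a genre-based item similarity used to pre-fill the sparse ratings, and a standard item-based weighted prediction. Matrix layouts and names are assumptions, not the paper's notation.

```python
import numpy as np

def genre_similarity(genres):
    """Cosine similarity between items' binary genre vectors
    (genres: items x genres, each item having at least one genre)."""
    g = genres / np.linalg.norm(genres, axis=1, keepdims=True)
    return g @ g.T

def predict_rating(ratings, sim, user, item):
    """Item-based prediction: similarity-weighted mean of the user's
    existing ratings (0 marks an unrated item)."""
    rated = ratings[user] > 0
    w = sim[item, rated]
    return np.dot(w, ratings[user, rated]) / w.sum() if w.sum() else np.nan
```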

  12. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647

  13. Optimization of single-step tapering amplitude and energy detuning for high-gain FELs

    NASA Astrophysics Data System (ADS)

    Li, He-Ting; Jia, Qi-Ka

    2015-01-01

    We put forward a method to optimize the single-step tapering amplitude of the undulator strength and the initial energy detuning of the electron beam to maximize the saturation power of high-gain free-electron lasers (FELs), based on the physics of the longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the Linac Coherent Light Source and the TESLA Test Facility.

  14. Combining segment generation with direct step-and-shoot optimization in intensity-modulated radiation therapy

    SciTech Connect

    Carlsson, Fredrik

    2008-09-15

    A method for generating a sequence of intensity-modulated radiation therapy step-and-shoot plans with an increasing number of segments is presented. The objectives are to generate high-quality plans with few, large and regular segments, and to make the planning process more intuitive. The proposed method combines segment generation with direct step-and-shoot optimization, where leaf positions and segment weights are optimized simultaneously. The segment generation is based on a column generation approach. The method is evaluated on a test suite consisting of five head-and-neck cases and five prostate cases, planned for delivery with an Elekta SLi accelerator. The adjustment of segment shapes by direct step-and-shoot optimization improves the plan quality compared to using fixed segment shapes. The improvement in plan quality when adding segments is larger for plans with few segments. Eventually, adding more segments contributes very little to the plan quality, but increases the plan complexity. Thus, the method provides a tool for controlling the number of segments and, indirectly, the delivery time. This can support the planner in finding a sound trade-off between plan quality and treatment complexity.

  15. Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario

    NASA Astrophysics Data System (ADS)

    Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.

    2009-12-01

    Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.

  16. Novel tools for stepping source brachytherapy treatment planning: Enhanced geometrical optimization and interactive inverse planning

    SciTech Connect

    Dinkla, Anna M. Laarse, Rob van der; Koedooder, Kees; Petra Kok, H.; Wieringen, Niek van; Pieters, Bradley R.; Bel, Arjan

    2015-01-15

    Purpose: Dose optimization for stepping source brachytherapy can nowadays be performed using automated inverse algorithms. Although much quicker than graphical optimization, an experienced treatment planner is required for both methods. With automated inverse algorithms, the procedure to achieve the desired dose distribution is often based on trial-and-error. Methods: A new approach for stepping source prostate brachytherapy treatment planning was developed as a quick and user-friendly alternative. This approach consists of the combined use of two novel tools: Enhanced geometrical optimization (EGO) and interactive inverse planning (IIP). EGO is an extended version of the common geometrical optimization method and is applied to create a dose distribution as homogeneous as possible. With the second tool, IIP, this dose distribution is tailored to a specific patient anatomy by interactively changing the highest and lowest dose on the contours. Results: The combined use of EGO–IIP was evaluated on 24 prostate cancer patients, by having an inexperienced user create treatment plans compliant with clinical dose objectives. This user was able to create dose plans for the 24 patients in an average time of 4.4 min/patient. An experienced treatment planner without extensive training in EGO–IIP also created 24 plans. The resulting dose-volume histogram parameters were comparable to the clinical plans and showed high conformance to clinical standards. Conclusions: Even for an inexperienced user, treatment planning with EGO–IIP for stepping source prostate brachytherapy is feasible as an alternative to current optimization algorithms, offering speed, simplicity for the user, and local control of the dose levels.

  17. Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation

    NASA Astrophysics Data System (ADS)

    Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao

    2015-12-01

    Spatial joins among Web Feature Services (WFSs) are time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. Recursive partitioning exploits the data skew of sub-areas to reduce data transmission using spatial semi-joins. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
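
    A toy sketch of the plan-selection rule, assuming the filtering rate of each sub-area has already been estimated from partition statistics; the threshold value is hypothetical.

```python
def choose_plan(filtering_rate, threshold=0.4):
    """A spatial semi-join only pays off if it filters out enough
    non-candidates to offset the extra round trip to the WFS."""
    return "semi-join" if filtering_rate >= threshold else "direct join"

# Recursive partitioning estimates a filtering rate per sub-area (e.g. from
# bounding-box overlap statistics) and picks a plan for each one:
for area, rate in [("A1", 0.72), ("A2", 0.15), ("A3", 0.41)]:
    print(area, choose_plan(rate))
```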

  18. Transdermal film-loaded finasteride microplates to enhance drug skin permeation: Two-step optimization study.

    PubMed

    Ahmed, Tarek A; El-Say, Khalid M

    2016-06-10

    The goal was to develop an optimized transdermal finasteride (FNS) film loaded with drug microplates (MIC), utilizing two-step optimization, to decrease the dosing schedule and the inconsistency in gastrointestinal absorption. First, a 3-level factorial design was implemented to prepare optimized FNS-MIC of minimum particle size. Second, a Box-Behnken design matrix was used to develop an optimized transdermal FNS-MIC film. Interaction among MIC components was studied using physicochemical characterization tools. Film components, namely hydroxypropyl methyl cellulose (X1), dimethyl sulfoxide (X2) and propylene glycol (X3), were optimized for their effects on the film thickness (Y1) and elongation percent (Y2), and on the FNS steady-state flux (Y3), permeability coefficient (Y4) and diffusion coefficient (Y5) following ex-vivo permeation through rat skin. The morphology of the optimized MIC and transdermal film was also investigated. Results revealed that stabilizer concentration and anti-solvent percent significantly affected the MIC formulation. Optimized FNS-MIC of particle size 0.93 μm was successfully prepared, with no interaction observed among its components. An enhancement in the aqueous solubility of FNS-MIC by more than 23% was achieved. All the studied variables, and most of their interaction and quadratic effects, significantly affected the studied responses (Y1-Y5). Morphological observation illustrated non-spherical, short-rod, flake-like small plates that were homogeneously distributed in the optimized transdermal film. The ex-vivo study showed enhanced FNS permeation from the MIC-loaded film when compared to that containing the pure drug. MIC formulation is thus a successful technique to enhance the aqueous solubility and skin permeation of poorly water-soluble drugs, especially when loaded into transdermal films. PMID:26993962

  19. Optimal discrete-time H∞/γ0 filtering and control under unknown covariances

    NASA Astrophysics Data System (ADS)

    Kogan, Mark M.

    2016-04-01

    New stochastic γ0 and mixed H∞/γ0 filtering and control problems for discrete-time systems under completely unknown covariances are introduced and solved. The performance measure γ0 is the worst-case steady-state averaged variance of the error signal in response to the stationary Gaussian white zero-mean disturbance with unknown covariance and identity variance. The performance measure H∞/γ0 is the worst-case power norm of the error signal in response to two input disturbances in different channels, one of which is the deterministic signal with a bounded energy and the other is the stationary Gaussian white zero-mean signal with a bounded variance provided the weighting sum of disturbance powers equals one. In this framework, it is possible to consider at the same time both deterministic and stochastic disturbances highlighting their mutual effects. Our main results provide the complete characterisations of the above performance measures in terms of linear matrix inequalities and therefore both the γ0 and H∞/γ0 optimal filters and controllers can be computed by convex programming. H∞/γ0 optimal solution is shown to be actually a trade-off between optimal solutions to the H∞ and γ0 problems for the corresponding channels.

  20. Optimized model of oriented-line-target detection using vertical and horizontal filters

    NASA Astrophysics Data System (ADS)

    Westland, Stephen; Foster, David H.

    1995-08-01

    A line-element target differing sufficiently in orientation from a background of line elements can be visually detected easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from (1) two classes of anisotropic filters (vertical and horizontal), (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required to provide an accurate description of psychophysical data on orientation increment thresholds.

  1. Facile, green and clean one-step synthesis of carbon dots from wool: Application as a sensor for glyphosate detection based on the inner filter effect.

    PubMed

    Wang, Long; Bi, Yidan; Hou, Juan; Li, Huiyu; Xu, Yuan; Wang, Bo; Ding, Hong; Ding, Lan

    2016-11-01

    In this work, we reported a green route for the fabrication of fluorescent carbon dots (CDs). Wool, a kind of nontoxic and natural raw material, was chosen as the precursor to prepare CDs via a one-step microwave-assisted pyrolysis process. Compared with previously reported methods for the preparation of CDs based on biomass materials, this method was simple, facile and free of any additives, such as acids, bases, or salts, which avoids the complicated post-treatment process to purify the CDs. The CDs have a high quantum yield (16.3%) and their fluorescence could be quenched by silver nanoparticles (AgNPs) based on the inner filter effect (IFE). The presence of glyphosate could induce the aggregation of AgNPs and thus result in the fluorescence recovery of the quenched CDs. Based on this phenomenon, we constructed a fluorescence system (CDs/AgNPs) for the determination of glyphosate. Under the optimized conditions, the fluorescence intensity of the CDs/AgNPs system was proportional to the concentration of glyphosate in the range of 0.025-2.5 μg mL(-1), with a detection limit of 12 ng mL(-1). Furthermore, the established method has been successfully used for glyphosate detection in cereal samples with satisfactory results. PMID:27591613
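
    A sketch of the calibration arithmetic behind a linear range and detection limit of this kind, using made-up calibration points; the paper's data and its exact LOD convention are not reproduced.

```python
import numpy as np

conc = np.array([0.025, 0.1, 0.5, 1.0, 2.5])      # glyphosate, ug/mL (made up)
f_rec = np.array([0.05, 0.17, 0.74, 1.41, 3.44])  # recovered fluorescence, a.u.

slope, intercept = np.polyfit(conc, f_rec, 1)     # fit of the linear range
resid = f_rec - (slope * conc + intercept)
sd = resid.std(ddof=2)                            # stand-in for the blank SD
lod = 3.3 * sd / slope                            # 3.3*sigma/slope convention
print(f"LOD ~ {lod * 1000:.0f} ng/mL")
```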

  2. Energetic optimization of ion conduction rate by the K+ selectivity filter

    NASA Astrophysics Data System (ADS)

    Morais-Cabral, João H.; Zhou, Yufeng; MacKinnon, Roderick

    2001-11-01

    The K+ selectivity filter catalyses the dehydration, transfer and rehydration of a K+ ion in about ten nanoseconds. This physical process is central to the production of electrical signals in biology. Here we show how nearly diffusion-limited rates are achieved, by analysing ion conduction and the corresponding crystallographic ion distribution in the selectivity filter of the KcsA K+ channel. Measurements with K+ and its slightly larger analogue, Rb+, lead us to conclude that the selectivity filter usually contains two K+ ions separated by one water molecule. The two ions move in a concerted fashion between two configurations, K+-water-K+-water (1,3 configuration) and water-K+-water-K+ (2,4 configuration), until a third ion enters, displacing the ion on the opposite side of the queue. For K+, the energy difference between the 1,3 and 2,4 configurations is close to zero, the condition of maximum conduction rate. The energetic balance between these configurations is a clear example of evolutionary optimization of protein function.

  3. Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2016-04-01

    This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise assumption. It jointly estimates moments, including mean and covariance, using a non-local technique. By exploiting similarities between image patches, the method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed to account for spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, using the coherence magnitudes as weights, the mean of each cluster is estimated with a weighted mean to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method performs on par with, or better than, the non-local interferometric SAR (NL-InSAR) method.
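
    For orientation, a pixelwise local Wiener (LMMSE) shrinkage of the kind PLOW generalizes can be sketched in a few lines; this toy version treats the phase as a real-valued image with windowed moments, whereas the actual method works patch-wise with coherence-based clustering and non-local moment estimation.

    ```python
    # Pixelwise local Wiener (LMMSE) shrinkage on a noisy real-valued image:
    # x_hat = mu + max(var - noise_var, 0) / var * (y - mu), windowed moments.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_wiener(noisy, noise_var, win=7):
        mu = uniform_filter(noisy, size=win)         # local mean
        mu2 = uniform_filter(noisy ** 2, size=win)   # local second moment
        var = np.maximum(mu2 - mu ** 2, 1e-12)       # local variance
        gain = np.maximum(var - noise_var, 0.0) / var
        return mu + gain * (noisy - mu)

    rng = np.random.default_rng(0)
    ramp = np.tile(np.linspace(-np.pi, np.pi, 256), (256, 1))  # phase-like image
    denoised = local_wiener(ramp + rng.normal(0, 0.3, ramp.shape), 0.3 ** 2)
    ```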

  4. Optimization of Signal Decomposition Matched Filtering (SDMF) for Improved Detection of Copy-Number Variations.

    PubMed

    Stamoulis, Catherine; Betensky, Rebecca A

    2016-01-01

    We aim to improve the performance of the previously proposed signal decomposition matched filtering (SDMF) method [26] for the detection of copy-number variations (CNV) in the human genome. Through simulations, we show that the modified SDMF is robust even at high noise levels and outperforms the original SDMF method, which indirectly depends on CNV frequency. Simulations are also used to develop a systematic approach for selecting relevant parameter thresholds in order to optimize sensitivity, specificity, and computational efficiency. We apply the modified method to array CGH data from normal samples in The Cancer Genome Atlas (TCGA) and compare the detected CNVs to those estimated using circular binary segmentation (CBS) [19], a hidden Markov model (HMM)-based approach [11], and a subset of CNVs in the Database of Genomic Variants. We show that a substantial number of previously identified CNVs are detected by the optimized SDMF, which also outperforms the other two methods. PMID:27295643
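
    A bare-bones matched-filter pass over a log2-ratio profile conveys the underlying detection step; the template, threshold, and synthetic data below are illustrative only and omit SDMF's signal decomposition and systematic threshold selection.

    ```python
    # Toy matched filter over a synthetic log2-ratio profile: correlate with a
    # unit-energy boxcar template and flag z-scored peaks.
    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.normal(0.0, 0.2, 5000)       # synthetic probe log2 ratios
    signal[2000:2150] += 0.6                   # embedded 150-probe gain

    template = np.ones(150) / np.sqrt(150)     # unit-energy boxcar
    response = np.correlate(signal, template, mode="same")
    z = (response - response.mean()) / response.std()
    print("candidate CNV probes:", np.flatnonzero(z > 5.0))
    ```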

  5. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double-ended and single-ended transition-state optimization algorithms in tandem: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually identified in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920

  6. A simple procedure eliminating multiple optimization steps required in developing multiplex PCR reactions

    SciTech Connect

    Grondin, V.; Roskey, M.; Klinger, K.; Shuber, T.

    1994-09-01

    The PCR technique is one of the most powerful tools in modern molecular genetics and has achieved widespread use in the analysis of genetic diseases. Typically, a region of interest is amplified from genomic DNA or cDNA and examined by various methods of analysis for mutations or polymorphisms. For small genes and transcripts, amplification of single, small regions of DNA is sufficient for analysis. However, when analyzing large genes and transcripts, multiple PCRs may be required to identify the specific mutation or polymorphism of interest. Ever since PCR was shown to simultaneously amplify multiple loci in the human dystrophin gene, multiplex PCR has been established as a general technique. The properties of multiplex PCR make it a useful tool and preferable to simultaneous uniplex PCR in many instances. However, the steps for developing a multiplex PCR can be laborious, with significant difficulty in achieving equimolar amounts of the several different amplicons. We have developed a simple method of primer design that has enabled us to eliminate a number of the standard optimization steps required in developing a multiplex PCR. Sequence-specific oligonucleotide pairs were synthesized for the simultaneous amplification of multiple exons within the CFTR gene. A common non-complementary 20-nucleotide sequence was attached to each primer, thus creating a mixture of primer pairs all containing a universal primer sequence. Multiplex PCR reactions were carried out containing target DNA, a mixture of several chimeric primer pairs, and primers complementary to only the universal portion of the chimeric primers. Following optimization of conditions for the universal primer, only limited optimization was needed for successful multiplex PCR. In contrast, significant optimization of the PCR conditions was needed when pairs of sequence-specific primers were used together without the universal sequence.

  7. Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing

    NASA Astrophysics Data System (ADS)

    Cox, Mitchell A.

    2015-10-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data, which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times, to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by using several ARM Systems-on-Chip in a cluster configuration, allowing aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high-throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
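
    In its textbook form, Optimal Filtering reconstructs the pulse amplitude as a weighted sum of ADC samples, with weights built from the pulse shape and the noise autocorrelation. The sketch below shows that amplitude-only estimator under assumed pulse-shape and white-noise values; the actual ATLAS implementation also constrains pedestal and timing.

    ```python
    # Simplified optimal-filtering amplitude estimate: weights a maximize SNR
    # for a known pulse shape g and noise autocorrelation R (white here).
    import numpy as np

    g = np.array([0.0, 0.2, 0.7, 1.0, 0.7, 0.3, 0.1])  # assumed pulse shape
    R = np.eye(g.size)                                  # assumed white noise
    Rinv_g = np.linalg.solve(R, g)
    a = Rinv_g / (g @ Rinv_g)                           # OF weights (amplitude only)

    samples = 52.0 * g + np.random.default_rng(2).normal(0.0, 1.0, g.size)
    print(f"reconstructed amplitude: {a @ samples:.1f}")
    ```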

  8. First laboratory demonstration of closed-loop Kalman based optimal control for vibration filtering and simplified MCAO

    NASA Astrophysics Data System (ADS)

    Petit, C.; Conan, J.-M.; Kulcsár, C.; Raynaud, H.-F.; Fusco, T.; Montri, J.; Rabaud, D.

    2006-06-01

    Classic Adaptive Optics (AO) is now successfully implemented on a growing number of ground-based imaging systems. Nevertheless, some limitations remain. First, standard AO control laws are unable to easily handle vibrations. In the particular case of eXtreme AO (XAO), which requires a highly efficient AO, these vibrations can thus be highly penalizing. We have previously shown that a Kalman-based control law can provide both an efficient correction of the turbulence and strong vibration filtering. Second, anisoplanatism effects lead to a small corrected field of view. Multi-Conjugate AO (MCAO) is a promising concept that should significantly increase this field of view. We have shown numerically that MCAO correction can be highly improved by optimal control based on a Kalman filter. This article presents the first laboratory demonstration of these two concepts. We use a classic AO bench available at Onera with a deformable mirror (DM) in the pupil and a Shack-Hartmann Wave Front Sensor (WFS) pointing at an on-axis guide star. The turbulence is produced by a rotating phase screen in altitude. First, this AO configuration is used to validate the ability of our control approach to filter out system vibrations and improve the overall performance of the AO closed loop, compared to classic controllers. The consequences for the RTC design of an XAO system are discussed. Then, we optimize the correction for an off-axis star although the WFS still points at the on-axis star. This Off-Axis AO (OAAO) can be seen as a first step towards MCAO or Multi-Object AO in a simplified configuration. It proves the ability of our control law to estimate the turbulence in altitude and correct in the direction of interest. We describe the off-axis correction tests performed in a dynamic mode (closed loop) using our Kalman-based control. We present the evolution of the off-axis correction according to the angular separation between the stars. A highly significant
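
    At the heart of such control laws is the discrete Kalman predict/update recursion. The sketch below tracks a single narrow-band vibration mode with a rotating two-state oscillator model; the frequency, noise levels, and measurement model are invented for the example and do not reproduce the bench's actual turbulence-plus-vibration model.

    ```python
    # Minimal discrete Kalman filter tracking one narrow-band vibration mode
    # with a rotating two-state oscillator model; all values are illustrative.
    import numpy as np

    dt, f0 = 1e-3, 50.0                        # loop period and mode frequency
    w = 2 * np.pi * f0 * dt
    A = np.array([[np.cos(w), np.sin(w)],
                  [-np.sin(w), np.cos(w)]])    # oscillator state transition
    C = np.array([[1.0, 0.0]])                 # WFS-like scalar measurement
    Q, Rm = 1e-6 * np.eye(2), np.array([[1e-2]])

    rng = np.random.default_rng(3)
    x_true, x_hat, P = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
    for _ in range(1000):
        x_true = A @ x_true
        y = C @ x_true + rng.normal(0.0, 0.1, 1)
        x_hat, P = A @ x_hat, A @ P @ A.T + Q                  # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rm)          # gain
        x_hat = x_hat + K @ (y - C @ x_hat)                    # update
        P = (np.eye(2) - K @ C) @ P
    print("vibration estimate:", x_hat)
    ```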

  9. Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.

    PubMed

    Girling, Alan J; Hemming, Karla

    2016-06-15

    In stepped cluster designs the intervention is introduced into some (or all) clusters at different times and persists until the end of the study. Instances include traditional parallel cluster designs and the more recent stepped-wedge designs. We consider the precision offered by such designs under mixed-effects models with fixed time and random subject and cluster effects (including interactions with time), and explore the optimal choice of uptake times. The results apply both to cross-sectional studies where new subjects are observed at each time-point, and longitudinal studies with repeat observations on the same subjects. The efficiency of the design is expressed in terms of a 'cluster-mean correlation' which carries information about the dependency-structure of the data, and two design coefficients which reflect the pattern of uptake-times. In cross-sectional studies the cluster-mean correlation combines information about the cluster-size and the intra-cluster correlation coefficient. A formula is given for the 'design effect' in both cross-sectional and longitudinal studies. An algorithm for optimising the choice of uptake times is described and specific results obtained for the best balanced stepped designs. In large studies we show that the best design is a hybrid mixture of parallel and stepped-wedge components, with the proportion of stepped wedge clusters equal to the cluster-mean correlation. The impact of prior uncertainty in the cluster-mean correlation is considered by simulation. Some specific hybrid designs are proposed for consideration when the cluster-mean correlation cannot be reliably estimated, using a minimax principle to ensure acceptable performance across the whole range of unknown values. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:26748662

  10. Determination Method for Optimal Installation of Active Filters in Distribution Network with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Kawasaki, Shoji; Hayashi, Yasuhiro; Matsuki, Junya; Kikuya, Hirotaka; Hojo, Masahide

    Harmonic problems in distribution networks have recently become a concern against the background of the increasing connection of distributed generation (DG) and the spread of power electronics equipment. As one countermeasure, controlling the harmonic voltage by installing active filters (AFs) has been researched. In this paper, the authors propose a computation method to determine the optimal allocations, gains, and installation number of AFs so as to minimize the maximum value of voltage total harmonic distortion (THD) in a distribution network with DGs. The developed method is based on particle swarm optimization (PSO), one of the nonlinear optimization methods. In particular, the paper considers the case where harmonic voltage or harmonic current arises in a distribution network because many DGs are connected through inverters, and the authors propose a method to determine the optimal allocation and gain of an AF that suppresses harmonics across the whole distribution network. Moreover, the authors propose a method to determine the minimum number of AFs required, taking into account the case where the target level of harmonic suppression cannot be reached with a single AF. To verify the validity and effectiveness of the proposed method, numerical simulations are carried out using an analytical model of a distribution network with DGs.
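
    A bare-bones PSO loop of the kind the method builds on is sketched below, minimizing a made-up surrogate for the maximum THD as a function of three AF gains; the distribution-network harmonic model itself is not reproduced.

    ```python
    # Bare-bones particle swarm optimization minimizing a made-up surrogate
    # for the maximum THD as a function of three active-filter gains.
    import numpy as np

    def max_thd(gains):                        # hypothetical smooth surrogate
        return np.sum((gains - np.array([0.4, 0.7, 0.2])) ** 2)

    rng = np.random.default_rng(4)
    n, dim = 30, 3
    pos = rng.uniform(0.0, 1.0, (n, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([max_thd(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(200):
        r1, r2 = rng.uniform(size=(2, n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([max_thd(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    print("best AF gain vector:", gbest)
    ```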

  11. Rod-filter-field optimization of the J-PARC RF-driven H- ion source

    NASA Astrophysics Data System (ADS)

    Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-01

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz), and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H- ion source with an internal antenna, no established procedure exists for optimizing it. In order to optimize the RFF and establish such a procedure, the beam performance of the J-PARC source with various types of rod-filter magnets (RFMs) was measured. By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the temperature (TPE) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. A 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for TPE around 60°C.

  12. Real-time defect detection of steel wire rods using wavelet filters optimized by univariate dynamic encoding algorithm for searches.

    PubMed

    Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo

    2012-05-01

    We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and flexibility of the wavelet design, we propose the use of the undecimated discrete wavelet transform and the separate design of column and row wavelet filters with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches for the minimum of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939

  13. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
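
    A heavily simplified reading of the selection idea: score candidate tuner sets by the steady-state error covariance of the resulting Kalman filter and keep the best. The sketch below brute-forces subsets of made-up health parameters via the filtering Riccati equation; the published technique constructs a linear combination of health parameters rather than a plain subset.

    ```python
    # Simplified tuner selection: brute-force subsets of health parameters,
    # score each by the steady-state Kalman error covariance from the filter
    # Riccati equation, and keep the subset with the smallest trace. Matrices
    # are invented; the real method optimizes a linear combination instead.
    import itertools
    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(5)
    n, m = 6, 3                                # 6 health params, 3 sensors
    A = 0.95 * np.eye(n)
    C = rng.normal(size=(m, n))
    Q, R = 0.01 * np.eye(n), 0.1 * np.eye(m)

    best = None
    for subset in itertools.combinations(range(n), m):
        idx = list(subset)
        P = solve_discrete_are(A[np.ix_(idx, idx)].T, C[:, idx].T,
                               Q[np.ix_(idx, idx)], R)   # filter DARE (duality)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), subset)
    print("best tuner subset:", best[1])
    ```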

  14. Ultra-Compact Broadband High-Spurious Suppression Bandpass Filter Using Double Split-end Stepped Impedance Resonators

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Ed; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an ultra-compact single-layer spurious-suppression band-pass filter design which has the following benefits: 1) the effective coupling area can be increased with no fabrication limitation and no effect on the spurious response; 2) two fundamental poles are introduced to suppress spurs; 3) the filter can be designed with up to 30% bandwidth; 4) the filter length is reduced by at least a factor of two compared to the conventional filter; 5) spurious modes are suppressed up to seven times the fundamental frequency; and 6) it uses only one layer of metallization, which minimizes the fabrication cost.

  15. Graphics-processor-unit-based parallelization of optimized baseline wander filtering algorithms for long-term electrocardiography.

    PubMed

    Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf

    2015-06-01

    Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processor unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite and infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed on autoregressive-modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving-median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 megasamples that is filtered as an entire unit, wavelet filtering of a seven-day high-resolution ECG is computed within less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, with which the computational burden can be strongly reduced. PMID:25675449
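
    One of the benchmarked filters, baseline removal by a moving median, is compact enough to sketch directly; the sampling rate, window length, and synthetic ECG below are illustrative, and the GPU parallelization itself is not shown.

    ```python
    # Baseline wander removal by subtracting a moving-median estimate; the
    # sampling rate, window, and synthetic ECG are illustrative only.
    import numpy as np
    from scipy.signal import medfilt

    fs = 500                                    # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    beats = np.sin(2 * np.pi * 8 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.98)
    drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)   # respiratory-like baseline
    noisy = beats + drift

    kernel = int(0.6 * fs) | 1                  # ~0.6 s window, forced odd
    filtered = noisy - medfilt(noisy, kernel_size=kernel)
    ```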

  16. Optimized design of high-order series coupler Yb3+/Er3+ codoped phosphate glass microring resonator filters

    NASA Astrophysics Data System (ADS)

    Galatus, Ramona; Valles, Juan

    2016-04-01

    An optimized geometry based on high-order active microring resonators (MRRs) is proposed. The solution provides both filtering and amplification of the signal at around 1534 nm (pump at 976 nm). The structure under analysis is a cross-grid resonator with laterally, series-coupled triple microrings of 15.35 μm radius, in a co-propagation topology between signal and pump (commonly termed an add-drop filter).

  17. Optimization of spectral filtering parameters of acousto-optic pure rotational Raman lidar for atmospheric temperature profiling

    NASA Astrophysics Data System (ADS)

    Zhu, Jianhua; Wan, Lei; Nie, Guosheng; Guo, Xiaowei

    2003-12-01

    In this paper, to the best of our knowledge, a novel acousto-optic pure rotational Raman lidar based on an acousto-optic tunable filter (AOTF) is put forward for the first time for atmospheric temperature measurements. The AOTF is employed in the novel lidar system as a narrow band-pass filter and a high-speed single-channel wavelength scanner. This new acousto-optic filtering technique can solve the problems of conventional pure rotational Raman lidar, e.g., low temperature-detection sensitivity, untunability of filtering parameters, and signal interference between different detection channels. This paper focuses on the calculation of the pure rotational Raman spectrum (PRRS) physical model and the simulation-based optimization of system parameters such as the central wavelengths and bandwidths of the filtering operation and the required sensitivity. The theoretical calculations and optimization of the AOTF spectral filtering parameters are conducted to achieve high temperature dependence and sensitivity of the filtered spectral passbands, high signal intensities, and adequate blocking of the elastic Mie and Rayleigh scattering signals. The simulation results can provide a suitable proposal and theoretical evaluation before the integration of a practical Raman lidar system.

  18. Reliably Detecting Clinically Important Variants Requires Both Combined Variant Calls and Optimized Filtering Strategies.

    PubMed

    Field, Matthew A; Cho, Vicky; Andrews, T Daniel; Goodnow, Chris C

    2015-01-01

    A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. However, the quality of the output variant calls is highly variable, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high-quality 'genome in a bottle' reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining the variant calls of multiple tools yields the best-quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or the union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, variant caller filtering optimization, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436

  19. A general sequential Monte Carlo method based optimal wavelet filter: A Bayesian approach for extracting bearing fault features

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Sun, Shilong; Tse, Peter W.

    2015-02-01

    A general sequential Monte Carlo method, in particular a general particle filter, has recently attracted much attention in prognostics because it can estimate, online, the posterior probability density functions of the states used in a state-space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of the wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of the wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform an optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extract bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics and can be applied more widely to any other situation in which optimal wavelet filtering is required.
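
    A loose, simplified reading of the idea can be sketched as follows: represent the density over filter parameters (center frequency, bandwidth) by weighted particles, weight each particle by a fault-sensitivity metric of the correspondingly filtered signal, and resample. The sketch uses a Butterworth band-pass and envelope kurtosis as the metric on synthetic impact data; the paper's wavelet parameterization and likelihood are not reproduced.

    ```python
    # Particle-based search over band-pass parameters (fc, bw): weight each
    # particle by the envelope kurtosis of the filtered signal and resample.
    # Data and metric are stand-ins for the paper's wavelet formulation.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from scipy.stats import kurtosis

    fs = 12000
    rng = np.random.default_rng(6)
    t = np.arange(0, 1.0, 1 / fs)
    x = (np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / 97.0) < 0.001)
         + 0.5 * rng.normal(size=t.size))       # 97 Hz impacts at 3 kHz + noise

    def weight(fc, bw):
        lo, hi = max(fc - bw / 2, 50.0), min(fc + bw / 2, fs / 2 - 50.0)
        if hi - lo < 100.0:
            return 0.0
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return max(kurtosis(np.abs(hilbert(filtfilt(b, a, x)))), 0.0)

    n = 200
    fc = rng.uniform(500, 5500, n)
    bw = rng.uniform(200, 2000, n)
    wts = np.array([weight(f, b) for f, b in zip(fc, bw)])
    idx = rng.choice(n, size=n, p=wts / wts.sum())       # resampling step
    print(f"estimated band: {fc[idx].mean():.0f} Hz, width {bw[idx].mean():.0f} Hz")
    ```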

  1. Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging

    NASA Astrophysics Data System (ADS)

    Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.

    2012-04-01

    Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employed a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation work and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained by using the algorithm and were promising for our ongoing applications to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.

  2. Theoretical optimal modulation frequencies for scattering parameter estimation and ballistic photon filtering in diffusing media.

    PubMed

    Panigrahi, Swapnesh; Fade, Julien; Ramachandran, Hema; Alouini, Mehdi

    2016-07-11

    The efficiency of using intensity modulated light for the estimation of scattering properties of a turbid medium and for ballistic photon discrimination is theoretically quantified in this article. Using the diffusion model for modulated photon transport and considering a noisy quadrature demodulation scheme, the minimum-variance bounds on estimation of parameters of interest are analytically derived and analyzed. The existence of a variance-minimizing optimal modulation frequency is shown and its evolution with the properties of the intervening medium is derived and studied. Furthermore, a metric is defined to quantify the efficiency of ballistic photon filtering which may be sought when imaging through turbid media. The analytical derivation of this metric shows that the minimum modulation frequency required to attain significant ballistic discrimination depends only on the reduced scattering coefficient of the medium in a linear fashion for a highly scattering medium. PMID:27410875

  3. Convex optimization-based windowed Fourier filtering with multiple windows for wrapped-phase denoising.

    PubMed

    Yatabe, Kohei; Oikawa, Yasuhiro

    2016-06-10

    The windowed Fourier filtering (WFF), defined as a thresholding operation in the windowed Fourier transform (WFT) domain, is a successful method for denoising a phase map and analyzing a fringe pattern. However, it has some shortcomings, such as extremely high redundancy, which results in high computational cost, and difficulty in selecting an appropriate window size. In this paper, an extension of WFF for denoising a wrapped-phase map is proposed. It is formulated as a convex optimization problem using Gabor frames instead of WFT. Two Gabor frames with differently sized windows are used simultaneously so that the above-mentioned issues are resolved. In addition, a differential operator is combined with a Gabor frame in order to preserve discontinuity of the underlying phase map better. Some numerical experiments demonstrate that the proposed method is able to reconstruct a wrapped-phase map, even for a severely contaminated situation. PMID:27409020

  4. An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning

    2015-08-01

    An optimized strain demodulation method based on a piezoelectric transducer (PZT)-driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. By using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the test values after compensation and the theoretical values is less than 1 pm/με.

  5. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural-hazard early warning systems, global warming, and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.

  6. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  7. Optimal synthesis of double-phase computer generated holograms using a phase-only spatial light modulator with grating filter.

    PubMed

    Song, Hoon; Sung, Geeyoung; Choi, Sujin; Won, Kanghee; Lee, Hong-Seok; Kim, Hwi

    2012-12-31

    We propose an optical system for synthesizing double-phase complex computer-generated holograms using a phase-only spatial light modulator and a phase grating filter. Two separated areas of the phase-only spatial light modulator are optically superposed by 4-f configuration with an optimally designed grating filter to synthesize arbitrary complex optical field distributions. The tolerances related to misalignment factors are analyzed, and the optimal synthesis method of double-phase computer-generated holograms is described. PMID:23388811
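
    The core identity behind double-phase holography is that any complex value a·exp(iφ) with normalized amplitude a ∈ [0, 1] equals the average of two unit-magnitude phasors exp(i(φ ± arccos a)). The sketch below decomposes a random target field accordingly; the 4-f optics, grating filter, and misalignment analysis from the paper are not modeled.

    ```python
    # Double-phase decomposition: a*exp(i*phi) with a in [0, 1] equals the
    # mean of exp(i*(phi + acos(a))) and exp(i*(phi - acos(a))).
    import numpy as np

    rng = np.random.default_rng(7)
    field = (rng.uniform(0, 1, (64, 64)) *
             np.exp(1j * rng.uniform(-np.pi, np.pi, (64, 64))))

    a = np.abs(field) / np.abs(field).max()     # normalized amplitude
    phi = np.angle(field)
    theta1, theta2 = phi + np.arccos(a), phi - np.arccos(a)  # phase-only maps

    recombined = 0.5 * (np.exp(1j * theta1) + np.exp(1j * theta2))
    assert np.allclose(recombined, field / np.abs(field).max())
    ```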

  8. Filter-feeding and cruising swimming speeds of basking sharks compared with optimal models: they filter-feed slower than predicted for their size.

    PubMed

    Sims

    2000-06-01

    Movements of six basking sharks (4.0-6.5 m total body length, LT) swimming at the surface were tracked and horizontal velocities determined. Sharks were tracked for between 1.8 and 55 min with between 4 and 21 mean speed determinations per shark track. The mean filter-feeding swimming speed was 0.85 m s⁻¹ (±0.05 S.E., n=49 determinations) compared to the non-feeding (cruising) mean speed of 1.08 m s⁻¹ (±0.03 S.E., n=21 determinations). Both absolute (m s⁻¹) and specific (L s⁻¹) swimming speeds during filter-feeding were significantly lower than when cruise swimming with the mouth closed, indicating basking sharks select speeds approximately 24% lower when engaged in filter-feeding. This reduction in speed during filter-feeding could be a behavioural response to avoid increased drag-induced energy costs associated with feeding at higher speeds. Non-feeding basking sharks (4 m LT) cruised at speeds close to, but slightly faster (approximately 18%) than, the optimum speed predicted by the Weihs (1977) [Weihs, D., 1977. Effects of size on the sustained swimming speeds of aquatic organisms. In: Pedley, T.J. (Ed.), Scale Effects in Animal Locomotion. Academic Press, London, pp. 333-338.] optimal cruising speed model. In contrast, filter-feeding basking sharks swam between 29 and 39% slower than the speed predicted by the Weihs and Webb (1983) [Weihs, D., Webb, P.W., 1983. Optimization of locomotion. In: Webb, P.W., Weihs, D. (Eds.), Fish Biomechanics. Praeger, New York, pp. 339-371.] optimal filter-feeding model. This significant under-estimation of observed feeding speed compared to model predictions was most likely accounted for by surface drag effects reducing the optimum speeds of tracked sharks, together with inaccurate parameter estimates used in the general model to predict optimal speeds of basking sharks from body-size extrapolations. PMID:10817828

  9. Optimization of a one-step heat-inducible in vivo mini DNA vector production system.

    PubMed

    Nafissi, Nafiseh; Sum, Chi Hong; Wettig, Shawn; Slavcev, Roderick A

    2014-01-01

    While safer than their viral counterparts, conventional circular covalently closed (CCC) plasmid DNA vectors offer a limited safety profile. They often result in the transfer of unwanted prokaryotic sequences, antibiotic resistance genes, and bacterial origins of replication that may lead to unwanted immunostimulatory responses. Furthermore, such vectors may impart the potential for chromosomal integration, thus potentiating oncogenesis. Linear covalently closed (LCC), bacterial-sequence-free DNA vectors have shown promising clinical improvements in vitro and in vivo. However, the generation of such minivectors has been limited by in vitro enzymatic reactions, hindering their downstream application in clinical trials. We previously characterized an in vivo temperature-inducible expression system, governed by the phage λ pL promoter and regulated by the thermolabile λ CI[Ts]857 repressor, to produce recombinant protelomerase enzymes in E. coli. In this expression system, induction of recombinant protelomerase was achieved by increasing the culture temperature above the 37°C threshold temperature. Overexpression of protelomerase led to enzymatic reactions acting on genetically engineered multi-target sites called "Super Sequences" that serve to convert conventional CCC plasmid DNA into LCC DNA minivectors. Temperature up-shift, however, can result in intracellular stress responses and may alter plasmid replication rates, both of which may be detrimental to LCC minivector production. We sought to optimize our one-step in vivo DNA minivector production system under various induction schedules in combination with genetic modifications influencing plasmid replication, processing rates, and cellular heat stress responses. We assessed different culture growth techniques, growth media compositions, heat induction scheduling and temperature, induction duration, post-induction temperature, and E. coli genetic background to improve the productivity and scalability of our system.

  10. The first on-site evaluation of a new filter optimized for TARC and developer

    NASA Astrophysics Data System (ADS)

    Umeda, Toru; Ishibashi, Takeo; Nakamura, Atsushi; Ide, Junichi; Nagano, Masaru; Omura, Koichi; Tsuzuki, Shuichi; Numaguchi, Toru

    2008-11-01

    In previous studies, we identified filter properties that have a strong effect on microbubble formation on the downstream side of the filter membrane. A new Highly Asymmetric Polyarylsulfone (HAPAS) filter was developed based on those findings. In the current study, we evaluated the newly developed HAPAS filter in an environmentally preferred non-PFOS TARC in a laboratory setting. Test results confirmed that microbubble counts downstream of the filter were lower than those of a conventional HDPE filter. Further testing in a manufacturing environment confirmed that HAPAS filtration of TARC at the point of use was able to reduce defectivity caused by microbubbles on both unpatterned and patterned wafers, compared with an HDPE filter.

  11. Fast Automatic Step Size Estimation for Gradient Descent Optimization of Image Registration.

    PubMed

    Qiao, Yuchuan; van Lew, Baldur; Lelieveldt, Boudewijn P F; Staring, Marius

    2016-02-01

    Fast automatic image registration is an important prerequisite for image-guided clinical procedures. However, due to the large number of voxels in an image and the complexity of registration algorithms, this process is often very slow. Stochastic gradient descent is a powerful method to iteratively solve the registration problem, but relies for convergence on a proper selection of the optimization step size. This selection is difficult to perform manually, since it depends on the input data, similarity measure and transformation model. The Adaptive Stochastic Gradient Descent (ASGD) method is an automatic approach, but it comes at a high computational cost. In this paper, we propose a new computationally efficient method (fast ASGD) to automatically determine the step size for gradient descent methods, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is derived. While ASGD has quadratic complexity with respect to the transformation parameters, fast ASGD only has linear complexity. Extensive validation has been performed on different datasets with different modalities, inter/intra subjects, different similarity measures and transformation models. For all experiments, we obtained similar accuracy as ASGD. Moreover, the estimation time of fast ASGD is reduced to a very small value, from 40 s to less than 1 s when the number of parameters is 10⁵, almost 40 times faster. Depending on the registration settings, the total registration time is reduced by a factor of 2.5-7× for the experiments in this paper. PMID:26353367

  12. Selecting the optimal anti-aliasing filter for multichannel biosignal acquisition intended for inter-signal phase shift analysis.

    PubMed

    Keresnyei, Róbert; Megyeri, Péter; Zidarics, Zoltán; Hejjel, László

    2015-01-01

    The availability of microcomputer-based portable devices facilitates high-volume multichannel biosignal acquisition and the analysis of their instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers or artificially synthesised series was passed through 100-80-60-40-20 Hz 2nd-4th-6th-8th order Bessel and Butterworth filters digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical model of the impulse and step responses of the filters. The investigated filters show signal delays as diverse as 2-46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. Its magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. In conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low-pass filtering are recommended for biosignal acquisition intended for inter-signal phase shift analysis. PMID:25514627
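
    The delays at issue can be reproduced quickly with SciPy's filter-design routines: the sketch below tabulates the low-frequency group delay of Bessel and Butterworth low-pass filters over the orders and corner frequencies studied, for an assumed 1 kHz sampling rate.

    ```python
    # Tabulate low-frequency group delay (ms) of Bessel vs Butterworth
    # low-pass filters over the studied orders and corner frequencies.
    from scipy.signal import bessel, butter, group_delay

    fs = 1000.0                                 # Hz, assumed sampling rate
    for fc in (20, 40, 60, 80, 100):
        for order in (2, 4, 6, 8):
            for name, design in (("Bessel", bessel), ("Butterworth", butter)):
                b, a = design(order, fc, btype="low", fs=fs)
                _, gd = group_delay((b, a), w=[1.0], fs=fs)  # delay in samples
                print(f"{name:12s} order {order}, fc {fc:3d} Hz: "
                      f"{1000 * gd[0] / fs:5.1f} ms")
    ```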

  13. Optimal spatial filtering for design of a conformal velocity sonar array

    NASA Astrophysics Data System (ADS)

    Traweek, Charles M.

    In stark contrast to the ubiquitous optimization problem posed in the array processing literature, tactical hull sonar arrays have traditionally been designed using extrapolations of low spatial resolution empirical self noise data, dominated by hull noise at moderate speeds, in conjunction with assumptions regarding achievable conventional beamformer sidelobe levels by so-called Taylor shading for a time domain, delay-and-sum beamformer. That ad hoc process defaults to an extremely conservative (expensive and heavy) design for an array baffle as a means to assure environmental noise limited sonar performance. As an alternative, this dissertation formulates, implements, and demonstrates an objective function that results from the expression of the log likelihood ratio of the optimal Bayesian detector as a comparison to a threshold. Its purpose is to maximize the deflection coefficient of a square-law energy detector over an arbitrarily specified frequency band by appropriate selection of array shading weights for the generalized conformal velocity sonar array under the assumption that it will employ the traditional time domain delay-and-sum beamformer. The restrictive assumptions that must be met in order to appropriately use the deflection coefficient as a performance metric are carefully delineated. A series of conformal velocity sonar array spatial filter optimization problems was defined using a data set characterized by spatially complex structural noise from a large aperture conformal velocity sonar array experiment. The detection performance of an 80-element cylindrical array was optimized over a reasonably broad range of frequencies (from k0a = 12.95 to k0a = 15.56) for the cases of broadside and off-broadside signal incidence. In each case, performance of the array using optimal real-valued time domain delay-and-sum beamformer weights was much better than that achieved for either uniform shading or for Taylor shading. The result is an analytical engine

  14. Filtering for networked control systems with single/multiple measurement packets subject to multiple-step measurement delays and multiple packet dropouts

    NASA Astrophysics Data System (ADS)

    Moayedi, Maryam; Foo, Yung Kuan; Chai Soh, Yeng

    2011-03-01

    The minimum-variance filtering problem in networked control systems, where both random measurement transmission delays and packet dropouts may occur, is investigated in this article. Instead of following the many existing results that solve the problem using probabilistic approaches based on the probabilities of the uncertainties occurring between the sensor and the filter, we propose a non-probabilistic approach by time-stamping the measurement packets. Both the single-measurement-packet and multiple-measurement-packet cases are studied. We also consider the case of burst arrivals, where more than one packet may arrive between the receiver's previous and current sampling times; the scenario where the control input is non-zero and subject to delays and packet dropouts is examined as well. It is shown that, in such a situation, the optimal state estimate would generally depend on the possible control input. Simulations are presented to demonstrate the performance of the various proposed filters.

  15. Compressive Bilateral Filtering.

    PubMed

    Sugimoto, Kenjiro; Kamata, Sei-Ichiro

    2015-11-01

    This paper presents an efficient constant-time bilateral filter that produces a near-optimal performance tradeoff between approximate accuracy and computational complexity without any complicated parameter adjustment, called a compressive bilateral filter (CBLF). Constant-time means that the computational complexity is independent of the filter window size. Although many existing constant-time bilateral filters have been proposed step-by-step in pursuit of a more efficient performance tradeoff, they have focused less on the optimal tradeoff for their own frameworks. It is important to discuss this question, because it can reveal whether or not a constant-time algorithm still has plenty of room for improvement in its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet reached the optimal tradeoff. The CBLF achieves a near-optimal performance tradeoff by two key ideas: 1) an approximate Gaussian range kernel through Fourier analysis and 2) a period-length optimization. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximate accuracy, computational complexity, and usability. PMID:26068315
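
    For reference, a direct (non-constant-time) bilateral filter is sketched below; its cost grows with the window radius, which is precisely what CBLF's Fourier-based compression of the Gaussian range kernel avoids. Parameters and the test image are illustrative.

    ```python
    # Direct bilateral filter; cost scales with window radius, unlike CBLF.
    import numpy as np

    def bilateral(img, sigma_s=3.0, sigma_r=0.1, radius=6):
        out = np.zeros_like(img)
        norm = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(img, (dy, dx), axis=(0, 1))
                w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)) *
                     np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2)))
                out += w * shifted
                norm += w
        return out / norm

    rng = np.random.default_rng(8)
    img = np.clip(np.tile(np.linspace(0, 1, 128), (128, 1))
                  + 0.05 * rng.normal(size=(128, 128)), 0, 1)
    smoothed = bilateral(img)
    ```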

  16. A Compact Symmetric Microstrip Filter Based on a Rectangular Meandered-Line Stepped Impedance Resonator with a Triple-Band Bandstop Response

    PubMed Central

    Kim, Nam-Young

    2013-01-01

    This paper presents a symmetric-type microstrip triple-band bandstop filter incorporating a tri-section meandered-line stepped impedance resonator (SIR). The length of each section of the meandered line is 0.16, 0.15, and 0.83 times the guided wavelength (λg), so that the filter features three stop bands at 2.59 GHz, 6.88 GHz, and 10.67 GHz, respectively. Two symmetric SIRs are employed with a microstrip transmission line to obtain wide bandwidths of 1.12, 1.34, and 0.89 GHz at the corresponding stop bands. Furthermore, an equivalent circuit model of the proposed filter is developed, and the model matches the electromagnetic simulations well. The return losses of the fabricated filter are measured to be −29.90 dB, −28.29 dB, and −26.66 dB while the insertion losses are 0.40 dB, 0.90 dB, and 1.10 dB at the respective stop bands. A drastic reduction in the size of the filter was achieved by using a simplified architecture based on a meandered-line SIR. PMID:24319367

  17. SVD-based optimal filtering for noise reduction in dual microphone hearing aids: a real time implementation and perceptual evaluation.

    PubMed

    Maj, Jean-Baptiste; Royackers, Liesbeth; Moonen, Marc; Wouters, Jan

    2005-09-01

    In this paper, the first real-time implementation and perceptual evaluation of a singular value decomposition (SVD)-based optimal filtering technique for noise reduction in a dual-microphone behind-the-ear (BTE) hearing aid are presented. This evaluation was carried out for a speech-weighted noise and multitalker babble, for single and multiple jammer sound source scenarios. Two basic microphone configurations in the hearing aid were used. The SVD-based optimal filtering technique was compared against an adaptive beamformer, which is known to give significant improvements in speech intelligibility in noisy environments. The optimal filtering technique works without assumptions about the speaker position, unlike the two-stage adaptive beamformer. However, this strategy needs a robust voice activity detector (VAD). A method to improve the performance of the VAD was presented and evaluated physically. By connecting the VAD to the output of the noise reduction algorithms, a good discrimination between the speech-and-noise periods and the noise-only periods of the signals was obtained. The perceptual experiments demonstrated that the SVD-based optimal filtering technique could perform as well as the adaptive beamformer in a single noise source scenario, i.e., the ideal scenario for the latter technique, and could outperform the adaptive beamformer in multiple noise source scenarios. PMID:16189969
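
    The flavor of SVD-based filtering can be conveyed with a single-channel toy: stack windowed snapshots of the noisy signal into a matrix, keep the dominant singular subspace, and reconstruct by diagonal averaging. The hearing-aid algorithm instead applies a (G)SVD-based optimal filter to two microphone signals gated by the VAD, none of which appears in this sketch.

    ```python
    # Single-channel toy of SVD-based filtering: rank-truncate a matrix of
    # windowed snapshots, then reconstruct by averaging anti-diagonals.
    import numpy as np

    rng = np.random.default_rng(9)
    n, win, k = 4000, 64, 2                     # signal length, window, rank
    clean = np.sin(2 * np.pi * 0.01 * np.arange(n))
    noisy = clean + 0.5 * rng.normal(size=n)

    X = np.lib.stride_tricks.sliding_window_view(noisy, win)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xd = (U[:, :k] * s[:k]) @ Vt[:k]            # keep dominant subspace

    denoised = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Xd.shape[0]):                # diagonal (Hankel) averaging
        denoised[i:i + win] += Xd[i]
        counts[i:i + win] += 1
    denoised /= counts
    ```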

  18. Optimization of leaf margins for lung stereotactic body radiotherapy using a flattening filter-free beam

    SciTech Connect

    Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi

    2015-05-15

    Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). A dose of 40 Gy in four fractions was prescribed as the PTV D95. For the PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by the target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased ("V" shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared

  19. Geometric optimization of a step bearing for a hydrodynamically levitated centrifugal blood pump for the reduction of hemolysis.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2013-09-01

    A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis. PMID:23834855
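
    As an illustration of the lubrication-theory analysis this record describes, the following Python sketch evaluates the classic one-dimensional Rayleigh step bearing, whose step pressure follows in closed form from flow continuity, and sweeps the step-length fraction to locate the load-capacity optimum (near 70%, in line with the abstract's finding). The geometry is the textbook 1D analogue, not the pump's actual radial bearing, and all dimensions and fluid properties are illustrative assumptions.

      import numpy as np

      def rayleigh_step_load(frac, B=1.0, h2=10e-6, dh=10e-6, mu=3.5e-3, U=1.0):
          """Load capacity per unit width of a 1D Rayleigh step bearing.

          frac is the step length as a fraction of the total bearing length;
          flow continuity across the step gives the peak pressure p_s, and
          the triangular pressure profile integrates to the load. Values
          are illustrative (roughly blood-like viscosity)."""
          B1, B2 = frac * B, (1.0 - frac) * B   # stepped land, flat land
          h1 = h2 + dh                          # inlet and outlet film heights
          p_s = 6.0 * mu * U * (h1 - h2) / (h1**3 / B1 + h2**3 / B2)
          return 0.5 * p_s * (B1 + B2)

      fracs = np.linspace(0.05, 0.95, 181)
      loads = [rayleigh_step_load(f) for f in fracs]
      print(fracs[int(np.argmax(loads))])       # ~0.74 for h1 = 2*h2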

  20. Removing spurious signals from glow curves using an optimal Wiener filter.

    PubMed

    van Dijk, J W E; Stadtmann, H; Grimbergen, T W M

    2011-03-01

    During readout, the signal of the TLD is occasionally polluted with spurious signals. These most often take the shape of a spike on the glow curve. Often these spikes are only a few milliseconds wide but can have a height that significantly influences the outcome of the dose evaluation. The detection of spikes generally relies on comparing the raw glow curve with a smoothed version of it. A spike is detected when the height of the glow curve exceeds that of the smoothed curve, using criteria based on the absolute and relative differences. The proposed procedure is based on smoothing by an optimal Wiener filter, which is, in turn, based on Fourier analysis, for which numerically very efficient methods are available. Apart from having easy-to-understand tuning parameters, an attractive bonus is that, with only little additional computational effort, estimates of the position of peak maxima are found from second and third derivatives: a useful feature for glow curve quality control. PMID:21450703
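
    The smoothing-plus-thresholding procedure described above can be sketched in a few lines. The Python snippet below builds a Wiener gain from the high-frequency noise floor of the glow curve's spectrum and flags spikes where the raw curve exceeds the smoothed one by absolute and relative margins; the function and threshold names are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def despike_glow_curve(y, noise_frac=0.25, abs_tol=50.0, rel_tol=0.2):
          """Wiener-smooth a glow curve and flag spike samples (sketch).

          The noise power is estimated from the top noise_frac fraction of
          frequencies, where the thermoluminescence signal is assumed
          negligible; the Wiener gain is then S/(S+N) at each frequency."""
          n = len(y)
          Y = np.fft.rfft(y)
          power = np.abs(Y) ** 2
          noise = power[int((1.0 - noise_frac) * len(power)):].mean()
          gain = np.maximum(power - noise, 0.0) / (power + 1e-12)
          smooth = np.fft.irfft(gain * Y, n)
          spikes = (y - smooth > abs_tol) & (y > (1.0 + rel_tol) * smooth)
          return smooth, spikes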

  1. Numerical simulation of an industrial microwave assisted filter dryer: criticality assessment and optimization.

    PubMed

    Leonelli, Cristina; Veronesi, Paolo; Grisoni, Fabio

    2007-01-01

    Industrial-scale filter dryers, equipped with one or more microwave input ports, have been modelled with the aim of detecting existing criticalities, proposing possible solutions, and optimizing the overall system efficiency and treatment homogeneity. Three different loading conditions have been simulated: the empty applicator, and the applicator partially loaded by a high-loss and by a low-loss load whose dielectric properties correspond to those measured on real products. Modeling results allowed for the implementation of improvements to the original design, such as the insertion of a waveguide transition and a properly designed pressure window, modification of the microwave inlet's position and orientation, alteration of the nozzles' geometry and distribution, and changing of the cleaning metallic torus dimensions and position. Experimental testing on representative loads, as well as in production sites, confirmed the validity of the implemented improvements, thus showing how numerical simulation can assist the designer in removing critical features and improving equipment performance when moving from conventional heating to hybrid microwave-assisted processing. PMID:18350999

  2. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
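
    As a rough illustration of the dimensionality reduction described in this record (not the actual NASA procedure), the sketch below takes an assumed influence matrix G mapping health parameters to the outputs of interest, truncates its SVD, and uses the leading right singular vectors as the tuning subspace, so that a k-dimensional tuner reproduces the effect of the full health-parameter vector in a least-squares sense. All names and sizes are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      G = rng.standard_normal((12, 8))   # 12 outputs, 8 health parameters (made up)
      k = 4                              # tuner dimension a Kalman filter can estimate
      U, s, Vt = np.linalg.svd(G, full_matrices=False)
      V_k = Vt[:k].T                     # basis for the low-dimensional tuning subspace

      h = rng.standard_normal(8)         # true (unmeasurable) degradation state
      q = V_k.T @ h                      # low-dimensional tuning vector representing h
      # Residual output mismatch is governed by the discarded singular values:
      print(np.linalg.norm(G @ h - G @ (V_k @ q)))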

  4. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  5. Selection of plants for optimization of vegetative filter strips treating runoff from turfgrass.

    PubMed

    Smith, Katy E; Putnam, Raymond A; Phaneuf, Clifford; Lanza, Guy R; Dhankher, Om P; Clark, John M

    2008-01-01

    Runoff from turf environments, such as golf courses, is of increasing concern due to the associated chemical contamination of lakes, reservoirs, rivers, and ground water. Pesticide runoff due to fungicides, herbicides, and insecticides used to maintain golf courses in acceptable playing condition is a particular concern. One possible approach to mitigate such contamination is through the implementation of effective vegetative filter strips (VFS) on golf courses and other recreational turf environments. The objective of the current study was to screen ten aesthetically acceptable plant species for their ability to remove four commonly-used and degradable pesticides: chlorpyrifos (CP), chlorothalonil (CT), pendimethalin (PE), and propiconazole (PR) from soil in a greenhouse setting, thus providing invaluable information as to the species composition that would be most efficacious for use in VFS surrounding turf environments. Our results revealed that blue flag iris (Iris versicolor) (76% CP, 94% CT, 48% PE, and 33% PR were lost from soil after 3 mo of plant growth), eastern gama grass (Tripsacum dactyloides) (47% CP, 95% CT, 17% PE, and 22% PR were lost from soil after 3 mo of plant growth), and big blue stem (Andropogon gerardii) (52% CP, 91% CT, 19% PE, and 30% PR were lost from soil after 3 mo of plant growth) were excellent candidates for the optimization of VFS as buffer zones abutting turf environments. Blue flag iris was most effective at removing selected pesticides from soil and had the highest aesthetic value of the plants tested. PMID:18689747

  6. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used to implement digital pulse compression (DPC) and achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, using either a single-stage mismatched filter or a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the more logic resources used in the FPGAs; this often becomes a design challenge for system-on-chip (SoC) requirements. The number of multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the logic used in the FPGA by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs, yielding different clusterings of the weights; it may even happen that a smaller number of multipliers and a shorter filter provide a better PSR.
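
    The tap-weight clustering idea reads directly as code. The sketch below quantizes a filter's tap weights to a small set of shared values with a plain one-dimensional k-means, so an FPGA implementation needs only one multiplier per cluster; names and defaults are illustrative assumptions, not the authors' design.

      import numpy as np

      def cluster_taps(taps, n_clusters=8, iters=100, seed=0):
          """Quantize FIR tap weights to n_clusters shared values (1D k-means).

          Requires len(taps) >= n_clusters. Returns the quantized taps;
          distinct multipliers drop from len(taps) to n_clusters at the
          cost of some PSR degradation, to be re-evaluated afterwards."""
          taps = np.asarray(taps, dtype=float)
          rng = np.random.default_rng(seed)
          centroids = rng.choice(taps, n_clusters, replace=False)
          for _ in range(iters):
              labels = np.argmin(np.abs(taps[:, None] - centroids[None, :]), axis=1)
              for c in range(n_clusters):
                  if np.any(labels == c):
                      centroids[c] = taps[labels == c].mean()
          return centroids[labels]

    Because k-means is seed-dependent, rerunning with several seeds and keeping the clustering that yields the best PSR mirrors the run-to-run variability noted in the abstract.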

  7. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  8. Filtering of Defects in Semipolar (11−22) GaN Using 2-Steps Lateral Epitaxial Overgrowth

    PubMed Central

    2010-01-01

    A good-quality (11−22) semipolar GaN sample was obtained using epitaxial lateral overgrowth. The growth conditions were chosen to enhance the growth rate along the inclined [0001] direction. Thus, the coalescence boundaries stop the propagation of basal stacking faults. The filtering of the faults and the improvement of the crystalline quality were attested by transmission electron microscopy and low-temperature photoluminescence. The temperature dependence of the luminescence polarization under normal incidence was also studied. PMID:21170140

  9. Optimization of synthesis and peptization steps to obtain iron oxide nanoparticles with high energy dissipation rates

    NASA Astrophysics Data System (ADS)

    Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos

    2015-11-01

    Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, preferable due to their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP=2.3 nH m2 kg-1) were obtained, which represent one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP=1.6 nH m2 kg-1) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.

  10. Toward an Optimal Position for IVC Filters: Computational Modeling of the Impact of Renal Vein Inflow

    SciTech Connect

    Wang, S L; Singer, M A

    2009-07-13

    The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

  11. Optimizing the anode-filter combination in the sense of image quality and average glandular dose in digital mammography

    NASA Astrophysics Data System (ADS)

    Varjonen, Mari; Strömmer, Pekka

    2008-03-01

    This paper presents the optimized image quality and average glandular dose in digital mammography, and provides recommendations concerning anode-filter combinations in digital mammography based on amorphous selenium (a-Se) detector technology. The full-field digital mammography (FFDM) system based on a-Se technology, which is also the platform of a tomosynthesis prototype, was used in this study. The x-ray tube anode-filter combinations studied were tungsten (W)-rhodium (Rh) and tungsten (W)-silver (Ag). Anatomically adaptable fully automatic exposure control (AAEC) was used. The average glandular doses (AGD) were calculated using a specific program developed by Planmed, which automates the method described by Dance et al. Image quality was evaluated in two different ways: a subjective image quality evaluation, and contrast and noise analysis. Using the W-Rh and W-Ag anode-filter combinations, a significantly lower average glandular dose can be achieved than with molybdenum (Mo)-molybdenum (Mo) or Mo-Rh: dose reductions of 25% to 60% were obtained. In the future, the evaluation will concentrate on studying more filter combinations and the effect of higher kV (>35 kV) values, which seems to be useful for optimizing the dose in digital mammography.

  12. Nature-inspired optimization of quasicrystalline arrays and all-dielectric optical filters and metamaterials

    NASA Astrophysics Data System (ADS)

    Namin, Frank Farhad A.

    (photonic resonance) and the plasmonic response of the spheres (plasmonic resonance). In particular the couplings between the photonic and plasmonic modes are studied. In periodic arrays this coupling leads to the formation of a so-called photonic-plasmonic hybrid mode. The formation of hybrid modes is studied in quasicrystalline arrays. Quasicrystalline structures in essence possess several periodicities, which in some cases can lead to the formation of multiple hybrid modes with wider bandwidths. It is also demonstrated that the performance of these arrays can be further enhanced by employing a perturbation method. The second property considered is local field enhancement in quasicrystalline arrays of gold nanospheres. It will be shown that, despite a considerably smaller filling factor, quasicrystalline arrays generate larger local field enhancements, which can be further increased by optimally placing perturbing spheres within the prototiles that comprise the aperiodic arrays. The second thrust of research in this dissertation focuses on designing all-dielectric filters and metamaterial coatings for the optical range. At higher frequencies metals tend to be lossy and are thus unsuitable for many applications; hence, dielectrics are used at optical frequencies. In particular we focus on designing two types of structures. First, a near-perfect optical mirror is designed. The design is based on optimizing a subwavelength periodic dielectric grating to obtain appropriate effective parameters that will satisfy the desired perfect mirror condition. Second, a broadband anti-reflective all-dielectric grating with a wide field of view is designed. The second design is based on a new computationally efficient genetic algorithm (GA) optimization method which shapes the sidewalls of the grating based on optimizing the roots of polynomial functions.

  13. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity--in this case, mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components - acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  14. Near-Diffraction-Limited Operation of Step-Index Large-Mode-Area Fiber Lasers Via Gain Filtering

    SciTech Connect

    Marciante, J.R.; Roides, R.G.; Shkunov, V.V.; Rockwell, D.A.

    2010-06-04

    We present, for the first time to our knowledge, an explicit experimental comparison of beam quality in conventional and confined-gain multimode fiber lasers. In the conventional fiber laser, beam quality degrades with increasing output power. In the confined-gain fiber laser, the beam quality is good and does not degrade with output power. Gain filtering of higher-order modes in 28 μm diameter core fiber lasers is demonstrated with a beam quality of M^2 = 1.3 at all pumping levels. Theoretical modeling is shown to agree well with experimentally observed trends.

  15. Focusing time harmonic scalar fields in non-homogenous lossy media: Inverse filter vs. constrained power focusing optimization

    NASA Astrophysics Data System (ADS)

    Iero, D. A. M.; Isernia, T.; Crocco, L.

    2013-08-01

    Two strategies to focus time-harmonic scalar fields in known inhomogeneous lossy media are compared. The first one is the Inverse Filter (IF) method, which faces the focusing task as the synthesis of a nominal field. The second one is the Constrained Power Focusing Optimization (CPFO) method, which tackles the problem as a mask-constrained power optimization. Numerical examples representative of focusing in noninvasive microwave hyperthermia are provided to show that CPFO is able to outperform IF, thanks to the additional degrees of freedom arising from the adopted power synthesis formulation.
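
    A minimal sketch of the inverse-filter side of this comparison: treat focusing as nominal-field synthesis by solving for source excitations with a regularized pseudoinverse of a source-to-field operator, so the synthesized field approximates a delta at the target point. The operator G, its discretization, and the regularization level below are illustrative assumptions.

      import numpy as np

      def inverse_filter_weights(G, target_idx, rcond=1e-3):
          """IF focusing sketch: G[i, j] is the complex field at point i due
          to unit excitation of source j; the returned excitations make
          G @ w approximate a delta at target_idx in the least-squares sense."""
          e = np.zeros(G.shape[0], dtype=complex)
          e[target_idx] = 1.0
          return np.linalg.pinv(G, rcond=rcond) @ e

      # Hypothetical usage with a random stand-in operator:
      rng = np.random.default_rng(0)
      G = rng.standard_normal((200, 16)) + 1j * rng.standard_normal((200, 16))
      w = inverse_filter_weights(G, target_idx=100)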

  16. Optimal Scaling of Filtered GRACE dS/dt Anomalies over Sacramento and San Joaquin River Basins, California

    NASA Astrophysics Data System (ADS)

    Ukasha, M.; Ramirez, J. A.

    2014-12-01

    Signals from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission mapping the time-varying earth's gravity field are degraded by measurement and leakage errors. Damping these errors with different filters modifies the true geophysical signals; therefore, the use of a scale factor is suggested to recover the modified signals. For basin-averaged dS/dt anomalies computed from data available at the University of Colorado GRACE data analysis website - http://geoid.colorado.edu/grace/, optimal time-invariant and time-variant scale factors for the Sacramento and San Joaquin river basins, California, are derived using observed precipitation (P), runoff (Q), and evapotranspiration (ET). Using the derived optimal scaling factor for GRACE data filtered with a 300 km wide Gaussian filter resulted in scaled GRACE dS/dt anomalies that match observed dS/dt anomalies (P-ET-Q) better than the GRACE dS/dt anomalies computed from the scaled GRACE product at the University of Colorado GRACE data analysis website. This paper will present the procedure, the optimal values, and the statistical analysis of the results.

  17. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions, and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, a low-uncertainty estimate of these parameters was sought for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter, and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition. PMID:26681183
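
    The variance-minimizing site selection can be sketched with a greedy heuristic built on scalar Kalman updates of a prior spatial covariance; this is one simple stand-in for the heuristic optimization the record mentions, not the authors' exact algorithm. P would come from the geostatistical model, and priority zones could be handled by weighting the corresponding entries of P.

      import numpy as np

      def greedy_network(P, noise_var, n_select):
          """Greedily pick n_select sites minimizing total estimate variance.

          P is the prior covariance of the field at candidate locations;
          each chosen site i updates P with a scalar Kalman step, and the
          next site is the one giving the largest trace reduction."""
          P = P.copy()
          chosen = []
          for _ in range(n_select):
              gain = np.array([P[:, i] @ P[:, i] / (P[i, i] + noise_var)
                               for i in range(P.shape[0])])
              gain[chosen] = -np.inf          # never re-pick a site
              i = int(np.argmax(gain))
              chosen.append(i)
              P -= np.outer(P[:, i], P[:, i]) / (P[i, i] + noise_var)
          return chosen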

  18. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography

    NASA Astrophysics Data System (ADS)

    Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to the fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filter performance on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebyshev II-type filters with zero phase and stable impulse response; the most commonly used band-pass filters were implemented by combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals were evaluated. The study considered 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter setting might be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR, and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz.
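
    The filter family evaluated here is directly available in common signal-processing toolboxes. A minimal SciPy sketch of a zero-phase Chebyshev II band-pass at the recommended 1.0-100 Hz setting is shown below; the sampling rate, filter order, and stop-band attenuation are illustrative assumptions, not the values used in the study.

      import numpy as np
      from scipy import signal

      fs = 1000.0                              # assumed sampling rate (Hz)
      fmcg = np.random.default_rng(0).standard_normal(int(10 * fs))  # stand-in trace
      # 8th-order Chebyshev II band-pass, 40 dB stop-band attenuation (assumed):
      sos = signal.cheby2(8, 40, [1.0, 100.0], btype='bandpass', fs=fs, output='sos')
      clean = signal.sosfiltfilt(sos, fmcg)    # forward-backward pass -> zero phase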

  19. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  1. Optimization of the filter parameters in (99m)Tc myocardial perfusion SPECT studies: the formulation of flowchart.

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Yamada, Tomoki; Kamida, Hiroki; Kunishita, Kohei; Hayashi, Yuuki; Nakajima, Tadashi; Kinuya, Seigo

    2016-06-01

    Myocardial perfusion single photon emission computed tomography (SPECT) is typically subject to variation in image quality due to the use of different acquisition protocols, image reconstruction parameters, and image display settings at each institution. One of the principal image reconstruction parameters is the Butterworth filter cut-off frequency, a parameter strongly affecting the quality of myocardial images. The objective of this study was to formulate a flowchart for determining the optimal parameters of the Butterworth filter for the filtered back projection (FBP), ordered subset expectation maximization (OSEM), and collimator-detector response compensation OSEM (CDR-OSEM) methods, using an evaluation system for the myocardial image based on a technical-grounds phantom. SPECT studies were acquired for seven simulated defects, where the average counts of the normal myocardial components of 45° left anterior oblique projections were approximately 10-120 counts/pixel. These SPECT images were then reconstructed by the FBP, OSEM, and CDR-OSEM methods. Visual and quantitative assessments of short-axis images were performed for the defect and normal parts. Finally, we formulated a flowchart indicating the optimal image processing procedure for SPECT images. The correlation between normal myocardial counts and the optimal cut-off frequency could be represented as a regression expression with a high or medium coefficient of determination. The flowchart optimizes the image reconstruction parameters based on a comprehensive assessment, which enabled us to perform processing objectively; furthermore, the usefulness of image reconstruction using the flowchart was demonstrated in a clinical case. PMID:27052439
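
    The reconstruction filter in question has the standard SPECT Butterworth form H(f) = 1 / (1 + (f/fc)^(2n)). The sketch below applies it to one projection image and picks the cut-off from a count-based regression line, mirroring the flowchart's central step; the regression coefficients, filter order, and data are made-up placeholders, not the paper's fitted values.

      import numpy as np

      def butterworth_lowpass_2d(proj, cutoff, order=8):
          """Apply H = 1/(1+(f/fc)^(2n)) to a 2D projection, with spatial
          frequencies in cycles/pixel. Sketch only; values illustrative."""
          fy = np.fft.fftfreq(proj.shape[0])[:, None]
          fx = np.fft.fftfreq(proj.shape[1])[None, :]
          f = np.hypot(fy, fx)
          H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
          return np.real(np.fft.ifft2(np.fft.fft2(proj) * H))

      counts = 60.0                            # mean normal-myocardium counts/pixel
      cutoff = 0.02 + 0.003 * np.sqrt(counts)  # hypothetical fitted regression
      proj = np.random.default_rng(1).poisson(counts, (64, 64)).astype(float)
      smoothed = butterworth_lowpass_2d(proj, cutoff)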

  2. Single-channel noise reduction using unified joint diagonalization and optimal filtering

    NASA Astrophysics Data System (ADS)

    Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-12-01

    In this paper, the important problem of single-channel noise reduction is treated from a new perspective. The problem is posed as a filtering problem based on joint diagonalization of the covariance matrices of the desired and noise signals. More specifically, the eigenvectors from the joint diagonalization corresponding to the least significant eigenvalues are used to form a filter, which effectively estimates the noise when applied to the observed signal. This estimate is then subtracted from the observed signal to form an estimate of the desired signal, i.e., the speech signal. In doing this, we consider two cases, where, respectively, no distortion and distortion are incurred on the desired signal. The former can be achieved when the covariance matrix of the desired signal is rank deficient, which is the case, for example, for voiced speech. In the latter case, the covariance matrix of the desired signal is full rank, as is the case, for example, in unvoiced speech. Here, the amount of distortion incurred is controlled via a simple, integer parameter, and the more distortion allowed, the higher the output signal-to-noise ratio (SNR). Simulations demonstrate the properties of the two solutions. In the distortionless case, the proposed filter achieves only a slightly worse output SNR, compared to the Wiener filter, along with no signal distortion. Moreover, when distortion is allowed, it is possible to achieve higher output SNRs compared to the Wiener filter. Alternatively, when a lower output SNR is accepted, a filter with less signal distortion than the Wiener filter can be constructed.
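
    The core operation can be sketched with a generalized eigendecomposition, which jointly diagonalizes the two covariance matrices. In the snippet below, the eigenvectors with the smallest eigenvalues span noise-dominated directions; projecting the observed frame onto them yields a noise estimate that is then subtracted. The integer p plays the role of the distortion knob mentioned above; the whole snippet is an illustrative reading of the method, not the authors' code.

      import numpy as np
      from scipy.linalg import eigh

      def jd_denoise_frame(y, Rd, Rn, p):
          """Joint-diagonalization noise reduction for one length-M frame y.

          eigh(Rd, Rn) returns B with B.T @ Rn @ B = I and B.T @ Rd @ B
          diagonal (eigenvalues ascending), so the first p columns span
          directions where the desired signal is weakest relative to noise."""
          w, B = eigh(Rd, Rn)
          coords = B.T @ Rn @ y                # coordinates of y in the joint basis
          noise_est = B[:, :p] @ coords[:p]    # noise-subspace reconstruction
          return y - noise_est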

  3. Fiber Bragg grating based notch filter for bit-rate-transparent NRZ to PRZ format conversion with two-degree-of-freedom optimization

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Shu, Xuewen; Atai, Javid; Zuo, Jun; Xiong, Bangyun; Shen, Fangcheng; Liu, xin; Cheng, Jianqun

    2015-02-01

    We propose a novel notch-filtering scheme for bit-rate transparent all-optical NRZ-to-PRZ format conversion. The scheme is based on a two-degree-of-freedom optimally designed fiber Bragg grating. It is shown that a notch filter optimized for any specific operating bit rate can be used to realize high-Q-factor format conversion over a wide bit rate range without requiring any tuning.

  4. Research on improved mechanism for particle filter

    NASA Astrophysics Data System (ADS)

    Yu, Jinxia; Xu, Jingmin; Tang, Yongli; Zhao, Qian

    2013-03-01

    Based on an analysis of the particle filter algorithm, two improved mechanisms are studied to improve the performance of the particle filter. First, a hybrid proposal distribution with an annealing parameter is studied, which uses information from the latest observed measurement to optimize the particle filter. Then, the resampling step of the particle filter is improved by two methods based on partial stratified resampling (PSR): one uses the optimization idea to improve the weights after PSR, while the other improves the weights before PSR and applies an adaptive mutation operation to all particles to preserve the diversity of the particle set after PSR. Finally, simulations of single-object tracking are carried out, and the performance of the improved mechanisms for the particle filter is evaluated.
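
    For reference, the resampling step that the two PSR variants build on can be sketched as plain stratified resampling: one uniform draw per equal-probability stratum, which keeps the selection low-variance. This is the textbook building block only; the weight-optimization and mutation steps described above are not shown.

      import numpy as np

      def stratified_resample(weights, rng):
          """Return particle indices drawn by stratified resampling.

          weights must be nonnegative and sum to one; one uniform sample
          is drawn inside each of the n equal-probability strata."""
          n = len(weights)
          positions = (rng.random(n) + np.arange(n)) / n
          return np.searchsorted(np.cumsum(weights), positions)

      idx = stratified_resample(np.full(100, 0.01), np.random.default_rng(0))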

  5. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    SciTech Connect

    Gill, K; Aldoohan, S; Collier, J

    2014-06-01

    Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and to measure effective doses based on the CT dose index (CTDIvol) for the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom and a radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel, and copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filter shapes the energy spectrum, which reduces the dose by 27%, with no noticeable change in low-contrast detectability. Conclusion: The effective dose is strongly dependent on the CTDIvol, which in turn depends strongly on the beam-hardening filter. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of the images.

  6. Design and optimization of fundamental mode filters based on long-period fiber gratings

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Yang; Wei, Jin; Sheng, Yong; Ren, Nai-Fei

    2016-07-01

    A segment of long-period fiber grating (LPFG) that can selectively filter the fundamental mode in a few-mode optical fiber is proposed. By applying an appropriately chosen surrounding material and an apodized LPFG configuration, high fundamental-mode loss and low higher-order core-mode loss can be achieved simultaneously. In addition, we propose a method of cascading LPFGs with different periods to expand the bandwidth of the mode filter. Numerical simulation shows that the operating bandwidth of the cascaded structure can be as large as 23 nm even if the refractive index of the surrounding liquid varies with the environmental temperature.

  7. Optimization of excitation-emission band-pass filter for visualization of viable bacteria distribution on the surface of pork meat.

    PubMed

    Nishino, Ken; Nakamura, Kazuaki; Tsuta, Mizuki; Yoshimura, Masatoshi; Sugiyama, Junichi; Nakauchi, Shigeki

    2013-05-20

    A novel method of optically reducing the dimensionality of an excitation-emission matrix (EEM) by optimizing the excitation and emission band-pass filters was proposed and applied to the visualization of viable bacteria on pork. Filters were designed theoretically using an EEM data set for evaluating colony-forming units on pork samples assuming signal-to-noise ratios of 100, 316, or 1000. These filters were evaluated using newly measured EEM images. The filters designed for S/N = 100 performed the best and allowed the visualization of viable bacteria distributions. The proposed method is expected to be a breakthrough in the application of EEM imaging. PMID:23736477

  8. Numerical experiment optimization to obtain the characteristics of the centrifugal pump steps package

    NASA Astrophysics Data System (ADS)

    Boldyrev, S. V.; Boldyrev, A. V.

    2014-12-01

    A numerical simulation method for turbulent flow in the running space of a centrifugal pump working stage, using periodicity conditions, has been formulated. The proposed method allows the characteristic indices of a single pump stage to be calculated at a lower computational cost. A comparison of the calculated pump characteristics with experimental data has been carried out.

  9. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments, where finite impulse response filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate-change ratios in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter performed in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling; in fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the output is viewed as the sum of M sub-filters of length N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first
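
    As a concrete contrast between the naive and polyphase views described above, the sketch below filters the M down-phased input streams with the M sub-filters h[k::M] and sums the branch outputs, so no full-rate outputs are computed and discarded. It illustrates generic polyphase decimation, not the TD-MRFIR implementation itself.

      import numpy as np

      def polyphase_decimate(x, h, M):
          """Decimate-by-M FIR via polyphase branches (sketch).

          Branch k convolves x_k[n] = x[n*M - k] with sub-filter h[k::M];
          summing the branches equals np.convolve(x, h)[::M]."""
          if len(h) % M:
              h = np.concatenate((h, np.zeros(M - len(h) % M)))
          branches = []
          for k in range(M):
              xk = x[0::M] if k == 0 else np.concatenate(([0.0], x[M - k::M]))
              branches.append(np.convolve(xk, h[k::M]))
          y = np.zeros(max(len(b) for b in branches))
          for b in branches:
              y[:len(b)] += b
          return y

      # Check against the naive full-rate-then-downsample reference:
      x = np.random.default_rng(2).standard_normal(50)
      h = np.random.default_rng(3).standard_normal(12)
      r = np.convolve(x, h)[::4]
      p = polyphase_decimate(x, h, 4)
      n = min(len(r), len(p))
      assert np.allclose(r[:n], p[:n])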

  10. Optimizing planar lipid bilayer single-channel recordings for high resolution with rapid voltage steps.

    PubMed Central

    Wonderlin, W F; Finkel, A; French, R J

    1990-01-01

    We describe two enhancements of the planar bilayer recording method which enable low-noise recordings of single-channel currents activated by voltage steps in planar bilayers formed on apertures in partitions separating two open chambers. First, we have refined a simple and effective procedure for making small bilayer apertures (25-80 μm diam) in plastic cups. These apertures combine the favorable properties of very thin edges, good mechanical strength, and low stray capacitance. In addition to enabling formation of small, low-capacitance bilayers, this aperture design also minimizes the access resistance to the bilayer, thereby improving the low-noise performance. Second, we have used a patch-clamp headstage modified to provide logic-controlled switching between a high-gain (50 GΩ) feedback resistor for high-resolution recording and a low-gain (50 MΩ) feedback resistor for rapid charging of the bilayer capacitance. The gain is switched from high to low before a voltage step and then back to high gain 25 microseconds after the step. With digital subtraction of the residual currents produced by the gain switching and electrostrictive changes in bilayer capacitance, we can achieve a steady current baseline within 1 ms after the voltage step. These enhancements broaden the range of experimental applications for the planar bilayer method by combining the high resolution previously attained only with small bilayers formed on pipette tips with the flexibility of experimental design possible with planar bilayers in open chambers. We illustrate the application of these methods with recordings of the voltage-step activation of a voltage-gated potassium channel. PMID:1698470

  11. Optimization of plasma parameters with magnetic filter field and pressure to maximize H- ion density in a negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young; Chung, Kyoung-Jae; Hwang, Y. S.

    2016-02-01

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H- populations for various filter field strengths and pressures. Enhanced H- population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H- sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.

  13. Optimization of a femtosecond Ti:sapphire amplifier using an acousto-optic programmable dispersive filter and a genetic algorithm.

    SciTech Connect

    Korovyanko, O. J.; Rey-de-Castro, R.; Elles, C. G.; Crowell, R. A.; Li, Y.

    2006-01-01

    The temporal output of a Ti:sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude, and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that a severe mismatch between the compressor and stretcher can be compensated for in a short period of time.

  14. A multiobjective optimization approach for combating Aedes aegypti using chemical and biological alternated step-size control.

    PubMed

    Dias, Weverton O; Wanner, Elizabeth F; Cardoso, Rodrigo T N

    2015-11-01

    Dengue epidemics, caused by one of the most important viral diseases worldwide, can be prevented by combating the transmission vector Aedes aegypti. In support of this aim, this article analyzes the Dengue vector control problem in a multiobjective optimization approach, in which the intention is to minimize both social and economic costs, using a dynamic mathematical model representing the mosquito population. The problem consists of finding optimal alternated step-size control policies combining chemical control (via application of insecticides) and biological control (via insertion of sterile males produced by irradiation). All the optimal policies consist of applying insecticides just at the beginning of the season and then keeping the mosquitoes at an acceptable level by releasing a small number of sterile males into the environment. The optimization model analysis is driven by the use of genetic algorithms. Finally, a statistical test shows that the multiobjective approach is effective in achieving the same effect as variations in the cost parameters. Using the proposed methodology, it is thus possible to find, in a single run, for a given decision maker, the optimal number of days and the respective amounts in which each control strategy must be applied, according to the tradeoff between using more insecticide with fewer transmitting mosquitoes or more sterile males with more transmitting mosquitoes. PMID:26362231

  15. Optimization of 3D laser scanning speed by use of combined variable step

    NASA Astrophysics Data System (ADS)

    Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.

    2014-03-01

    The problem of slow 3D technical vision system (TVS) operation caused by a constant small scanning step is solved in the presented research by applying a combined scanning step for the fast search of n obstacles in unknown surroundings. Such a problem is of key importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. The related technical task of scanning with variable speed, with precise digital mapping only for selected spatial sectors, is therefore considered. A wide range of simulations in MATLAB 7.12.0 of several variants of hypothetical scenes, with a variable number n of obstacles in each scene (including variation of shapes and sizes) and scanning with incremented angle values (0.6° up to 15°), is provided. The aim of the simulations was to detect which angular interval values still permit obtaining maximal information about obstacles without undesired time losses. Three such local maxima were obtained in the simulations and then refined by application of a neural network formalism (Levenberg-Marquardt algorithm). The obtained results were in turn applied to the design of a MET (micro-electro-mechanical transmission) for the practical realization of variable combined-step scanning on an experimental prototype of our previously reported laser scanner.

  16. A comparison of model-based and direct optimization-based filtering algorithms for shear wave velocity reconstruction for electrode vibration elastography

    PubMed Central

    Ingle, Atul; Varghese, Tomy

    2014-01-01

    Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in otherwise healthy tissue can be used as an indicator of possible pathological changes. Electrode vibration elastography involves tracking of a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Based on appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity, enabling visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities that are estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and locations of interfaces. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young's modulus estimates obtained from an incompressibility assumption are within 15 kPa of the true stiffness values obtained from mechanical testing. Based on visual inspection of the outputs of the two algorithms, the particle filtering method produces smoother velocity maps. PMID:25285187
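
    A compact sketch of the first method's filtering idea on synthetic data (the measurement model, noise levels, and random-walk dynamics below are assumptions, not the paper's parameters): the local velocity is the hidden state, and the observation is the incremental arrival-time delay between neighboring lateral positions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: wave arrival times across a stiff inclusion (two velocities).
x = np.linspace(0, 30, 61)                        # lateral position, mm
true_v = np.where((x > 10) & (x < 20), 4.0, 2.0)  # m/s, i.e. mm/ms
arrival = np.cumsum(np.r_[0, np.diff(x)] / true_v) + rng.normal(0, 0.05, x.size)

# Particle filter: hidden state = local velocity with random-walk dynamics;
# observation = arrival-time delay between neighboring positions.
n = 500
particles = rng.uniform(1.0, 6.0, n)
weights = np.full(n, 1.0 / n)
estimates = []
for i in range(1, x.size):
    particles += rng.normal(0, 0.15, n)           # velocity random walk
    particles = np.clip(particles, 0.5, 8.0)
    dt_pred = (x[i] - x[i - 1]) / particles       # predicted time delay
    dt_obs = arrival[i] - arrival[i - 1]
    weights *= np.exp(-0.5 * ((dt_obs - dt_pred) / 0.05) ** 2)
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))
    idx = rng.choice(n, n, p=weights)             # multinomial resampling
    particles, weights = particles[idx], np.full(n, 1.0 / n)

print("estimated velocities (mm/ms):", np.round(estimates[::10], 2))
```

    The posterior mean tracks the step from 2 to 4 mm/ms and back, and the stochastic smoothing of the state is one reason such filters tend to produce less jagged velocity maps than pointwise fits.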

  17. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high-performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x compared to a single-threaded CPU implementation.
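
    For reference, a naive CPU version of the 3D bilateral kernel is sketched below (the parameters and test volume are arbitrary); the GPU variants studied in the report map this same per-voxel neighborhood computation onto threads and device memories in different ways.

```python
import numpy as np

def bilateral_3d(vol, radius=2, sigma_s=1.5, sigma_r=20.0):
    """Naive 3D bilateral filter: anisotropic, edge-preserving smoothing."""
    pad = np.pad(vol, radius, mode="edge")
    out = np.empty_like(vol, dtype=np.float64)
    # Precompute the spatial (domain) Gaussian over the (2r+1)^3 window.
    g = np.arange(-radius, radius + 1)
    zz, yy, xx = np.meshgrid(g, g, g, indexing="ij")
    spatial = np.exp(-(zz**2 + yy**2 + xx**2) / (2 * sigma_s**2))
    w2 = 2 * radius + 1
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                win = pad[z:z + w2, y:y + w2, x:x + w2]
                # Range kernel: penalize intensity differences to keep edges.
                range_k = np.exp(-(win - vol[z, y, x])**2 / (2 * sigma_r**2))
                w = spatial * range_k
                out[z, y, x] = np.sum(w * win) / np.sum(w)
    return out

volume = np.random.default_rng(3).normal(100, 10, (16, 16, 16))
smoothed = bilateral_3d(volume)
print(smoothed.shape, round(float(smoothed.std()), 2))
```

    The triple loop over voxels, each reading a (2r+1)^3 neighborhood, is what makes memory access pattern and thread-block shape the dominant tuning knobs on a GPU.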

  18. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
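
    To make the tuner-selection idea concrete, the toy below replaces the paper's search over linear combinations with a simpler search over parameter subsets: with four drifting health parameters but only two sensors, each two-parameter tuner candidate is scored by the Monte Carlo mean-squared estimation error of its reduced-order Kalman filter. All matrices and noise levels are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Toy stand-in for the underdetermined problem: 4 slowly drifting health
# parameters seen through only 2 sensors, so at most 2 tuners are estimable.
n_p, n_y, steps = 4, 2, 400
H = rng.normal(size=(n_y, n_p))            # sensor sensitivities (assumed known)
q_var, r_var = 1e-4, 1e-2                  # process / measurement noise variances

def mse_for_tuners(sel):
    """Monte Carlo mean-squared health-estimation error for one tuner subset."""
    C = H[:, sel]
    k = len(sel)
    x, P = np.zeros(k), np.eye(k)
    p_true, err = np.zeros(n_p), 0.0
    for _ in range(steps):
        p_true += rng.normal(0, np.sqrt(q_var), n_p)     # truth drifts
        y = H @ p_true + rng.normal(0, np.sqrt(r_var), n_y)
        P += q_var * np.eye(k)                           # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + r_var * np.eye(n_y))
        x = x + K @ (y - C @ x)                          # update
        P = (np.eye(k) - K @ C) @ P
        p_hat = np.zeros(n_p)
        p_hat[list(sel)] = x                             # embed tuners
        err += np.sum((p_hat - p_true) ** 2)
    return err / steps

scores = {sel: mse_for_tuners(sel) for sel in combinations(range(n_p), n_y)}
best = min(scores, key=scores.get)
print("best tuner subset:", best, "MSE:", round(scores[best], 4))
```

    The paper's contribution is doing this selection analytically (steady-state bias and variance) and over arbitrary linear combinations rather than by brute-force simulation over subsets.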

  19. Pareto optimality between width of central lobe and peak sidelobe intensity in the far-field pattern of lossless phase-only filters for enhancement of transverse resolution.

    PubMed

    Mukhopadhyay, Somparna; Hazra, Lakshminarayan

    2015-11-01

    Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique. PMID:26560575
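
    The trade-off itself can be reproduced without PSO. The sketch below sweeps the zone radius of a simple two-zone 0/π phase filter and reports, for each radius, the first null of the central lobe, the relative peak sidelobe intensity, and the Strehl ratio (on-axis intensity, equal to 1 for the clear pupil in these normalized units).

```python
import numpy as np
from scipy.special import j0

# Two-zone lossless phase filter: the inner zone of radius b carries a pi
# phase step. Far-field amplitude of a circular pupil along normalized
# diffraction coordinate v:  U(v) = 2 * int_0^1 P(r) J0(v r) r dr.
r = np.linspace(0.0, 1.0, 1500)
dr = r[1] - r[0]
v = np.linspace(0.0, 15.0, 3000)

def far_field(b):
    P = np.where(r < b, -1.0, 1.0)                 # 0/pi binary phase profile
    return np.array([2.0 * np.sum(P * j0(vi * r) * r) * dr for vi in v])

print("   b   first null   peak sidelobe / peak   Strehl")
for b in (0.0, 0.15, 0.25, 0.35):
    U = far_field(b)
    I = U ** 2
    s = np.sign(U)
    k = np.argmax(s[1:] * s[:-1] < 0) + 1          # first null of central lobe
    print(f"{b:.2f}  {v[k]:10.2f}  {I[k:].max() / I[0]:20.3f}  {I[0]:8.3f}")
```

    For b = 0 this reproduces the Airy pattern (first null near v = 3.83); as b grows the central lobe narrows while the sidelobes and the Strehl ratio degrade, which is precisely the Pareto trade-off the paper explores with particle swarm optimization.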

  20. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    PubMed

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it helps develop reasonable plans for scheduling power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and a nonlinear nature is involved in the complex multiobjective scheduling model; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem in order to find feasible, preliminary solutions for constructing the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663
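
    An illustrative reduction of the two steps (interval data and objective weights invented; the paper's linear program is replaced by a trivial midpoint balance): step 1 builds a feasible preliminary plan from the interval midpoints, step 2 refines it by simulated annealing on a weighted sum of cost and emissions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hourly load and wind forecasts as interval numbers; dispatch decides
# hydro and thermal output per hour. All figures are assumptions.
load = np.array([[90, 110], [140, 160], [120, 130]], float)   # MW intervals
wind = np.array([[20, 40], [10, 30], [25, 45]], float)
c_hydro, c_thermal = 2.0, 5.0        # $/MWh (assumed)
e_hydro, e_thermal = 0.1, 1.0        # emission weights (assumed)

def objectives(plan):
    """plan[h] = (hydro_MW, thermal_MW); wind enters at its interval midpoint."""
    gen = wind.mean(axis=1) + plan.sum(axis=1)
    shortfall = np.abs(gen - load.mean(axis=1)).sum()
    cost = c_hydro * plan[:, 0].sum() + c_thermal * plan[:, 1].sum()
    emis = e_hydro * plan[:, 0].sum() + e_thermal * plan[:, 1].sum()
    return cost + 1e3 * shortfall, emis + 1e3 * shortfall

# Step 1: interval midpoints give a trivial feasible preliminary plan.
residual = load.mean(axis=1) - wind.mean(axis=1)
plan = np.column_stack([residual / 2, residual / 2])

# Step 2: simulated annealing on a weighted-sum scalarization.
def scalarized(p, w=(1.0, 3.0)):
    f = objectives(p)
    return w[0] * f[0] + w[1] * f[1]

T = 50.0
for _ in range(5000):
    cand = np.clip(plan + rng.normal(0.0, 2.0, plan.shape), 0.0, None)
    cand[:, 0] = np.minimum(cand[:, 0], 60.0)    # hydro capacity cap (assumed)
    delta = scalarized(cand) - scalarized(plan)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        plan = cand
    T *= 0.999

print("final (cost, emissions):", tuple(round(f, 1) for f in objectives(plan)))
```

    Re-running with different weight pairs populates the Pareto set one compromise at a time, which is the role the interval/LP step plays in the paper before the annealing refinement.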

  2. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.
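
    The first scheme can be paraphrased as a short loop, here against a toy nonlinear load model (the impulse response and saturation are invented): excite the system with an impulse of strength k, time-reverse the response, normalize it to a fixed gust energy, apply it, and search the scalar k for the largest peak load.

```python
import numpy as np

dt, n, gust_energy = 0.02, 512, 1.0
t = np.arange(n) * dt
h = np.exp(-0.8 * t) * np.sin(6.0 * t)         # load impulse response (assumed)

def simulate(u):
    """Toy nonlinear load response: linear convolution plus soft saturation."""
    y = np.convolve(u, h)[:n] * dt
    return y + 0.3 * np.tanh(2.0 * y)           # nonlinearity (assumed)

def max_load_for(k):
    impulse = np.zeros(n)
    impulse[0] = k / dt                         # discrete impulse of strength k
    shape = simulate(impulse)[::-1]             # time-reverse the response
    u = shape * np.sqrt(gust_energy / (np.sum(shape**2) * dt))  # fix gust energy
    return np.max(np.abs(simulate(u)))

ks = np.linspace(0.1, 10.0, 60)                 # the simple 1-D search
loads = [max_load_for(k) for k in ks]
print(f"critical impulse strength {ks[int(np.argmax(loads))]:.2f}, "
      f"maximized load {max(loads):.3f}")
```

    For a purely linear system the result is independent of k (the matched-filter optimum); the 1-D search only matters because the nonlinearity reshapes the response as the excitation level changes, which is also why the multi-dimensional second scheme can find slightly higher loads.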

  3. Rod-filter-field optimization of the J-PARC RF-driven H− ion source

    SciTech Connect

    Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-08

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H− ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz), and a lifetime of longer than one month, the J-PARC cesiated RF-driven H− ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H− ion source with an internal antenna, no procedure to optimize it had been established. In order to optimize the RFF and establish such a procedure, the beam performance of the J-PARC source with various types of rod-filter magnets (RFMs) was measured. By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H− ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the plasma electrode (PE) temperature (T_PE) below 70 °C, which minimizes the transverse emittances. A 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation of the PE from several tens of minutes to several minutes for T_PE around 60 °C.

  4. Optimizing multi-step B-side charge separation in photosynthetic reaction centers from Rhodobacter capsulatus.

    PubMed

    Faries, Kaitlyn M; Kressel, Lucas L; Dylla, Nicholas P; Wander, Marc J; Hanson, Deborah K; Holten, Dewey; Laible, Philip D; Kirmaier, Christine

    2016-02-01

    Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P(+)QB(-) yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P(+)QB(-) are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P(+)QB(-) from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P(+)HB(-)→P(+)QB(-) electron transfer or initial P*→P(+)HB(-) conversion highlight unmet challenges of optimizing both processes simultaneously. PMID:26658355

  5. Optimization of isopropanol and ammonium sulfate precipitation steps in the purification of plasmid DNA.

    PubMed

    Freitas, S S; Santos, J A L; Prazeres, D M F

    2006-01-01

    Large-scale processes used to manufacture grams of plasmid DNA (pDNA) should be cGMP compliant, economically feasible, and environmentally friendly. Alcohol and salt precipitation techniques are frequently used in pDNA downstream processing as concentration and prepurification steps, respectively. This work describes a study of a standard 2-propanol (IsopOH; 0.7 v/v) and ammonium sulfate (AS; 2.5 M) precipitation. When inserted in a full process, this tandem precipitation scheme has a high economic and environmental impact due to the large amounts of the two precipitant agents and their environmental relevance. Thus, the major goals of the study were the minimization of precipitants and the selection of the best operating conditions for high pDNA recovery and purity. The pDNA concentration in the starting Escherichia coli alkaline lysate strongly affected the efficiency of IsopOH precipitation as a concentration step. The results showed that although an IsopOH concentration of at least 0.6 (v/v) was required to maximize recovery when using lysates with less than 80 μg pDNA/mL, concentrations as low as 0.4 v/v could be used with more concentrated lysates (170 μg pDNA/mL). Following resuspension of pDNA pellets generated by 0.6 v/v IsopOH, precipitation at 4 °C with 2.4 M AS consistently resulted in recoveries higher than 80% and in removal of more than 90% of the impurities (essentially RNA). An experimental design further indicated that the AS concentration could be reduced down to 2.0 M, resulting in an acceptable purity (21-23%) without compromising recovery (84-86%). Plasmid recovery and purity after the sequential IsopOH/AS precipitation could be further improved by increasing the concentration factor (CF) upon IsopOH precipitation from 2 up to 25. Under these conditions, IsopOH and AS concentrations of 0.60 v/v and 1.6 M resulted in high recovery (approximately 100%) and purity (32%). In conclusion, it is possible to reduce the amounts of both precipitants substantially while maintaining high pDNA recovery and purity.

  6. Control system optimization studies. Volume 2: High frequency cutoff filter analysis

    NASA Technical Reports Server (NTRS)

    Fong, M. H.

    1972-01-01

    The problem of digital implementation of a cutoff filter is approached with consideration of word length, sampling rate, accuracy requirements, computing time, and hardware restrictions. Computing time and hardware requirements for four possible programming forms for the linear portions of the filter are determined. Upper bounds for the steady-state system output error due to quantization are derived for digital control systems containing a digital network programmed both in the direct form and in the canonical form. This is accomplished by defining a set of error equations in the z domain and then applying the final value theorem to the solution. Quantization error was found to depend upon the digital word length, sampling rate, and system time constants. The error bound developed may be used to estimate the digital word length and sampling rate required to achieve a given system specification. From the standpoint of quantization error accumulation, computing time, and hardware, and given that complex poles and zeros must be realized, the canonical form of programming appears preferable.
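
    The two programming forms compared in the study can be sketched as follows for one second-order stage (the coefficients are arbitrary); quantizing after each multiply-accumulate makes the word-length effect visible against a near-double-precision reference.

```python
import numpy as np

def quantize(v, bits=12):
    """Round values to a fixed-point grid of the given word length."""
    q = 2.0 ** (bits - 1)
    return np.round(v * q) / q

# A second-order low-pass stage of a cutoff filter (coefficients assumed).
b = np.array([0.0675, 0.1349, 0.0675])
a = np.array([1.0, -1.1430, 0.4128])

def direct_form(x, bq, aq, bits):
    """Direct form I: quantize after every multiply-accumulate."""
    xz, yz, out = [0.0, 0.0], [0.0, 0.0], []
    for xn in x:
        yn = quantize(bq[0] * xn + bq[1] * xz[0] + bq[2] * xz[1]
                      - aq[1] * yz[0] - aq[2] * yz[1], bits)
        xz, yz = [xn, xz[0]], [yn, yz[0]]
        out.append(yn)
    return np.array(out)

def canonical_form(x, bq, aq, bits):
    """Direct form II (canonical): half the delay elements of direct form I."""
    w, out = [0.0, 0.0], []
    for xn in x:
        wn = quantize(xn - aq[1] * w[0] - aq[2] * w[1], bits)
        out.append(quantize(bq[0] * wn + bq[1] * w[0] + bq[2] * w[1], bits))
        w = [wn, w[0]]
    return np.array(out)

bits = 12
bq, aq = quantize(b, bits), quantize(a, bits)
x = np.sign(np.sin(2 * np.pi * 0.01 * np.arange(400)))   # square-wave input
ref = canonical_form(x, b, a, bits=52)                   # near double precision
for name, fn in (("direct", direct_form), ("canonical", canonical_form)):
    err = np.max(np.abs(fn(x, bq, aq, bits) - ref))
    print(f"{name:9s} form, {bits}-bit: worst-case output error {err:.2e}")
```

    Sweeping `bits` reproduces the qualitative conclusion: the achievable accuracy is set jointly by word length, sampling rate, and the filter's time constants.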

  7. Optimization of conditions for the single step IMAC purification of miraculin from Synsepalum dulcificum.

    PubMed

    He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B

    2015-08-15

    In this study, methods for the extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate-buffered saline, Tris-HCl, and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH, and imidazole concentration in the elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5 M NaCl as compared to Tris-HCl and phosphate-buffered saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in the IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7, and 300 mM imidazole as elution buffer, which gave an overall yield of 80.3% and purity of 97.5%. IMAC with nickel-NTA was successfully used as a single-step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715

  8. Reaction null-space filter: extracting reactionless synergies for optimal postural balance from motion capture data.

    PubMed

    Nenchev, D N; Miyamoto, Y; Iribe, H; Takeuchi, K; Sato, D

    2016-06-01

    This paper introduces the notion of a reactionless synergy: a postural variation for a specific motion pattern/strategy whereby the movements of the segments do not alter the force/moment balance at the feet. Given an initial posture that is optimal in terms of stability, a reactionless synergy can ensure optimality throughout the entire movement. Reactionless synergies are derived via a dynamical model wherein the feet are regarded as unfixed. Though in contrast with conventional fixed-feet models, this approach has the advantage of exhibiting the reactions at the feet explicitly. The dynamical model also facilitates a joint-space decomposition scheme yielding two motion components: the reactionless synergy and an orthogonal complement responsible for the dynamical coupling between the feet and the support. Since the reactionless synergy provides the basis (a feedforward control component) for optimal balance control, it may play an important role when evaluating balance abnormalities or when assessing optimality in balance control. We show how to apply the proposed method to the analysis of motion capture data obtained from three voluntary movement patterns in the sagittal plane: squat, sway, and forward bend. PMID:26273732
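
    The decomposition has a compact linear-algebra core. Assuming a coupling matrix Mc mapping joint accelerations to the reaction at the feet (a random stand-in here; in reality it is configuration-dependent and derived from the unfixed-feet model), reactionless synergies live in the null space of Mc:

```python
import numpy as np

rng = np.random.default_rng(5)

n_joints, n_reaction = 7, 3
Mc = rng.normal(size=(n_reaction, n_joints))   # coupling matrix (stand-in)

# Null-space projector: N = I - pinv(Mc) @ Mc.
N = np.eye(n_joints) - np.linalg.pinv(Mc) @ Mc

ddq = rng.normal(size=n_joints)                # e.g., from motion capture
ddq_rls = N @ ddq                              # reactionless synergy component
ddq_cpl = ddq - ddq_rls                        # orthogonal complement (couples to feet)

print("reaction from full motion:       ", np.round(Mc @ ddq, 3))
print("reaction from reactionless part: ", np.round(Mc @ ddq_rls, 3))  # ~ zero
print("reaction from coupling component:", np.round(Mc @ ddq_cpl, 3))
```

    Since Mc @ pinv(Mc) @ Mc = Mc, the projected component produces exactly zero reaction, which is the property that makes it usable as a feedforward term for postural balance.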

  9. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Beal, R. C.; Tilley, D. G.

    1981-01-01

    The impulse response of the SAR system is not a delta function, and the spectra represent the product of the underlying image spectrum with the transform of the impulse response, which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter, and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial, which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction of the effects of speckle is effected by the subtraction of a bias from the spectrum.
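
    A schematic numpy version of the procedure, on a synthetic spectrum (the transfer function, swell location, and noise floor are invented): smooth with a 5 x 5 kernel, sample the trend along the cut normal to the wave direction, fit a polynomial, divide it out, and subtract a speckle bias.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for a SEASAT image spectrum: a swell peak riding on a
# falling system transfer function plus a speckle noise floor.
k = np.linspace(-1.0, 1.0, 256)
KX, KY = np.meshgrid(k, k)
transfer = np.exp(-2.0 * (KX**2 + KY**2))      # transform of impulse response
swell = np.exp(-((KX - 0.3)**2 + KY**2) / 0.003)
spectrum = transfer * (1.0 + 5.0 * swell) + 0.05 + 0.02 * rng.random(KX.shape)

# 5 x 5 convolution smoothing of the raw spectrum.
pad = np.pad(spectrum, 2, mode="edge")
smooth = sum(pad[i:i + 256, j:j + 256] for i in range(5) for j in range(5)) / 25.0

# Sample the trend along a cut normal to the dominant wave direction (the KY
# axis here, away from the swell), fit a polynomial, and divide it out.
trend = smooth[:, 128]
coeff = np.polyfit(k, trend, 4)                       # fourth-order fit
radius = np.clip(np.sqrt(KX**2 + KY**2), 0.0, 1.0)    # stay inside fitted range
corrected = smooth / np.polyval(coeff, radius)
corrected -= np.median(corrected)                     # speckle bias subtraction

iy, ix = np.unravel_index(np.argmax(corrected), corrected.shape)
print(f"recovered swell peak at (kx, ky) = ({k[ix]:.2f}, {k[iy]:.2f})")
```

    Sampling the trend away from the swell is what lets the fitted polynomial stand in for the transform of the impulse response, so dividing it out flattens the spectrum everywhere except at the true wave peaks.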
