Science.gov

Sample records for optimized filtering step

  1. STEPS: a grid search methodology for optimized peptide identification filtering of MS/MS database search results.

    PubMed

    Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D

    2013-03-01

For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
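    The grid-search idea is simple enough to sketch. The PSM fields, thresholds, and decoy-based FDR rule below are illustrative stand-ins, not the authors' actual implementation:

```python
from itertools import product

def grid_search_filter(psms, score_cuts, ppm_cuts, max_fdr=0.01):
    """Pick the (score, ppm) cutoff pair that maximizes passing target PSMs
    while keeping the decoy-estimated FDR under max_fdr.
    Each PSM is a dict with 'score', 'ppm_error', and 'is_decoy' keys."""
    best, best_n = None, -1
    for s_cut, p_cut in product(score_cuts, ppm_cuts):
        passed = [p for p in psms
                  if p['score'] >= s_cut and abs(p['ppm_error']) <= p_cut]
        targets = sum(not p['is_decoy'] for p in passed)
        decoys = sum(p['is_decoy'] for p in passed)
        fdr = decoys / targets if targets else 1.0
        if fdr <= max_fdr and targets > best_n:
            best, best_n = (s_cut, p_cut), targets
    return best, best_n
```

    Extending the grid to more filter criteria only means adding another axis to `product`, which is what makes the approach adaptable to any search engine's output fields.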

  2. STEPS: A Grid Search Methodology for Optimized Peptide Identification Filtering of MS/MS Database Search Results

    SciTech Connect

    Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2013-03-01

For bottom-up proteomics there is a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.

  3. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The ''inverted candle filter system'' uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The ''sheet filter system'' uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. 
A plan for the Option I, bench-scale test program has also been developed based

  4. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program

  5. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

This work extends the idea of spectral optimization for the design of Weighted Median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSP's) are as close as possible to an SSP set predetermined by Mallows. Sample selection probabilities have been used as a basis for designing stack smoothers as they give a measure of the filter's detail preserving ability and give non-negative filter weights. We will extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike the linear filters, however, the weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
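    A minimal sketch of the filter class involved: a weighted median admitting negative weights via the usual sign-coupling rule (this illustrates the operator itself, not the adaptive design procedure of the paper):

```python
def weighted_median(samples, weights):
    """Weighted median allowing negative weights: a negative weight flips
    the sign of its sample and contributes its magnitude |w| (the standard
    sign-coupling rule for WM filters)."""
    coupled = sorted((s * (1 if w >= 0 else -1), abs(w))
                     for s, w in zip(samples, weights))
    half = sum(w for _, w in coupled) / 2.0
    acc = 0.0
    for value, w in coupled:
        acc += w
        if acc >= half:
            return value
```

    With all weights equal this reduces to the ordinary median, which is where the robustness to impulsive noise mentioned in the abstract comes from.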

  6. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumptions that one can assign money to customers. This assumption is reasonable; a

  7. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
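    The adaptive-step search loop can be sketched generically. The score function, numerical gradient, and step-shrinking acceptance rule below are assumptions for illustration, not JPL's implementation:

```python
def adaptive_gradient_ascent(score, params, step=0.1, shrink=0.5,
                             iters=50, eps=1e-3):
    """Climb a scalar performance metric (e.g. a blend of correlation peak
    height and peak-to-sidelobe ratio) over filter parameters such as
    (alpha, beta, gamma), using a forward-difference gradient and an
    adaptive step that shrinks when an uphill move is not found."""
    params = list(params)
    for _ in range(iters):
        base = score(params)
        grad = []
        for i in range(len(params)):
            trial = params[:]
            trial[i] += eps
            grad.append((score(trial) - base) / eps)   # numerical gradient
        trial = [p + step * g for p, g in zip(params, grad)]
        if score(trial) > base:
            params = trial          # accept the uphill move
        else:
            step *= shrink          # adapt: shrink the step and retry
    return params
```

    In the filter-generation setting, `score` would wrap a full correlation test of the candidate OT-MACH filter against training imagery.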

  8. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.

  9. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the

  10. Particle Swarm Optimization with Dynamic Step Length

    NASA Astrophysics Data System (ADS)

    Cui, Zhihua; Cai, Xingjuan; Zeng, Jianchao; Sun, Guoji

Particle swarm optimization (PSO) is a robust swarm intelligence technique inspired by birds flocking and fish schooling. Though many effective improvements have been proposed, premature convergence is still its main problem. Because each particle's movement is a continuous process and can be modelled with groups of differential equations, a new variant, particle swarm optimization with dynamic step length (PSO-DSL), with an additional control coefficient, the step length, is introduced. Absolute stability theory is then applied to analyze the stability character of the standard PSO; the theoretical result indicates that PSO with a constant step length cannot always be stable, which may be one of the reasons for premature convergence. Simulation results show that PSO-DSL is effective.
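    A toy rendering of the idea: a standard PSO loop where an illustrative step length h decays over the run and scales each position update. The schedule and coefficients are placeholders, not the paper's:

```python
import random

def pso_dsl(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Minimize f with a PSO whose position update is scaled by a dynamic
    step length h, decaying from 1.0 to 0.1 across the run (sketch only)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=f)[:]                       # global best
    for t in range(iters):
        h = 1.0 - 0.9 * t / iters              # dynamic step length
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += h * V[i][d]          # step length scales the move
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g
```

    Shrinking h late in the run damps the oscillations that a constant-step PSO can sustain, which is the instability the stability analysis points at.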

  11. Optimal time step for incompressible SPH

    NASA Astrophysics Data System (ADS)

    Violeau, Damien; Leroy, Agnès

    2015-05-01

A classical incompressible algorithm for Smoothed Particle Hydrodynamics (ISPH) is analyzed in terms of critical time step for numerical stability. For this purpose, a theoretical linear stability analysis is conducted for unbounded homogeneous flows, leading to an analytical formula for the maximum CFL (Courant-Friedrichs-Lewy) number as a function of the Fourier number. This gives the maximum time step as a function of the fluid viscosity, the flow velocity scale and the SPH discretization size (kernel standard deviation). Importantly, the maximum CFL number at large Reynolds number is about half that of the traditional Weakly Compressible (WCSPH) approach. As a consequence, the optimal time step for ISPH is only five times larger than with WCSPH. The theory agrees very well with numerical data for two usual kernels in a 2-D periodic flow. On the other hand, numerical experiments in a plane Poiseuille flow show that the theory overestimates the maximum allowed time step for small Reynolds numbers.
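    The resulting bound has the generic advective/viscous form sketched below. The coefficients here are placeholders, not the analytical values derived in the paper:

```python
def max_time_step(u, nu, dx, cfl=0.25, fo=0.125):
    """Combine an advective (CFL) constraint dt <= cfl*dx/u with a viscous
    (Fourier) constraint dt <= fo*dx^2/nu; u is the velocity scale, nu the
    kinematic viscosity, dx the discretization size."""
    dt_adv = cfl * dx / u if u > 0 else float('inf')
    dt_visc = fo * dx * dx / nu if nu > 0 else float('inf')
    return min(dt_adv, dt_visc)
```

    At large Reynolds number the advective term dominates, which is the regime where the paper compares ISPH against WCSPH.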

  12. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
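    For scalar particles, the resample-move step can be sketched as systematic resampling followed by one random-walk Metropolis move per particle, which is what fights the sample impoverishment the abstract describes. The interface and jitter scale are illustrative:

```python
import math
import random

def resample_move(particles, weights, loglik, jitter=0.1):
    """Systematic resampling of weighted scalar particles, then one
    Metropolis move per particle; loglik maps a particle to its
    log-likelihood under the target (illustrative interface)."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = random.uniform(0.0, step)
    out, acc, i = [], weights[0], 0
    for _ in range(n):                      # systematic resampling
        while u > acc:
            i += 1
            acc += weights[i]
        out.append(particles[i])
        u += step
    moved = []
    for p in out:                           # move step restores diversity
        q = p + random.gauss(0.0, jitter)
        if math.log(random.random() + 1e-300) < loglik(q) - loglik(p):
            p = q
        moved.append(p)
    return moved
```

    Because the Metropolis kernel leaves the target distribution invariant, the move step adds diversity without biasing the posterior approximation.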

  13. GNSS data filtering optimization for ionospheric observation

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.

    2015-12-01

In the last years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS data, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, usually, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim of tackling the non-ionospheric effects induced by multipath at ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aiming at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained by the GNSS signal in a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy

  14. Optimal edge filters explain human blur detection.

    PubMed

    McIlhagga, William H; May, Keith A

    2012-01-01

Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N1, and N3+) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222
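    The derivative-of-Gaussian shape recovered by the classification images is easy to reproduce directly; the sigma and support below are arbitrary choices for illustration:

```python
import math

def gaussian_derivative_filter(sigma, half_width):
    """First-derivative-of-Gaussian kernel: convolved with a 1-D luminance
    profile, it responds maximally at the location of an edge."""
    xs = range(-half_width, half_width + 1)
    return [-x / sigma**2 * math.exp(-x * x / (2.0 * sigma**2)) for x in xs]

def convolve_valid(signal, kernel):
    """Plain 'valid'-mode convolution (kernel flipped, no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[k - 1 - j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

    Applied to a step edge, the response peaks where the luminance changes, and broadening sigma tunes the filter to more blurred edges.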

  15. Metal finishing wastewater pressure filter optimization

    SciTech Connect

    Norford, S.W.; Diener, G.A.; Martin, H.L.

    1992-01-01

The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming Industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated from the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal; 85% of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.

  16. Metal finishing wastewater pressure filter optimization

    SciTech Connect

    Norford, S.W.; Diener, G.A.; Martin, H.L.

    1992-12-31

The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming Industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated from the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal; 85% of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.

  17. An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers

    SciTech Connect

    Gelb, Anne; Archibald, Richard K

    2015-01-01

Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high-order filtering still requires cost-prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, but also take advantage of the accuracy of a high-order filter.
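    For reference, the standard non-adaptive baseline is a fixed-strength exponential spectral filter; the paper's contribution is precisely to vary the strength locally, which this sketch does not do. Order and strength values are conventional choices, not the paper's:

```python
import numpy as np

def exponential_filter(u, p=8, alpha=36.0):
    """Fixed exponential spectral filter of order 2p: damp each Fourier
    mode by exp(-alpha * |k|^(2p)), with |k| normalized to [0, 1]."""
    N = len(u)
    k = np.fft.fftfreq(N) * 2.0             # normalized wavenumbers in [-1, 1)
    sigma = np.exp(-alpha * np.abs(k) ** (2 * p))
    return np.real(np.fft.ifft(np.fft.fft(u) * sigma))
```

    Low modes pass essentially untouched while modes near the grid cutoff are annihilated; an adaptive filter instead keys the damping to where spurious oscillations actually appear.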

  18. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.

  19. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to adaptively determine alpha using different indicators such as the coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter tends to under- or over-filter. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.
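    The core Goldstein patch operation is compact; this sketch keeps only the spectral weighting and omits the magnitude smoothing and patch overlap of practical implementations:

```python
import numpy as np

def goldstein_patch(patch, alpha):
    """Goldstein filtering of one complex interferogram patch: weight the
    spectrum by its normalized magnitude raised to the power alpha.
    alpha = 0 leaves the patch untouched; larger alpha filters harder."""
    Z = np.fft.fft2(patch)
    S = np.abs(Z)
    S = S / S.max()                 # normalize so weights lie in [0, 1]
    return np.fft.ifft2(Z * S ** alpha)
```

    Because the weighting sharpens dominant fringe peaks relative to broadband noise, choosing alpha per patch from a noise indicator is exactly where the adaptive variants discussed above differ.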

  20. Optimal filter bandwidth for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
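    The variable-corner low-pass stage can be sketched as a one-pole IIR filter; the paper's actual filter stages are not specified in the abstract, so this is a generic stand-in:

```python
import math

def lowpass(samples, fc, fs):
    """One-pole IIR low-pass applied to photodiode samples; fc is the
    corner frequency the study varied (0.66-15 Hz), fs the sampling rate."""
    a = math.exp(-2.0 * math.pi * fc / fs)   # pole location for corner fc
    y, out = 0.0, []
    for x in samples:
        y = a * y + (1.0 - a) * x
        out.append(y)
    return out
```

    Sweeping fc down toward the pulse fundamental, as in the experiment, removes the harmonics while preserving the component the R-ratio computation relies on.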

  1. Optimal filter bandwidth for pulse oximetry.

    PubMed

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.

  2. Initial steps of inactivation at the K+ channel selectivity filter.

    PubMed

    Thomson, Andrew S; Heer, Florian T; Smith, Frank J; Hendron, Eunan; Bernèche, Simon; Rothberg, Brad S

    2014-04-29

    K(+) efflux through K(+) channels can be controlled by C-type inactivation, which is thought to arise from a conformational change near the channel's selectivity filter. Inactivation is modulated by ion binding near the selectivity filter; however, the molecular forces that initiate inactivation remain unclear. We probe these driving forces by electrophysiology and molecular simulation of MthK, a prototypical K(+) channel. Either Mg(2+) or Ca(2+) can reduce K(+) efflux through MthK channels. However, Ca(2+), but not Mg(2+), can enhance entry to the inactivated state. Molecular simulations illustrate that, in the MthK pore, Ca(2+) ions can partially dehydrate, enabling selective accessibility of Ca(2+) to a site at the entry to the selectivity filter. Ca(2+) binding at the site interacts with K(+) ions in the selectivity filter, facilitating a conformational change within the filter and subsequent inactivation. These results support an ionic mechanism that precedes changes in channel conformation to initiate inactivation.

  3. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms the filter coefficients are only partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms combine good convergence speed, low steady-state mean-square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
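The VSS variants themselves are not specified in enough detail in the abstract to reproduce; as a baseline for what they extend, a plain NLMS system-identification loop (with a hypothetical channel and illustrative parameters) looks like:

```python
import random

def nlms_identify(x, d, num_taps, mu=0.5, eps=1e-6):
    """Baseline NLMS: adapt weights w so that w . [x[n], x[n-1], ...]
    tracks the desired signal d. The VSS algorithms replace the fixed
    scalar mu with an MSD-optimized step-size vector."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        window = x[n - num_taps + 1:n + 1][::-1]   # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y
        norm = sum(xi * xi for xi in window) + eps
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, window)]
    return w

random.seed(1)
true_h = [0.8, -0.3, 0.1]                          # hypothetical unknown channel
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = [sum(true_h[k] * x[n - k] for k in range(len(true_h)) if n - k >= 0)
     for n in range(len(x))]

w_hat = nlms_identify(x, d, num_taps=3)            # converges toward true_h
```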

  4. Design of optimal correlation filters for hybrid vision systems

    NASA Technical Reports Server (NTRS)

    Rajan, Periasamy K.

    1990-01-01

    Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited to computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.

  5. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. The algorithm can also be extended to other Kalman filters for measurement subspace selection.

  6. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g., near higher-frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum-phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parameterization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.

  7. Single-channel noise reduction using optimal rectangular filtering matrices.

    PubMed

    Long, Tao; Chen, Jingdong; Benesty, Jacob; Zhang, Zhenxi

    2013-02-01

    This paper studies the problem of single-channel noise reduction in the time domain and presents a block-based approach where a vector of the desired speech signal is recovered by filtering a frame of the noisy signal with a rectangular filtering matrix. With this formulation, the noise reduction problem becomes one of estimating an optimal filtering matrix. To achieve such estimation, a method is introduced to decompose a frame of the clean speech signal into two orthogonal components: One correlated and the other uncorrelated with the current desired speech vector to be estimated. Different optimization cost functions are then formulated from which non-causal optimal filtering matrices are derived. The relationships among these optimal filtering matrices are discussed. In comparison with the classical sample-based technique that uses only forward prediction, the block-based method presented in this paper exploits both forward and backward prediction as well as temporal interpolation and, therefore, can improve the noise reduction performance by fully taking advantage of the speech property of self-correlation. There is also a side advantage of this block-based method as compared to the sample-based technique, i.e., it is computationally more efficient and, as a result, more suitable for practical implementation. PMID:23363124

  9. Ares-I Bending Filter Design using a Constrained Optimization Approach

    NASA Technical Reports Server (NTRS)

    Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth

    2008-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.

  10. Optimization of filtering schemes for broadband astro-combs.

    PubMed

    Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X

    2012-10-22

    To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error.

  11. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can far outperform any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  12. Optimal Correlation Filters for Images with Signal-Dependent Noise

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Walkup, John F.

    1994-01-01

    We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
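For the signal-independent baseline mentioned in approach (2), the classical matched filter reduces to cross-correlating the scene with the target template. A minimal hypothetical sketch (the template, noise level, and embedding location are all illustrative, not from the paper):

```python
import random

def matched_filter(scene, template):
    """Cross-correlate the scene with the template; for additive white
    noise this is the classical matched filter, peaking at the target."""
    n, m = len(scene), len(template)
    return [sum(scene[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

random.seed(7)
template = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]     # hypothetical target profile
scene = [random.gauss(0.0, 0.1) for _ in range(100)]
loc = 40                                            # embed target at a known offset
for j, v in enumerate(template):
    scene[loc + j] += v

corr = matched_filter(scene, template)
peak = max(range(len(corr)), key=corr.__getitem__)  # recovers the offset
```

Signal-dependent noise such as film grain or speckle breaks this optimality, which is precisely the motivation for the nonlinear transformation studied in the paper.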

  13. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One-step-ahead optimization has recently been proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete-time control algorithm implementable as a sequence of state-dependent quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, arising from the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction acting between the sampling intervals of the slow one-step-ahead discrete-time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one-step-ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  14. OPDIC (Optimized Peak, Distortion and Clutter) Detection Filter.

    NASA Astrophysics Data System (ADS)

    House, Gregory Philip

    1995-01-01

    Detection is considered. This involves determining regions of interest (ROIs) in a scene: the locations of multiple object classes in a scene in clutter when object distortions and contrast differences are present. High probability of detection P_D is essential and low probability of false alarm P_FA is desirable, since subsequent stages in the full system will only decrease P_FA and cannot increase P_D. Low-resolution blob objects and objects with more internal detail are considered, with both 3-D aspect view and depression angle distortions present. Extensive tests were conducted on 56 scenes with object classes not present in the training set. A modified MINACE (Minimum Noise and Correlation Energy) distortion-invariant filter was used. This minimizes correlation plane energy due to distortions and clutter while satisfying correlation peak constraint values for various object-aspect views. The filter was modified with a new object model (to give predictable output peak values) and a new correlated-noise clutter model; a white Gaussian noise model of distortion was used; and new techniques to increase the number of training set images (N_T) included in the filter were developed. Excellent results were obtained. However, the correlation plane distortion and clutter energy functions were found to become worse as N_T was increased, and no rigorous method exists to select the best N_T (when to stop filter synthesis). A new OPDIC (Optimized Peak, Distortion, and Clutter) filter was thus devised. This filter retained the new object, clutter and distortion models noted. It minimizes the variance of the correlation peak values for all training set images (not just the N_T images). As N_T increases, the peak variance and the objective functions (correlation plane distortion and clutter energy) are all minimized. Thus, this new filter optimizes the desired functions and provides an easy way to stop filter synthesis (when the objective function is minimized). Tests show

  15. Structural optimization with flutter speed constraints using maximized step size

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Radovcich, N. A.; Hassig, H. J.

    1975-01-01

    A procedure is presented for the minimization of structural mass while satisfying flutter speed constraints. The procedure differs from other optimization methods in that the flutter speed is exactly satisfied at each resizing step, and the step size is determined by a direct minimization of the objective function (mass) for each set of flutter derivatives calculated. In conjunction with this method, a new move vector is suggested which results in a very efficient resizing procedure.

  16. Optimal color image restoration: Wiener filter and quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, we consider the model of quaternion signal degradation in which the signal is convolved with a blur and additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is with respect to the mean-square error. The frequency characteristic of this filter can be found by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative, and the quaternion Fourier transform does not map convolution to multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise and the optimal filtering of signals and images in the frequency domain and in the quaternion space.
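The quaternion case is the paper's contribution; for reference, the classical per-frequency Wiener gain that it generalizes can be sketched as follows (scalar, real-valued signal and noise powers assumed, not the paper's quaternion formulation):

```python
def wiener_gain(signal_power, noise_power):
    """Classical Wiener gain for additive uncorrelated noise:
    H = S / (S + N), which minimizes E|H(X + V) - X|^2 per frequency."""
    return signal_power / (signal_power + noise_power)

def mse(gain, s_pow, n_pow):
    # For independent X, V: E|g(X + V) - X|^2 = (g - 1)^2 S + g^2 N
    return (gain - 1.0) ** 2 * s_pow + gain ** 2 * n_pow

S, N = 4.0, 1.0                 # hypothetical per-frequency powers
h = wiener_gain(S, N)           # shrinks toward 0 as noise dominates
```

Evaluating `mse` at gains above and below `h` confirms that the Wiener gain sits at the minimum of the quadratic error surface.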

  17. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
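The degeneracy argument can be made concrete with a toy computation: when the spread of the log importance weights grows with an i.i.d. dimension d, the weights collapse onto one particle, which the effective sample size (ESS) measures. The ensemble size, dimensions, and Gaussian log-weight model below are illustrative assumptions, not the paper's setup:

```python
import math
import random

def normalized_weights(log_weights):
    """Normalize importance weights from log-space (subtracting the max
    avoids floating-point overflow)."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    s = sum(w)
    return [wi / s for wi in w]

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2): equals N for uniform weights and approaches 1
    under degeneracy (maximum weight approaching unity)."""
    return 1.0 / sum(w * w for w in weights)

random.seed(3)
N = 100
# Log-weights modeled as sums of d i.i.d. standard normals.
lw_low = [random.gauss(0.0, 1.0) for _ in range(N)]                            # d = 1
lw_high = [sum(random.gauss(0.0, 1.0) for _ in range(50)) for _ in range(N)]   # d = 50

ess_low = effective_sample_size(normalized_weights(lw_low))    # healthy ensemble
ess_high = effective_sample_size(normalized_weights(lw_high))  # near-degenerate
```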

  18. Optimized Paraunitary Filter Banks for Time-Frequency Channel Diagonalization

    NASA Astrophysics Data System (ADS)

    Ju, Ziyang; Hunziker, Thomas; Dahlhaus, Dirk

    2010-12-01

    We adopt the concept of channel diagonalization to time-frequency signal expansions obtained by DFT filter banks. As a generalization of the frequency domain channel representation used by conventional orthogonal frequency-division multiplexing receivers, the time-frequency domain channel diagonalization can be applied to time-variant channels and aperiodic signals. An inherent error in the case of doubly dispersive channels can be limited by choosing adequate windows underlying the filter banks. We derive a formula for the mean-squared sample error in the case of wide-sense stationary uncorrelated scattering (WSSUS) channels, which serves as objective function in the window optimization. Furthermore, an enhanced scheme for the parameterization of tight Gabor frames enables us to constrain the window in order to define paraunitary filter banks. We show that the design of windows optimized for WSSUS channels with known statistical properties can be formulated as a convex optimization problem. The performance of the resulting windows is investigated under different channel conditions, for different oversampling factors, and compared against the performance of alternative windows. Finally, a generic matched filter receiver incorporating the proposed channel diagonalization is discussed which may be essential for future reconfigurable radio systems.

  19. Grid Based Nonlinear Filtering Revisited: Recursive Estimation & Asymptotic Optimality

    NASA Astrophysics Data System (ADS)

    Kalogerias, Dionysios S.; Petropulu, Athina P.

    2016-08-01

    We revisit the development of grid based recursive approximate filtering of general Markov processes in discrete time, partially observed in conditionally Gaussian noise. The grid based filters considered rely on two types of state quantization: the Markovian type and the marginal type. We propose a set of novel, relaxed sufficient conditions, ensuring strong and fully characterized pathwise convergence of these filters to the respective MMSE state estimator. In particular, for marginal state quantizations, we introduce the notion of conditional regularity of stochastic kernels, which, to the best of our knowledge, constitutes the most relaxed condition proposed, under which asymptotic optimality of the respective grid based filters is guaranteed. Further, we extend our convergence results, including filtering of bounded and continuous functionals of the state, as well as recursive approximate state prediction. For both Markovian and marginal quantizations, the whole development of the respective grid based filters relies more on linear-algebraic techniques and less on measure theoretic arguments, making the presentation considerably shorter and technically simpler.

  20. A filter-based evolutionary algorithm for constrained optimization.

    SciTech Connect

    Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann

    2004-02-01

    We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
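The dominance concept at the core of an FEA filter can be illustrated on (objective, constraint-violation) pairs. This minimal sketch is a generic Pareto-style filter; the sample points and acceptance details are assumptions, not the paper's exact rule:

```python
def dominates(a, b):
    """Point a = (objective, constraint_violation) dominates b if it is
    no worse in both coordinates and strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def filter_accept(filt, point):
    """Accept `point` unless some filter entry dominates it; on acceptance,
    drop the entries that the new point dominates."""
    if any(dominates(f, point) for f in filt):
        return filt, False
    kept = [f for f in filt if not dominates(point, f)]
    return kept + [point], True

filt = []
for p in [(5.0, 2.0), (4.0, 3.0), (6.0, 1.0), (4.5, 2.5), (3.0, 0.0)]:
    filt, accepted = filter_accept(filt, p)
# The feasible point (3.0, 0.0) dominates every earlier entry,
# so it ends up as the sole filter member.
```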

  1. System-level optimization of baseband filters for communication applications

    NASA Astrophysics Data System (ADS)

    Delgado-Restituto, Manuel; Fernandez-Bootello, Juan F.; Rodriguez-Vazquez, Angel

    2003-04-01

    In this paper, we present a design approach for the high-level synthesis of programmable continuous-time Gm-C and active-RC filters with an optimum trade-off among dynamic range, distortion product generation, area consumption and power dissipation, thus meeting the needs of more demanding baseband filter realizations. Further, the proposed technique guarantees that under all programming configurations, transconductors (in Gm-C filters) and resistors (in active-RC filters), as well as capacitors, are related by integer ratios in order to reduce the sensitivity to mismatch of the monolithic implementation. In order to solve the aforementioned trade-off, the filter must be properly scaled at each configuration. That is, filter node impedances must be altered so that the noise contribution of each node to the filter output is as low as possible, while preventing the peak amplitudes at those nodes from becoming high enough to drive the active circuits into saturation. Additionally, in order not to degrade the distortion performance of the filter (in particular, if it is implemented using Gm-C techniques), node impedances cannot be scaled independently from each other; restrictions must be imposed according to the principle of nonlinear cancellation. Altogether, the high-level synthesis can be seen as a constrained optimization problem where some of the variables, namely the ratios among similar components, are restricted to discrete values. The proposed approach to accomplish optimum filter scaling under all programming configurations relies on matrix methods for network representation, which allow an easy estimation of performance features such as dynamic range and power dissipation, as well as other network properties such as sensitivity to parameter variations and non-ideal effects of integrator blocks; and the use of a simulated annealing algorithm to explore the design space defined by the transfer and group delay specifications. It must be noted that such

  2. A high-contrast imaging polarimeter with a stepped-transmission filter based coronagraph

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Chao; Ren, De-Qing; Zhu, Yong-Tian; Dou, Jiang-Pei; Guo, Jing

    2016-05-01

    The light reflected from planets is polarized, mainly due to Rayleigh scattering, whereas starlight is normally unpolarized. This provides an approach to enhancing the imaging contrast through the imaging polarimetry technique. In this paper, we propose a high-contrast imaging polarimeter that is optimized for the direct imaging of exoplanets, combined with our recently developed stepped-transmission filter based coronagraph. Here we present the design and calibration method of the polarimetry system and the associated test of its high-contrast performance. In this polarimetry system, two liquid crystal variable retarders (LCVRs) act as a polarization modulator, which can extract the polarized signal. We show that our polarimeter can achieve a measurement accuracy of about 0.2% at a visible wavelength (632.8 nm) with linearly polarized light. Finally, the whole system demonstrates that a contrast of 10^-9 at 5λ/D is achievable, which can be used for direct imaging of Jupiter-like planets with a space telescope.

  3. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
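A stripped-down version of the two-step idea, reduced to a single echo (the paper handles multiple echoes and measured data; the frequency plan, delay grid, and noiseless synthetic measurements below are assumptions), can be sketched as:

```python
import cmath

def echo_model(freqs, delay):
    """Ideal frequency-stepped CW response of a unit echo at `delay`."""
    return [cmath.exp(-2j * cmath.pi * f * delay) for f in freqs]

def estimate_echo(meas, freqs, delay_grid):
    """Two-step search, simplified to one echo: for each candidate delay,
    the least-squares amplitude is the projection of the measurements onto
    the model vector; keep the delay with the smallest residual."""
    best = None
    for tau in delay_grid:
        e = echo_model(freqs, tau)
        a = sum(m * x.conjugate() for m, x in zip(meas, e)) / len(freqs)
        resid = sum(abs(m - a * x) ** 2 for m, x in zip(meas, e))
        if best is None or resid < best[0]:
            best = (resid, tau, a)
    return best[1], best[2]

freqs = [1.0e9 + k * 10.0e6 for k in range(32)]    # 32 frequency steps (assumed)
true_tau, true_amp = 35.0e-9, 0.7                  # hypothetical echo
meas = [true_amp * x for x in echo_model(freqs, true_tau)]

grid = [k * 1.0e-9 for k in range(100)]            # 0-99 ns; 1/(10 MHz) = 100 ns unambiguous range
tau_hat, amp_hat = estimate_echo(meas, freqs, grid)
```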

  5. An optimization-based parallel particle filter for multitarget tracking

    NASA Astrophysics Data System (ADS)

    Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.

    2005-09-01

    Particle filter based estimation is becoming more popular because it can effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements, and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, an augmented state vector of target dynamics is typically used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is a possibility for achieving real-time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for a bus-connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication-intensive parallel implementation of the particle filter that does not sacrifice tracking accuracy, using an efficient load balancing technique in which optimal particle migration is ensured. We present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved using parallelization.
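
    The scheduling and particle-migration contributions above are specific to the paper, but the underlying bootstrap particle filter cycle (predict, weight, resample) that would be partitioned across processors can be sketched on a single processor. The scalar tracking model and all numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, q_std, r_std):
    # One bootstrap particle-filter cycle: predict, weight, resample.
    # In the parallel scheme described above this per-particle work would be
    # partitioned across secondary processors, with particles migrated
    # between them for load balance; here everything runs on one processor.
    particles = particles + rng.normal(0.0, q_std, size=particles.shape)
    weights = weights * np.exp(-0.5 * ((z - particles) / r_std) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling to counter weight degeneracy
    positions = (np.arange(particles.size) + rng.random()) / particles.size
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                     particles.size - 1)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Toy scalar problem: estimate a constant state of 5.0 from noisy measurements
particles = rng.normal(0.0, 5.0, 1000)
weights = np.full(1000, 1.0 / 1000)
for _ in range(50):
    z = 5.0 + rng.normal(0.0, 0.5)
    particles, weights = pf_step(particles, weights, z, q_std=0.1, r_std=0.5)
```

After 50 measurement updates the particle cloud concentrates near the true state; the posterior mean approximates 5.0.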

  6. Novel two-step filtering scheme for a logging-while-drilling system

    NASA Astrophysics Data System (ADS)

    Zhao, Qingjie; Zhang, Baojun; Hu, Huosheng

    2009-09-01

    A logging-while-drilling (LWD) system is usually deployed in the oil drilling process to provide real-time monitoring of the position and orientation of a hole. Encoded signals, including the data coming from down-hole sensors, are inevitably contaminated during their collection and transmission to the surface. Before decoding the signals into different physical parameters, the noise should be filtered out to guarantee that correct parameter values can be acquired. In this paper, according to the characteristics of LWD signals, we propose a novel two-step filtering scheme in which a dynamic part mean filtering algorithm is proposed to separate the direct-current components and a windowed finite impulse response (FIR) algorithm is deployed to filter out the high-frequency noise. The scheme has been integrated into the surface processing software and the whole LWD system for horizontal well drilling. Some experimental results are presented to show the feasibility and good performance of the proposed two-step filtering scheme.
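
    The paper's "dynamic part mean filtering" algorithm is not specified in the abstract; a plausible stand-in with the same two-step structure is a centered moving average for the DC/drift component followed by a Hamming-windowed-sinc FIR low-pass. The synthetic LWD-like signal below is invented for illustration.

```python
import numpy as np

def moving_mean(x, win):
    # Step 1 (stand-in for the paper's dynamic part mean filtering): a
    # centered moving average tracks the slowly varying DC/drift component
    return np.convolve(x, np.ones(win) / win, mode="same")

def windowed_sinc_lowpass(x, cutoff, fs, ntaps=101):
    # Step 2: Hamming-windowed-sinc FIR low-pass removes high-frequency noise
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = (2.0 * cutoff / fs) * np.sinc(2.0 * cutoff / fs * n)
    h = h * np.hamming(ntaps)
    h = h / h.sum()                      # unity gain at DC
    return np.convolve(x, h, mode="same")

# Invented LWD-like signal: slow drift + 2 Hz encoded pulses + wideband noise
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
drift = 0.5 * t
pulses = np.sign(np.sin(2.0 * np.pi * 2.0 * t))
noise = 0.4 * np.random.default_rng(1).normal(size=t.size)
raw = drift + pulses + noise

dc = moving_mean(raw, win=101)           # ~0.5 s window follows the drift
cleaned = windowed_sinc_lowpass(raw - dc, cutoff=5.0, fs=fs)
```

Away from the signal edges, `cleaned` is strongly correlated with the encoded pulse train even though the raw signal is dominated by drift and noise.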

  7. A Peptide Filtering Relation Quantifies MHC Class I Peptide Optimization

    PubMed Central

    Goldstein, Leonard D.; Howarth, Mark; Cardelli, Luca; Emmott, Stephen; Elliott, Tim; Werner, Joern M.

    2011-01-01

    Major Histocompatibility Complex (MHC) class I molecules enable cytotoxic T lymphocytes to destroy virus-infected or cancerous cells, thereby preventing disease progression. MHC class I molecules provide a snapshot of the contents of a cell by binding to protein fragments arising from intracellular protein turnover and presenting these fragments at the cell surface. Competing fragments (peptides) are selected for cell-surface presentation on the basis of their ability to form a stable complex with MHC class I, by a process known as peptide optimization. A better understanding of the optimization process is important for our understanding of immunodominance, the predominance of some T lymphocyte specificities over others, which can determine the efficacy of an immune response, the danger of immune evasion, and the success of vaccination strategies. In this paper we present a dynamical systems model of peptide optimization by MHC class I. We incorporate the chaperone molecule tapasin, which has been shown to enhance peptide optimization to different extents for different MHC class I alleles. Using a combination of published and novel experimental data to parameterize the model, we arrive at a relation of peptide filtering, which quantifies peptide optimization as a function of peptide supply and peptide unbinding rates. From this relation, we find that tapasin enhances peptide unbinding to improve peptide optimization without significantly delaying the transit of MHC to the cell surface, and differences in peptide optimization across MHC class I alleles can be explained by allele-specific differences in peptide binding. Importantly, our filtering relation may be used to dynamically predict the cell surface abundance of any number of competing peptides by MHC class I alleles, providing a quantitative basis to investigate viral infection or disease at the cellular level. We exemplify this by simulating optimization of the distribution of peptides derived from Human

  8. Multidisciplinary Analysis and Optimization Generation 1 and Next Steps

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2008-01-01

    The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY) 08: the "Requirements Definition" milestone (1/31/08); the "GEN 1 Integrated Multi-disciplinary Toolset" Annual Performance Goal (6/30/08); and the "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" milestone (9/30/08). Details of all three milestones are explained, including available documentation, potential partner collaborations, and next steps in FY09.

  9. A stepped-impedance bandstop filter with extended upper passbands and improved pass-band reflections

    NASA Astrophysics Data System (ADS)

    Zuo, Xiaoying; Yu, Jianguo

    2016-09-01

    A high-performance planar bandstop filter with extended upper passbands and improved pass-band return loss is proposed in this article. In the proposed bandstop filter, a novel three-section stepped-impedance structure is suggested to improve the pass-band reflections without affecting the desired band-stop and extended upper-passband performance. The analysis and design considerations of this filter are provided, and the proposed design approach is verified by full-wave simulation, microstrip implementation, and accurate measurement of a typical fabricated filter operating at 1 GHz (f0). Compared to the conventional design, the proposed bandstop filter has the main advantages of a simple single-layer structure, strong band-stop filtering performance (suppression better than 20 dB), excellent low/high pass-band return loss (reflection lower than -17 dB) in the extended upper passbands (larger than 5.96 f0), and flat group-delay transmission (variations smaller than 0.22 ns).

  10. The Design of a Compact, Wide Spurious-Suppression Bandwidth Bandpass Filter Using Stepped Impedance Resonators

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Edward J.; Doiron, Terence; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an analytical design for a microstrip broadband spurious-suppression filter. The proposed design uses every section of the transmission lines as both a coupling and a spurious-suppression element, which creates a very compact, planar filter. While a traditional filter length is greater than a multiple of the quarter wavelength at the center passband frequency (λg/4), the proposed filter length is less than (n + 1)·λg/8, where n is the filter order. The filter's spurious response and physical dimensions are controlled by the step impedance ratio (R) between the two transmission line sections of each λg/4 resonator. The experimental result shows that, with an R of 0.2, the out-of-band attenuation is greater than 40 dB and the first spurious mode is shifted to more than 5 times the fundamental frequency. Moreover, it is the most compact planar filter design to date. The results also indicate a low in-band insertion loss.

  11. A multi-dimensional procedure for BNCT filter optimization

    SciTech Connect

    Lille, R.A.

    1998-02-01

    An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A local optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model, and an improvement of 50% in the dose ratio was obtained.
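
    The gradient-driven density update described above can be sketched with a toy surrogate standing in for the transport calculation. The surrogate objective, bounds, and step size below are invented; only the clip-and-rescale handling of the bound and sum constraints mirrors the description.

```python
import numpy as np

def density_update(rho, grad, lo, hi, step):
    # One sketch iteration: move the nuclide density fractions along the
    # dose-ratio gradient, clip each to its bounds, then rescale the zone
    # so the fractions sum to at most 1.0 (the linear constraint)
    rho = np.clip(rho + step * grad, lo, hi)
    if rho.sum() > 1.0:
        rho = rho / rho.sum()
    return rho

def dose_ratio(rho):
    # Hypothetical smooth surrogate for the transport result, peaking at
    # density fractions (0.7, 0.5); higher is better
    return -((rho[0] - 0.7) ** 2 + (rho[1] - 0.5) ** 2)

rho = np.array([0.2, 0.2])
f0 = dose_ratio(rho)
for _ in range(200):
    eps = 1e-6
    grad = np.array([(dose_ratio(rho + eps * e) - dose_ratio(rho)) / eps
                     for e in np.eye(2)])     # forward-difference gradient
    rho = density_update(rho, grad, lo=0.0, hi=1.0, step=0.05)
```

The iterates climb toward the surrogate's peak until the sum constraint activates, after which the update settles on the constraint boundary, exactly the kind of local optimum described in the abstract.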

  12. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low-pass, high-pass, band-pass and band-stop filters, with the aim of meeting the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents how well the cat fits the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats, and CSO keeps the best solution found until the end of the iteration. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The performances of the CSO-designed FIR filters have proven superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters.
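
    A minimal version of the seeking/tracing structure described above can be sketched for a small linear-phase FIR design problem. The mode probability, perturbation size, and fitness grid are illustrative choices, not the paper's settings, and continuous (rather than finite word length) coefficients are used.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(half):
    # Design error of a 15-tap linear-phase low-pass filter: even-symmetric
    # impulse response, magnitude response compared against an ideal brick
    # wall with cutoff 0.25*fs on a 128-point frequency grid
    h = np.concatenate([half, half[-2::-1]])
    w = np.linspace(0.0, np.pi, 128)
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(h.size))) @ h)
    ideal = (w <= 0.5 * np.pi).astype(float)
    return float(np.mean((H - ideal) ** 2))

def cat_swarm(dim=8, n_cats=20, iters=150, mixture=0.3, srd=0.1, c1=2.0):
    # Minimal CSO sketch: each iteration every cat is either in seeking mode
    # (evaluate a few locally perturbed copies, keep any improvement) or
    # tracing mode (velocity pulled toward the best position found so far)
    x = rng.normal(0.0, 0.3, (n_cats, dim))
    v = np.zeros_like(x)
    fit = np.array([fitness(cat) for cat in x])
    best_x, best_f = x[fit.argmin()].copy(), float(fit.min())
    for _ in range(iters):
        for i in range(n_cats):
            if rng.random() > mixture:            # seeking mode
                copies = x[i] + rng.normal(0.0, srd, (5, dim))
                f = np.array([fitness(cat) for cat in copies])
                if f.min() < fit[i]:
                    x[i], fit[i] = copies[f.argmin()], f.min()
            else:                                 # tracing mode
                v[i] += rng.random() * c1 * (best_x - x[i])
                x[i] = x[i] + v[i]
                fit[i] = fitness(x[i])
        if fit.min() < best_f:
            best_x, best_f = x[fit.argmin()].copy(), float(fit.min())
    return best_x, best_f

coeffs, err = cat_swarm()
```

The returned error should be far below that of the trivial all-zero filter (whose error is the mean passband energy, about 0.5 on this grid).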

  13. Optimal subband Kalman filter for normal and oesophageal speech enhancement.

    PubMed

    Ishaq, Rizwan; García Zapirain, Begoña

    2014-01-01

    This paper presents a single-channel speech enhancement system using subband Kalman filtering, estimating the optimal autoregressive (AR) coefficients and variance for speech and noise with Weighted Linear Prediction (WLP) and a Noise Weighting Function (NWF). The system is applied to normal and oesophageal speech signals. The method is evaluated by the Perceptual Evaluation of Speech Quality (PESQ) score and Signal-to-Noise Ratio (SNR) improvement for normal speech, and by the Harmonic-to-Noise Ratio (HNR) for oesophageal speech (OES). Compared with previous systems, normal speech shows a 30% increase in PESQ score and a 4 dB SNR improvement, and OES shows a 3 dB HNR improvement.

  15. Quantum demolition filtering and optimal control of unstable systems.

    PubMed

    Belavkin, V P

    2012-11-28

    A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one.

  16. On one-step worst-case optimal trisection in univariate bi-objective Lipschitz optimization

    NASA Astrophysics Data System (ADS)

    Žilinskas, Antanas; Gimbutienė, Gražina

    2016-06-01

    The bi-objective Lipschitz optimization with univariate objectives is considered. The concept of the tolerance of the lower Lipschitz bound over an interval is generalized to arbitrary subintervals of the search region. The one-step worst-case optimality of trisecting an interval with respect to the resulting tolerance is established. The theoretical investigation supports the previous usage of trisection in other algorithms. The trisection-based algorithm is introduced. Some numerical examples illustrating the performance of the algorithm are provided.
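
    The trisection bookkeeping can be illustrated on a single-objective Lipschitz problem. The paper's bi-objective tolerance criterion is not reproduced; the interval-selection rule below (pick the interval with the smallest Lipschitz lower bound) is a standard stand-in.

```python
def trisect(f, lo, hi, lipschitz, n_iter=30):
    # Sketch of Lipschitz optimization by trisection: repeatedly split the
    # most promising interval (lowest Lipschitz lower bound) into three
    # equal parts, tracking the best endpoint evaluated so far.
    intervals = [(lo, hi)]
    best_x, best_f = lo, f(lo)
    for _ in range(n_iter):
        def bound(ab):
            # Lipschitz lower bound on [a, b]: (f(a) + f(b) - L*(b - a)) / 2
            a, b = ab
            return (f(a) + f(b) - lipschitz * (b - a)) / 2
        a, b = min(intervals, key=bound)
        intervals.remove((a, b))
        third = (b - a) / 3
        for c, d in [(a, a + third), (a + third, b - third), (b - third, b)]:
            intervals.append((c, d))
            for xx in (c, d):
                if f(xx) < best_f:
                    best_x, best_f = xx, f(xx)
    return best_x, best_f

# Toy objective with minimum at x = 0.3; |f'| <= 1.4 on [0, 1], so L = 2 is valid
x, fx = trisect(lambda x: (x - 0.3) ** 2, 0.0, 1.0, lipschitz=2.0)
```

The lower bound steers the splits toward the subinterval containing the minimizer, so the best endpoint converges to x = 0.3.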

  17. [Characteristic wavelength variable optimization of near-infrared spectroscopy based on Kalman filtering].

    PubMed

    Wang, Li-Qi; Ge, Hui-Fang; Li, Gui-Bin; Yu, Dian-Yu; Hu, Li-Zhi; Jiang, Lian-Zhou

    2014-04-01

    Combining the classical Kalman filter with NIR analysis technology, a new method of characteristic wavelength variable selection, namely the Kalman filtering method, is presented. The principle of the Kalman filter for selecting the optimal wavelength variables was analyzed. The wavelength selection algorithm was designed and applied to NIR detection of soybean oil acid value. First, PLS (partial least squares) models were established using different absorption bands of oil. The 4472-5000 cm(-1) characteristic band of oil acid value, comprising 132 wavelengths, was selected preliminarily. Then the Kalman filter was used to further select characteristic wavelengths. The PLS calibration model was established using the 22 selected characteristic wavelength variables; the determination coefficient R2 of the prediction set and the RMSEP (root mean squared error of prediction) are 0.9708 and 0.1254 respectively, equivalent to the model built on all 132 wavelengths, while the number of wavelength variables was reduced to 16.67% of the original. The algorithm is a deterministic iteration, without complex parameter settings or randomness in variable selection, and its physical significance is well defined. Modeling with a few selected characteristic wavelength variables that strongly affect the model, instead of the full spectrum, reduces model complexity while improving robustness. This research provides an important reference for the future development of dedicated near-infrared spectroscopy instruments for oil analysis.

  18. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.

  20. Effects of rate-limiting steps in transcription initiation on genetic filter motifs.

    PubMed

    Häkkinen, Antti; Tran, Huy; Yli-Harja, Olli; Ribeiro, Andre S

    2013-01-01

    The behavior of genetic motifs is determined not only by the gene-gene interactions, but also by the expression patterns of the constituent genes. Live single-molecule measurements have provided evidence that transcription initiation is a sequential process whose kinetics plays a key role in the dynamics of mRNA and protein numbers. The extent to which it affects the behavior of cellular motifs is unknown. Here, we examine how the kinetics of transcription initiation affects the behavior of motifs performing filtering in the amplitude and frequency domains. We find that the performance of each filter is degraded as transcript levels are lowered. This effect can be reduced by having a transcription process with more steps. In addition, we show that the kinetics of the stepwise transcription initiation process affects features such as filter cutoffs. These results constitute an assessment of the range of behaviors of genetic motifs as a function of the kinetics of transcription initiation, and thus will aid in tuning synthetic motifs to attain specific characteristics without affecting their protein products.

  1. A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU.

    PubMed

    Zihajehzadeh, S; Loh, D; Lee, M; Hoskinson, R; Park, E J

    2014-01-01

    Orientation of human body segments is an important quantity in many biomechanical analyses. To obtain robust and drift-free 3-D orientation, raw data from miniature body-worn MEMS-based inertial measurement units (IMUs) should be blended in a Kalman filter. Aiming at a lower computational cost, this work presents a novel cascaded two-step Kalman filter orientation estimation algorithm. Tilt angles are estimated in the first step of the proposed cascaded Kalman filter. The estimated tilt angles are passed to the second step of the filter for yaw angle calculation. The orientation results are benchmarked against those from a highly accurate tactical-grade IMU. Experimental results reveal that the proposed algorithm provides robust orientation estimation in both kinematically and magnetically disturbed conditions.
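
    The geometric core of the cascade (tilt first, then tilt-compensated yaw) can be sketched for the static case. The paper's Kalman blending of gyroscope data is omitted, and the NED frame conventions and test attitude below are assumptions made for illustration.

```python
import numpy as np

def rot(roll, pitch, yaw):
    # Body-to-navigation rotation, ZYX (yaw-pitch-roll) Euler convention
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def tilt_from_accel(a):
    # Step 1: roll and pitch from the measured gravity direction
    ax, ay, az = a
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def yaw_from_mag(m, roll, pitch):
    # Step 2: tilt-compensate the magnetometer with the step-1 angles and
    # take yaw from the horizontal field components
    mx, my, mz = m
    xh = mx * np.cos(pitch) + (my * np.sin(roll) + mz * np.cos(roll)) * np.sin(pitch)
    yh = my * np.cos(roll) - mz * np.sin(roll)
    return np.arctan2(-yh, xh)

# Static synthetic measurements for a known attitude (NED frame, unit
# gravity, magnetic field with a 60-degree dip angle)
true_roll, true_pitch, true_yaw = 0.2, -0.1, 0.7
R = rot(true_roll, true_pitch, true_yaw)
accel = R.T @ np.array([0.0, 0.0, 1.0])
mag = R.T @ np.array([np.cos(np.radians(60)), 0.0, np.sin(np.radians(60))])

roll, pitch = tilt_from_accel(accel)
yaw = yaw_from_mag(mag, roll, pitch)
```

For static, noise-free measurements the cascade recovers the generating attitude exactly; the full filter's job is to keep this accurate under motion and magnetic disturbance.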

  2. [Regression evaluation index intelligent filter method for quick optimization of chromatographic separation conditions].

    PubMed

    Gan, Wei; Liu, Xuemin; Sun, Jing

    2015-02-01

    This paper presents a regression evaluation index intelligent filter method (REIFM) for quick optimization of chromatographic separation conditions. The hierarchical chromatography response function was used as the chromatography-optimization index. The regression model was established by orthogonal regression design. The chromatography-optimization index was filtered by the intelligent filter program, and the optimized separation conditions were obtained. The experimental results showed that the average relative deviation between the experimental values and the predicted values was 0.18% at the optimum, and the optimization results were satisfactory.

  3. HEPA (high efficiency particulate air) filter optimization/implementation

    SciTech Connect

    Nenni, J.A.

    1988-02-10

    Prefilters were installed in high-efficiency particulate air (HEPA) filter plenums at the Rocky Flats Plant. It was determined that prefiltration systems would extend the life of first-stage HEPA filters and reduce the amount of HEPA filter waste in the transuranic waste category. A remote handling system was designed to remove prefilters without entry into the plenum, reducing secondary waste and decreasing exposure of filter technicians. 3 figs., 4 tabs.

  4. A Triple-band Bandpass Filter using Tri-section Step-impedance and Capacitively Loaded Step-impedance Resonators for GSM, WiMAX, and WLAN systems

    NASA Astrophysics Data System (ADS)

    Chomtong, P.; Akkaraekthalin, P.

    2014-05-01

    This paper presents a triple-band bandpass filter for applications in GSM, WiMAX, and WLAN systems. The proposed filter comprises tri-section step-impedance and capacitively loaded step-impedance resonators, which are combined using the cross-coupling technique. Additionally, tapered lines are used at both ports of the filter in order to enhance matching at the tri-band resonant frequencies. The filter can operate at the resonant frequencies of 1.8 GHz, 3.7 GHz, and 5.5 GHz. At these frequencies, the measured values of S11 are -17.2 dB, -33.6 dB, and -17.9 dB, while the measured values of S21 are -2.23 dB, -2.98 dB, and -3.31 dB, respectively. Moreover, the presented filter has a compact size compared with conventional open-loop cross-coupling triple-band bandpass filters.

  5. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
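
    The classic synthetic-discriminant-function (SDF) combination underlying composite filters can be sketched directly; the paper's multi-objective combinatorial selection of the training templates themselves is not reproduced here, and the random templates below are placeholders.

```python
import numpy as np

def sdf_filter(templates, c):
    # Classic SDF composite filter: a linear combination of the training
    # templates whose central correlation with template i equals c[i].
    # Constraints X^T h = c with h = X a give a = (X^T X)^{-1} c.
    X = np.stack([t.ravel() for t in templates], axis=1)   # columns = templates
    h = X @ np.linalg.solve(X.T @ X, c)
    return h.reshape(templates[0].shape)

rng = np.random.default_rng(4)
templates = [rng.normal(size=(16, 16)) for _ in range(3)]
h = sdf_filter(templates, c=np.ones(3))
# By construction, the correlation value of h with every training template is 1
```

A combinatorial design like the paper's would search over which templates enter `templates`, scoring each candidate filter on several competing recognition metrics.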

  6. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    SciTech Connect

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  7. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  8. Optease Vena Cava Filter Optimal Indwelling Time and Retrievability

    SciTech Connect

    Rimon, Uri Bensaid, Paul Golan, Gil Garniek, Alexander Khaitovich, Boris; Dotan, Zohar; Konen, Eli

    2011-06-15

    The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare change in retrieval policy during the years: group A, 60 patients with filter retrievals performed before December 31 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A, and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 days (range, 8-122). There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and in group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days, but more failures will be encountered with this approach.

  9. Using a scale selective tendency filter and forward-backward time stepping to calculate consistent semi-Lagrangian trajectories

    NASA Astrophysics Data System (ADS)

    Alerskans, Emy; Kaas, Eigil

    2016-04-01

    In semi-Lagrangian models used for climate and NWP, the trajectories are typically determined kinematically. Here we propose a new method for calculating trajectories in a more dynamically consistent way by pre-integrating the governing equations in a pseudo-Lagrangian manner using a short time step. Only non-advective adiabatic terms are included in this calculation, i.e., the Coriolis and pressure-gradient forces plus gravity in the momentum equations, and the divergence term in the continuity equation. This integration is performed with a forward-backward time step. Optionally, the tendencies are filtered with a local spatial filter, which reduces the phase speed of short-wave gravity and sound waves. The filter relaxes the time-step limitation related to high-frequency oscillations without compromising locality of the solution, and can be considered an alternative to less local or global semi-implicit solvers. Once trajectories are estimated over a complete long advective time step, the full set of governing equations is stepped forward using these trajectories in combination with a flux-form semi-Lagrangian formulation of the equations. The methodology is designed to improve consistency and scalability on massively parallel systems, although here it has only been verified that the technique produces realistic results in a shallow-water model and a 2D model based on the full Euler equations.

  10. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
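
    A recursive trapezoidal shaper of the kind referenced above can be sketched as pole-zero deconvolution followed by two box convolutions. The ballistic-deficit correction and the exact GEANA filter definitions are not reproduced; the pulse parameters below are invented.

```python
import numpy as np

def trapezoidal_filter(x, rise, flat, tau):
    # Pole-zero deconvolution: turns each exponential pulse A*exp(-n/tau)
    # from the preamplifier into an impulse of height A at the pulse start
    d = np.exp(-1.0 / tau)
    impulses = x.copy()
    impulses[1:] -= d * x[:-1]
    # Two box filters make a trapezoid kernel (rise samples of ramp, flat+1
    # samples of flat top); dividing by `rise` normalizes the flat top to A
    kernel = np.convolve(np.ones(rise), np.ones(rise + flat)) / rise
    return np.convolve(impulses, kernel)[: x.size]

# Synthetic preamp pulse: amplitude 3.2 starting at sample 100, decay tau = 500
tau, n = 500.0, 2000
x = np.zeros(n)
x[100:] = 3.2 * np.exp(-np.arange(n - 100) / tau)
y = trapezoidal_filter(x, rise=40, flat=20, tau=tau)
# The flat-top height of y recovers the pulse amplitude (3.2)
```

The flat top is what makes the shape robust to charge-collection-time variations; the ballistic deficit correction in the paper addresses the residual dependence that remains.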

  11. Optimization of FIR Digital Filters Using a Real Parameter Parallel Genetic Algorithm and Implementations.

    NASA Astrophysics Data System (ADS)

    Xu, Dexiang

    This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.
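The idea of evolving finite word length coefficients can be sketched with a much-simplified, mutation-only genetic algorithm. The word length, population size, low-pass target, and truncation selection are all assumptions for the sketch; the RPPGA's real-parameter encoding, crossover, and parallel spatial structure are not reproduced here:

```python
import numpy as np
rng = np.random.default_rng(0)

N_TAPS, BITS = 11, 8                 # filter length and coefficient word length
LEVELS = 2 ** (BITS - 1)             # signed fixed-point range: [-LEVELS, LEVELS)
w = np.linspace(0, np.pi, 128)
target = (w <= np.pi / 4).astype(float)      # ideal low-pass magnitude response

def response(q):
    # Magnitude response of an FIR filter with integer coefficients q / LEVELS.
    return np.abs(np.exp(-1j * np.outer(w, np.arange(N_TAPS))) @ (q / LEVELS))

def fitness(q):
    return -np.mean((response(q) - target) ** 2)

pop = rng.integers(-LEVELS, LEVELS, size=(40, N_TAPS))
pop[0] = 0                                   # include the all-zero filter as a baseline
for _ in range(60):
    scores = np.array([fitness(q) for q in pop])
    elite = pop[np.argsort(scores)[-10:]]    # elitism: the 10 fittest survive
    kids = elite[rng.integers(0, 10, 30)].copy()
    mask = rng.random(kids.shape) < 0.1      # mutation: re-draw 10% of the genes
    kids[mask] = rng.integers(-LEVELS, LEVELS, mask.sum())
    pop = np.vstack([elite, kids])
best = max(pop, key=fitness)
```

With elitism the best candidate never regresses, so the search improves monotonically over the quantized coefficient space.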

  12. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte-Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
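For context, the standard HMM filter whose O(X^2)-per-step cost the low-rank bounds reduce can be sketched as follows (toy two-state matrices, not the paper's construction):

```python
import numpy as np

def hmm_filter(P, B, y, pi0):
    # Standard HMM filter recursion: predict with the transition matrix P,
    # then correct with the observation likelihoods B[:, y_k] and normalize.
    # The P.T @ pi product is the O(X^2) step per observation.
    pi = pi0.copy()
    for yk in y:
        pi = B[:, yk] * (P.T @ pi)
        pi /= pi.sum()
    return pi

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
B = np.array([[0.7, 0.3], [0.4, 0.6]])   # observation likelihoods
post = hmm_filter(P, B, [0, 0, 1], np.array([0.5, 0.5]))
```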

  13. Bio-desulfurization of biogas using acidic biotrickling filter with dissolved oxygen in step feed recirculation.

    PubMed

    Chaiprapat, Sumate; Charnnok, Boonya; Kantachote, Duangporn; Sung, Shihwu

    2015-03-01

Triple stage and single stage biotrickling filters (T-BTF and S-BTF) were operated with oxygenated liquid recirculation to enhance bio-desulfurization of biogas. Empty bed retention time (EBRT 100-180 s) and liquid recirculation velocity (q 2.4-7.1 m/h) were applied. H2S removal and sulfuric acid recovery increased with higher EBRT and q. However, the highest q of 7.1 m/h forced a large amount of liquid through the media, reducing bed porosity in S-BTF and lowering H2S removal. Equivalent performance of S-BTF and T-BTF was obtained under the lowest loading of 165 gH2S/m(3)/h. In the subsequent continuous operation test, it was found that T-BTF could maintain higher H2S elimination capacity and removal efficiency at 175.6±41.6 gH2S/m(3)/h and 89.0±6.8% versus S-BTF at 159.9±42.8 gH2S/m(3)/h and 80.1±10.2%, respectively. Finally, the relationship between outlet concentration and bed height was modeled. Step feeding of oxygenated liquid recirculation in multiple stages clearly demonstrated an advantage for sulfide oxidation. PMID:25569031

  14. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  15. An Efficient and Optimal Filter for Identifying Point Sources in Millimeter/Submillimeter Wavelength Sky Maps

    NASA Astrophysics Data System (ADS)

    Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

    2013-07-01

    A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.
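The core of a noise-weighted matched filter in the Fourier domain can be sketched in 1-D. Uniform coverage and white noise are assumed here, unlike the coverage-weighted filter of the paper; the Gaussian PSF and map size are illustrative:

```python
import numpy as np

def matched_filter(map_1d, psf, noise_psd):
    # Noise-weighted matched filter: multiply the map's spectrum by
    # conj(PSF spectrum) / noise PSD, normalized so that a point source
    # of amplitude A filters to a peak of amplitude A.
    S = np.fft.fft(psf)
    W = np.conj(S) / noise_psd
    norm = np.sum(W * S).real / len(psf)
    return np.fft.ifft(W * np.fft.fft(map_1d)).real / norm

n = 256
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
psf = np.roll(psf, -n // 2)              # center the PSF at sample 0
noise_psd = np.ones(n)                   # white noise for this toy example
sky = np.roll(psf, 100) * 2.0            # one source of amplitude 2 at x = 100
out = matched_filter(sky, psf, noise_psd)
```

Because the whole operation is a few FFTs, the cost argument in the abstract carries over directly: frequency-dependent noise weighting adds almost nothing beyond the transforms themselves.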

  16. Multivariable frequency response methods for optimal Kalman-Bucy filters with applications to radar tracking systems

    NASA Astrophysics Data System (ADS)

    Arcasoy, C. C.

    1992-11-01

    The problem of multi-output, infinite-time, linear time-invariant optimal Kalman-Bucy filter both in continuous and discrete-time cases in frequency domain is addressed. A simple new algorithm is given for the analytical solution to the steady-state gain of the optimum filter based on a transfer function approach. The algorithm is based on spectral factorization of observed spectral density matrix of the filter which generates directly the return-difference matrix of the optimal filter. The method is more direct than by algebraic Riccati equation solution and can easily be implemented on digital computer. The design procedure is illustrated by examples and closed-form solution of ECV and ECA radar tracking filters are considered as an application of the method.

  17. Two-step incoherent optical method for the realization of a rho filter

    NASA Astrophysics Data System (ADS)

    Murata, K.; Han, C.-Y.

    1983-11-01

A simple incoherent optical method is presented for obtaining the two-dimensional rho filter (rho being the radial distance in the spatial-frequency domain). Here, two statistical filters (Sayanagi, 1958; Lohmann, 1959) having different cutoff frequencies are used as the low-pass filters; the required rho filter is obtained by subtraction of the optical transfer functions. The statistical amplitude filters are made up of a multiplicity of opaque disks randomly distributed over the total aperture. The filters are placed in front of an imaging lens. By properly selecting the diameter of the opaque disks and the focal length of the lens, it becomes possible to control the effective cutoff frequency of the low-pass filter. It is pointed out that the statistical filters are easily fabricated. Another advantage is that the input image can be introduced by the cathode-ray-tube display of a TV system, making it possible to realize nearly real-time processing for image sharpening.

  18. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    SciTech Connect

    Sun, W Y

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  19. Finite element based optimization study on hydroformed stepped tube

    NASA Astrophysics Data System (ADS)

    Harisankar, K. R.; Omar, A.; Narasimhan, K.

    2016-08-01

Tube hydroforming is an advanced manufacturing process in which a tube is placed between dies and deformed with the help of hydraulic pressure. A sound tube hydroformed part depends upon die conditions, material properties and process conditions. In this work, a finite element study, along with response surface methodology (RSM) for designing the simulation, has been used to construct models with loading path, friction, anisotropic index, strain hardening exponent and tube thickness. The responses studied are the die corner radius filling and strain non-uniformity index (SNI) chosen in each step of the tube, with a maximum of 30% thinning as the stopping criterion. The effects of the factors and their interactions on each response were determined and analysed.

  20. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    PubMed

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

Binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter remains a scientific challenge in such fields. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originated from the genetic algorithm. Based on the benchmark test, the HGPSO algorithm achieves global optimization and fast convergence. In an easy-to-perform optimizing procedure, the number of iterations required by HGPSO is about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed by using the HGPSO. A long depth of focus and high resolution are achieved simultaneously, where the depth of focus and focal spot transverse size are 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filters with multiple parameters. PMID:27409895
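A toy hybridization of PSO with a GA-style mutation operator, run on a standard benchmark objective, gives the flavor of the approach. The parameters, the sphere objective, and the mutation rule are assumptions for the sketch; this is not the paper's HGPSO:

```python
import numpy as np
rng = np.random.default_rng(1)

def sphere(x):                       # benchmark objective: global minimum 0 at origin
    return np.sum(x * x, axis=-1)

def hybrid_pso(f, dim=4, n=30, iters=200, p_mut=0.1):
    # Standard PSO velocity/position update, plus a GA-style mutation that
    # re-draws a fraction of position components uniformly at random.
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pval = f(x)
    for _ in range(iters):
        g = pbest[np.argmin(pval)]                   # global best particle
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        mask = rng.random((n, dim)) < p_mut          # GA-style mutation
        x[mask] = rng.uniform(-5, 5, mask.sum())
        fx = f(x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[np.argmin(pval)], float(pval.min())

best_x, best_val = hybrid_pso(sphere)
```

The mutation step plays the same role as the recombination/mutation operators in the abstract: it re-injects diversity so the swarm is less likely to stall in a local minimum.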

  1. On the application of optimal wavelet filter banks for ECG signal classification

    NASA Astrophysics Data System (ADS)

    Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.

    2014-03-01

This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
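One analysis level of a two-channel filter bank can be sketched with the Haar quadrature-mirror pair, the simplest orthogonal member of the families mentioned; an optimized filter bank of the kind described would replace these fixed coefficients:

```python
import numpy as np

def analysis_step(x, h0, h1):
    # One level of a two-channel filter bank: filter, then downsample by 2.
    lo = np.convolve(x, h0)[1::2]    # approximation (low-pass) coefficients
    hi = np.convolve(x, h1)[1::2]    # detail (high-pass) coefficients
    return lo, hi

# Haar quadrature-mirror pair (orthogonal, perfect reconstruction)
h0 = np.array([1.0, 1.0]) / np.sqrt(2)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
lo, hi = analysis_step(x, h0, h1)
```

Orthogonality shows up as exact energy preservation between the input and the two subbands, which is what makes such decompositions parsimonious representations for classification.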

  2. Optimized filtering reduces the error rate in detecting genomic variants by short-read sequencing.

    PubMed

    Reumers, Joke; De Rijk, Peter; Zhao, Hui; Liekens, Anthony; Smeets, Dominiek; Cleary, John; Van Loo, Peter; Van Den Bossche, Maarten; Catthoor, Kirsten; Sabbe, Bernard; Despierre, Evelyn; Vergote, Ignace; Hilbush, Brian; Lambrechts, Diether; Del-Favero, Jurgen

    2012-01-01

    Distinguishing single-nucleotide variants (SNVs) from errors in whole-genome sequences remains challenging. Here we describe a set of filters, together with a freely accessible software tool, that selectively reduce error rates and thereby facilitate variant detection in data from two short-read sequencing technologies, Complete Genomics and Illumina. By sequencing the nearly identical genomes from monozygotic twins and considering shared SNVs as 'true variants' and discordant SNVs as 'errors', we optimized thresholds for 12 individual filters and assessed which of the 1,048 filter combinations were effective in terms of sensitivity and specificity. Cumulative application of all effective filters reduced the error rate by 290-fold, facilitating the identification of genetic differences between monozygotic twins. We also applied an adapted, less stringent set of filters to reliably identify somatic mutations in a highly rearranged tumor and to identify variants in the NA19240 HapMap genome relative to a reference set of SNVs. PMID:22178994
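The threshold-combination search can be illustrated on toy variant calls; the fields, threshold grids, and error penalty below are invented for the sketch and have no relation to the paper's 12 filters:

```python
from itertools import product

# Toy variant calls: (quality, depth, is_true_variant).
calls = [(40, 30, True), (35, 12, True), (12, 40, False),
         (50, 8, True), (8, 5, False), (30, 25, True), (15, 9, False)]

def score(q_min, d_min):
    # Apply both thresholds, then reward retained true variants and
    # heavily penalize retained errors.
    kept = [(q >= q_min and d >= d_min, truth) for q, d, truth in calls]
    tp = sum(1 for keep, truth in kept if keep and truth)
    fp = sum(1 for keep, truth in kept if keep and not truth)
    return tp - 5 * fp

grid = list(product([10, 20, 30], [5, 10, 20]))   # all threshold combinations
best = max(grid, key=lambda p: score(*p))
```

Exhaustively scoring every combination is exactly the brute-force step that becomes worthwhile once a ground-truth labeling (here the `truth` flags, in the paper the twin-concordant SNVs) is available.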

  3. Empirical Determination of Optimal Parameters for Sodium Double-Edge Magneto-Optic Filters

    NASA Astrophysics Data System (ADS)

    Barry, Ian F.; Huang, Wentao; Smith, John A.; Chu, Xinzhao

    2016-06-01

    A method is proposed for determining the optimal temperature and magnetic field strength used to condition a sodium vapor cell for use in a sodium Double-Edge Magneto-Optic Filter (Na-DEMOF). The desirable characteristics of these filters are first defined and then analyzed over a range of temperatures and magnetic field strengths, using an IDL Faraday filter simulation adapted for the Na-DEMOF. This simulation is then compared to real behavior of a Na-DEMOF constructed for use with the Chu Research Group's STAR Na Doppler resonance-fluorescence lidar for lower atmospheric observations.

  4. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be computed efficiently using conventional gradient based optimization techniques. A weighted least square (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA based approach have been compared to those of well-accepted evolutionary algorithms like the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in the convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase error obtained for the designed 5th order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA based 5th order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486

  6. Optimal filters - A unified approach for SNR and PCE. [Peak-To-Correlation-Energy

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1993-01-01

A unified approach for a general metric that encompasses both the signal-to-noise ratio (SNR) and the peak-to-correlation-energy (PCE) ratio in optical correlators is described. In this approach, the connection between optimizing SNR and optimizing PCE is achieved by considering a metric in which the central correlation irradiance is divided by the total energy of the correlation plane. The peak-to-total energy (PTE) is shown to be optimized similarly to SNR and PCE. Since PTE is a function of the search values G and beta, the optimal filter is determined with only a two-dimensional search.
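The peak-to-total-energy metric itself is straightforward to compute from a correlation plane; this sketch takes the plane maximum as the central peak, which is an assumption of the toy example:

```python
import numpy as np

def peak_to_total_energy(corr):
    # PTE: central-peak irradiance divided by total correlation-plane energy.
    energy = np.sum(np.abs(corr) ** 2)
    return np.abs(corr).max() ** 2 / energy

delta = np.zeros((8, 8)); delta[4, 4] = 3.0   # ideal sharp correlation peak
flat = np.ones((8, 8))                        # fully diffuse correlation plane
```

A single sharp peak gives PTE = 1, while a diffuse plane gives 1/(number of samples), which is why maximizing PTE drives the correlator toward sharp, low-sidelobe responses.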

  7. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, rather than only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
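The rigid-body transfer of platform acceleration from the centroid to the pilot's head location, which motivates the modification, can be written down directly; the yaw-rate numbers below are an invented example:

```python
import numpy as np

def accel_at_head(a_c, omega, alpha, r):
    # Rigid-body kinematics: acceleration at a point offset r from the
    # centroid is a_head = a_c + alpha x r + omega x (omega x r),
    # i.e. centroid acceleration plus tangential and centripetal terms.
    return a_c + np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))

a = accel_at_head(np.array([0.0, 0.0, 9.81]),       # centroid acceleration (m/s^2)
                  omega=np.array([0.0, 0.0, 1.0]),  # yaw rate 1 rad/s
                  alpha=np.zeros(3),                # no angular acceleration
                  r=np.array([2.0, 0.0, 0.0]))      # head 2 m ahead of centroid
```

Even a modest yaw rate produces a centripetal term at the head that the centroid-based cueing never sees, which is the felt-but-unfiltered cue the modification targets.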

  8. Optimal matched filter design for ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2016-02-01

Coarse grain materials are widely used in a variety of key industrial sectors such as energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructure, which is time-invariant and exhibits spectral characteristics similar to those of flaw signals. As a result, the reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters of the filter response with the objective of maximising the output signal-to-noise ratio (SNR). Experiments with a 128-element 5 MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB if the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real time with low extra computational cost.

  9. Improved design and optimization of subsurface flow constructed wetlands and sand filters

    NASA Astrophysics Data System (ADS)

    Brovelli, A.; Carranza-Díaz, O.; Rossi, L.; Barry, D. A.

    2010-05-01

Subsurface flow constructed wetlands and sand filters are engineered systems capable of eliminating a wide range of pollutants from wastewater. These devices are easy to operate, flexible and have low maintenance costs. For these reasons, they are particularly suitable for small settlements and isolated farms, and their use has substantially increased in the last 15 years. Furthermore, they are increasingly used as a tertiary (polishing) step in traditional treatment plants. Recent work observed, however, that research is still needed to better understand the biogeochemical processes occurring in the porous substrate and their mutual interactions and feedbacks, and ultimately to identify the optimal conditions for degrading or removing both traditional and anthropogenic recalcitrant pollutants, such as hydrocarbons, pharmaceuticals and personal care products, from the wastewater. Optimal pollutant elimination is achieved if the contact time between microbial biomass and the contaminated water is sufficiently long. The contact time depends on the hydraulic residence time distribution (HRTD) and is controlled by the hydrodynamic properties of the system. Previous reports noted that poor hydrodynamic behaviour is frequent, with water flowing mainly through preferential paths, resulting in a broad HRTD. In such systems the flow rate must be decreased to allow a sufficient proportion of the wastewater to experience the minimum residence time. The pollutant removal efficiency can therefore be significantly reduced, potentially leading to the failure of the system. The aim of this work was to analyse the effect of the heterogeneous distribution of the hydraulic properties of the porous substrate on the HRTD and treatment efficiency, and to develop an improved design methodology to reduce the risk of system failure and to optimize existing systems showing poor hydrodynamics.
Numerical modelling was used to evaluate the effect of substrate heterogeneity on the breakthrough curves of
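The contact-time argument can be made concrete: the mean hydraulic residence time is the first temporal moment of a tracer breakthrough curve. A sketch with an idealized Gaussian response (times and shape are invented numbers):

```python
import numpy as np

def mean_residence_time(t, c):
    # First moment of the residence time distribution E(t) = c(t) / int c dt,
    # evaluated with a rectangle rule on a uniform time grid.
    dt = t[1] - t[0]
    e = c / (c.sum() * dt)
    return float((t * e).sum() * dt)

t = np.linspace(0.0, 48.0, 481)                 # hours
c = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)      # idealized tracer breakthrough
hrt = mean_residence_time(t, c)
```

In a system with preferential flow paths the curve would show an early, sharp peak plus a long tail, pulling the mean well away from the nominal design value; the moment calculation is the same.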

  10. Performance optimization of total momentum filtering double-resonance energy selective electron heat pump

    NASA Astrophysics Data System (ADS)

    Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui

    2016-04-01

A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas the coefficient of performance (COP) is lower, than those of an ESE heat pump with a single-resonance filter. The performance characteristics of the ESE heat pumps under the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of the ESE heat pumps with a conventional filtering mechanism are vastly different from those of a device with total momentum filtering, which is induced by the extra electron momentum in addition to the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.

  11. Plate/shell topological optimization subjected to linear buckling constraints by adopting composite exponential filtering function

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang

    2016-08-01

In this paper, a model of topology optimization with linear buckling constraints is established based on an independent and continuous mapping method to minimize the plate/shell structure weight. A composite exponential function (CEF) is selected as the filtering function for the element weight, the element stiffness matrix and the element geometric stiffness matrix, which recognizes the design variables and implements the changing process of design variables from "discrete" to "continuous" and back to "discrete". The buckling constraints are approximated as explicit formulations based on the Taylor expansion and the filtering function. The optimization model is transformed to dual programming and solved by the dual sequence quadratic programming algorithm. Finally, three numerical examples with the power function and CEF as filtering functions are analyzed and discussed to demonstrate the feasibility and efficiency of the proposed method.

  12. Optimal H2 and H∞ mode-independent filters for generalised Bernoulli jump systems

    NASA Astrophysics Data System (ADS)

    Fioravanti, A. R.; Gonçalves, A. P. C.; Geromel, J. C.

    2015-02-01

This paper provides the optimal solution of the filtering design problem for a special class of discrete-time Markov jump linear systems whose transition probability matrix has identical rows. In the two-mode case, this is equivalent to saying that the random variable has a Bernoulli distribution. For that class of dynamic systems we design, with the help of new necessary and sufficient linear matrix inequality conditions, H2 and H∞ optimal mode-independent filters with the same order as the plant. As a first proposal available in the literature, for partial information characterised by cluster availability of the mode, we also show it is possible to design optimal full-order linear filters. If some plant matrices do not vary within the same cluster, we show that the optimal filter exhibits the internal model structure. We complete the results with illustrative examples. A realistic practical application considering sensors connected to a network using a communication protocol such as Token Ring is included in order to put in evidence the usefulness of the theoretical results.

  13. Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings

    SciTech Connect

    Toroker, Zeev; Horowitz, Moshe

    2008-03-15

We present an optimized split-step method for solving the nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time without significantly affecting the accuracy of the results. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
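A split-step scheme of this general kind can be sketched for the nonlinear Schrödinger equation, used here as a stand-in: the paper's coupled-mode grating equations need a different propagation operator, and the grid and step sizes below are assumptions:

```python
import numpy as np

def split_step_nls(u0, dz, steps, beta2=-1.0, gamma_nl=1.0, T=40.0):
    # Symmetric (Strang) split-step Fourier solver for
    # i u_z = (beta2/2) u_tt - gamma |u|^2 u:
    # half a dispersion step in Fourier space, a full nonlinear phase
    # rotation in real space, then another half dispersion step.
    n = len(u0)
    w = 2 * np.pi * np.fft.fftfreq(n, d=T / n)
    half_disp = np.exp(0.5j * (beta2 / 2) * w ** 2 * dz)
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        u = u * np.exp(1j * gamma_nl * np.abs(u) ** 2 * dz)
        u = np.fft.ifft(half_disp * np.fft.fft(u))
    return u

t = np.linspace(-20.0, 20.0, 256, endpoint=False)
u0 = 1.0 / np.cosh(t)                    # fundamental soliton initial condition
u1 = split_step_nls(u0, dz=0.01, steps=200)
```

Both sub-steps are unit-modulus multiplications, so the discrete pulse energy is conserved to round-off, which makes energy conservation a convenient built-in sanity check on the step sizes.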

  14. Optimal nonlinear filtering for track-before-detect in IR image sequences

    NASA Astrophysics Data System (ADS)

    Rozovskii, Boris L.; Petrov, Anton

    1999-10-01

    The 3D matched filter proposed by Reed et al. and its generalizations provide a powerful processing technique for detecting moving low observable targets. This technique is a centerpiece of various track-before-detect (TBD) systems. However, the 3D matched filter was designed for constant velocity targets and its applicability to more complicated patterns of target dynamics is not obvious. In this paper the 3D matched filter and BAVF are extended to the case of switching multiple models of target dynamics. We demonstrate that the 3D matched filtering can be cast into a general framework of optimal spatio-temporal nonlinear filtering for hidden Markov models. A robust and computationally efficient Bayesian algorithm for detection and tracking of low observable agile targets in IR Search and Track (IRST) systems is presented. The proposed algorithm is fully sequential. It facilitates optimal fusion of sensor measurements and prior information regarding possible threats. The algorithm is implemented as a TBD subsystem for IRST, however the general methodology is equally applicable for other imaging sensors.

  15. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and the optimal interpolation (OI) filter are examined for effectiveness as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
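The forecast/analysis cycle common to the KB filter and OI is visible even in a scalar Kalman filter with random-walk dynamics; this toy example (invented noise levels, not the shallow-water setup) shows the gain balancing new measurements against the propagated estimate:

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    # Scalar Kalman filter for x_k = x_{k-1} + w_k, observed as z_k = x_k + v_k,
    # with process noise variance q and measurement noise variance r.
    x, p, out = x0, p0, []
    for zk in z:
        p = p + q                    # forecast (propagation) step inflates variance
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # analysis (update) step
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
truth = 5.0
z = truth + rng.normal(0.0, 1.0, 200)    # noisy observations of a constant state
est = kalman_1d(z, q=1e-5, r=1.0)
```

With small q the gain decays toward zero and the estimate settles near the sample mean, illustrating the balance the abstract describes between information gained from measurements and information lost in propagation.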

  16. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.

  17. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    SciTech Connect

    Singer, M A; Wang, S L; Diachin, D P

    2009-12-03

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  18. Optimal band selection in hyperspectral remote sensing of aquatic benthic features: a wavelet filter window approach

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.

    2006-09-01

    This paper describes a wavelet-based approach to derivative spectroscopy. The approach is utilized to select, through optimization, optimal channels or bands for use in derivative-based remote sensing algorithms. The approach is applied to airborne and modeled or synthetic reflectance signatures of environmental media and of features or objects within such media, such as submerged benthic vegetation canopies. The technique can also be applied to selected pixels identified within a hyperspectral image cube obtained on board an airborne, ground-based, or subsurface mobile imaging system. This wavelet-based image processing technique is an extremely fast numerical method for conducting higher-order derivative spectroscopy that accommodates nonlinear filter windows. Essentially, the wavelet filter scans a measured or synthetic signature in an automated sequential manner in order to develop a library of filtered spectra. The library is utilized in real time to select the optimal channels for direct algorithm application. The unique wavelet-based derivative filtering technique makes use of a translating and dilating derivative spectroscopy signal processing (TDDS-SP (R)) approach grounded in remote sensing science and radiative transfer processes, unlike other signal processing techniques applied to hyperspectral signatures.
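
    A generic illustration of the scanning idea (not the proprietary TDDS-SP algorithm): convolve a synthetic reflectance edge with derivative-of-Gaussian kernels at several dilations and keep the band with the strongest derivative response. The kernel family, scales, and synthetic spectrum are all assumptions.

```python
import math

def dog_kernel(scale, half_width):
    # first-derivative-of-Gaussian kernel at the given dilation (scale)
    return [-k / scale ** 2 * math.exp(-k * k / (2 * scale ** 2))
            for k in range(-half_width, half_width + 1)]

def convolve_same(signal, kernel):
    h = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - h
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

# synthetic reflectance spectrum with an absorption edge at band 60
spectrum = [1.0 / (1.0 + math.exp(-(i - 60) / 3.0)) for i in range(120)]

best = None  # (response, scale, band), filled in by the scan
for scale in (1.0, 2.0, 4.0):
    hw = int(4 * scale)
    deriv = convolve_same(spectrum, dog_kernel(scale, hw))
    interior = range(hw, len(deriv) - hw)  # avoid edge-truncation artifacts
    band = max(interior, key=lambda i: abs(deriv[i]))
    if best is None or abs(deriv[band]) > best[0]:
        best = (abs(deriv[band]), scale, band)
```

    The scan correctly localizes the absorption edge; a real filtered-spectra library would store every (scale, band) response for later real-time channel selection.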

  19. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    To address the high computational cost of the traditional Kalman filter in SINS/GPS integrated navigation, a practical optimization algorithm based on offline derivation and parallel processing, exploiting the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided through offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, being a purely numerical approach, the method requires no precision-losing transformation or approximation of system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
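
    The system-specific offline derivation cannot be reproduced from the abstract, but the symmetry saving it exploits can be sketched: the covariance time update P' = F P Fᵀ + Q is symmetric, so only the upper triangle needs computing. The matrices below are illustrative toy values.

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def predict_cov_symmetric(F, P, Q):
    """Kalman time update P' = F P F^T + Q computing only the upper
    triangle (~n(n+1)/2 dot products) and mirroring it."""
    n = len(F)
    FP = matmul(F, P)
    Pn = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):  # upper triangle only
            Pn[i][j] = sum(FP[i][k] * F[j][k] for k in range(n)) + Q[i][j]
            Pn[j][i] = Pn[i][j]  # mirror: P' is symmetric by construction
    return Pn

F = [[1.0, 0.1], [0.0, 1.0]]   # toy constant-velocity transition
P = [[2.0, 0.3], [0.3, 1.0]]   # symmetric covariance
Q = [[0.01, 0.0], [0.0, 0.01]]

P_fast = predict_cov_symmetric(F, P, Q)
```

    The paper's block-matrix derivation goes further, dropping multiplies against structurally zero blocks of F; the halved triangle above is the simplest instance of the same idea.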

  20. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    To address the high computational cost of the traditional Kalman filter in SINS/GPS integrated navigation, a practical optimization algorithm based on offline derivation and parallel processing, exploiting the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided through offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, being a purely numerical approach, the method requires no precision-losing transformation or approximation of system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  1. Optimal design of 2D digital filters based on neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-hua; He, Yi-gang; Zheng, Zhe-zhao; Zhang, Xu-hong

    2005-02-01

    Two-dimensional (2-D) digital filters are widely used in image processing and other 2-D digital signal processing fields, but designing 2-D filters is much more difficult than designing one-dimensional (1-D) ones. In this paper, a new approach for designing linear-phase 2-D digital filters is described, based on a new neural networks algorithm (NNA). By using the symmetry of the given 2-D magnitude specification, a compact expression for the magnitude response of a linear-phase 2-D finite impulse response (FIR) filter is derived. Consequently, the problem of optimally designing linear-phase 2-D FIR digital filters reduces to approximating the desired 2-D magnitude response with this compact expression. To solve the problem, a new NNA is presented based on minimizing the mean-squared error, and a convergence theorem is presented and proved to ensure that the designed 2-D filter is stable. Three design examples are given to illustrate the effectiveness of the NNA-based design approach.
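
    A toy stand-in for the paper's NNA: a quadrantally symmetric linear-phase 2-D FIR filter has a real cosine-series magnitude response, so minimizing the mean-squared error against a desired response is a quadratic problem that plain gradient descent (assumed here as the essence of the network's training rule) can solve. Grid size, filter order, and learning rate are illustrative.

```python
import math

M = 3                                    # 3x3 quadrant of cosine terms
W = [k * math.pi / 7 for k in range(8)]  # frequency grid on [0, pi]
C = [[math.cos(m * w) for w in W] for m in range(M)]  # cosine table

# desired response: ideal circular lowpass with cutoff pi/2
D = [[1.0 if math.hypot(w1, w2) <= math.pi / 2 else 0.0 for w2 in W]
     for w1 in W]

def response(c, i, j):
    # real magnitude response of the symmetric linear-phase 2-D FIR
    return sum(c[m][n] * C[m][i] * C[n][j] for m in range(M) for n in range(M))

def mse(c):
    e = [(response(c, i, j) - D[i][j]) ** 2 for i in range(8) for j in range(8)]
    return sum(e) / len(e)

c = [[0.0] * M for _ in range(M)]
err0 = mse(c)
lr = 0.05
for _ in range(1000):                    # steepest descent on the MSE
    g = [[0.0] * M for _ in range(M)]
    for i in range(8):
        for j in range(8):
            r = response(c, i, j) - D[i][j]
            for m in range(M):
                for n in range(M):
                    g[m][n] += 2.0 * r * C[m][i] * C[n][j] / 64
    for m in range(M):
        for n in range(M):
            c[m][n] -= lr * g[m][n]
err1 = mse(c)
```

    The paper's contribution is proving convergence of its specific NNA; this sketch only shows why the symmetric parameterization turns the design into a tractable least-squares fit.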

  2. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method.

    PubMed

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-12-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e., when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8-2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75-150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed.
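
    The published thresholds are not given in the abstract, so the following is only a schematic of step-wise filtering applied to an abstracted (time, depth) dive profile, with invented cutoffs:

```python
# Toy step-wise drift filter on an abstracted dive profile.
# Thresholds are illustrative, not the paper's calibrated values.

dive = [  # (time_s, depth_m): fast descent, slow drift phase, fast ascent
    (0, 0), (120, 300), (240, 330), (360, 365), (480, 400), (600, 0),
]

segments = []
for (t0, d0), (t1, d1) in zip(dive, dive[1:]):
    rate = (d1 - d0) / (t1 - t0)  # vertical speed between profile points, m/s
    segments.append((t0, t1, rate))

# Step 1: discard fast descent/ascent segments (active swimming).
slow = [s for s in segments if abs(s[2]) < 0.6]
# Step 2: keep slow segments with a consistent (here: sinking) direction,
# a crude proxy for passive drifting of a negatively buoyant seal.
drift = [s for s in slow if s[2] > 0]
drift_rates = [round(s[2], 3) for s in drift]
```

    The retained segment rates form the drift rate time series whose trend toward zero (neutral buoyancy) is the foraging-success signal described above.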

  3. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method.

    PubMed

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-12-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e., when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8-2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75-150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  4. Fishing for drifts: detecting buoyancy changes of a top marine predator using a step-wise filtering method

    PubMed Central

    Gordine, Samantha Alex; Fedak, Michael; Boehme, Lars

    2015-01-01

    In southern elephant seals (Mirounga leonina), fasting- and foraging-related fluctuations in body composition are reflected by buoyancy changes. Such buoyancy changes can be monitored by measuring changes in the rate at which a seal drifts passively through the water column, i.e., when all active swimming motion ceases. Here, we present an improved knowledge-based method for detecting buoyancy changes from compressed and abstracted dive profiles received through telemetry. By step-wise filtering of the dive data, the developed algorithm identifies fragments of dives that correspond to times when animals drift. In the dive records of 11 southern elephant seals from South Georgia, this filtering method identified 0.8–2.2% of all dives as drift dives, indicating large individual variation in drift diving behaviour. The obtained drift rate time series show that, at the beginning of each migration, all individuals were strongly negatively buoyant. Over the following 75–150 days, the buoyancy of all individuals peaked close to or at neutral buoyancy, indicative of a seal's foraging success. Independent verification with visually inspected detailed high-resolution dive data confirmed that this method is capable of reliably detecting buoyancy changes in the dive records of drift diving species using abstracted data. This also affirms that abstracted dive profiles convey the geometric shape of drift dives in sufficient detail for them to be identified. Further, it suggests that, using this step-wise filtering method, buoyancy changes could be detected even in old datasets with compressed dive information, for which conventional drift dive classification previously failed. PMID:26486362

  5. Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization.

    PubMed

    Mirzaalian, Hengameh; Lee, Tim K; Hamarneh, Ghassan

    2014-12-01

    Hair occlusion is one of the main challenges facing automatic lesion segmentation and feature extraction for skin cancer applications. We propose a novel method for simultaneously enhancing both light and dark hairs with variable widths, from dermoscopic images, without the prior knowledge of the hair color. We measure hair tubularness using a quaternion color curvature filter. We extract optimal hair features (tubularness, scale, and orientation) using Markov random field theory and multilabel optimization. We also develop a novel dual-channel matched filter to enhance hair pixels in the dermoscopic images while suppressing irrelevant skin pixels. We evaluate the hair enhancement capabilities of our method on hair-occluded images generated via our new hair simulation algorithm. Since hair enhancement is an intermediate step in a computer-aided diagnosis system for analyzing dermoscopic images, we validate our method and compare it to other methods by studying its effect on: 1) hair segmentation accuracy; 2) image inpainting quality; and 3) image classification accuracy. The validation results on 40 real clinical dermoscopic images and 94 synthetic data demonstrate that our approach outperforms competing hair enhancement methods. PMID:25312927

  6. Global localization of 3D anatomical structures by pre-filtered Hough forests and discrete optimization.

    PubMed

    Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg

    2013-12-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450

  7. Global localization of 3D anatomical structures by pre-filtered Hough Forests and discrete optimization

    PubMed Central

    Donner, René; Menze, Bjoern H.; Bischof, Horst; Langs, Georg

    2013-01-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates’ weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450

  8. Two-step fringe pattern analysis with a Gabor filter bank

    NASA Astrophysics Data System (ADS)

    Rivera, Mariano; Dalmau, Oscar; Gonzalez, Adonai; Hernandez-Lopez, Francisco

    2016-10-01

    We propose a two-shot fringe analysis method for Fringe Patterns (FPs) with random phase-shift and changes in illumination components. These conditions reduce the acquisition time and simplify the experimental setup. Our method builds upon a Gabor Filter (GF) bank that eliminates noise and estimates the phase from the FPs. The GF bank yields two phase maps with a sign ambiguity between them. Because the random sign map is common to both computed phases, we can correct the sign ambiguity. We estimate a local phase-shift from the absolute wrapped residual between the estimated phases, and then robustly compute the global phase-shift. In order to unwrap the phase, we propose a robust procedure that interpolates unreliable phase regions obtained after applying the GF bank. We present numerical experiments that demonstrate the performance of our method.
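
    A minimal 1-D sketch of the core idea, assuming a standard complex (quadrature) Gabor filter: the argument of the filter response recovers the local fringe phase. The paper's 2-D bank over orientations and frequencies generalizes this; the frequency, width, and test signal below are invented.

```python
import cmath
import math

f0 = 0.1      # fringe carrier frequency, cycles/sample (illustrative)
sigma = 8.0   # Gaussian envelope width
half = 24     # kernel half-length

def gabor(k):
    # complex Gabor: Gaussian window times a quadrature carrier
    return math.exp(-k * k / (2 * sigma ** 2)) * cmath.exp(-2j * math.pi * f0 * k)

kernel = [gabor(k) for k in range(-half, half + 1)]

phi0 = 0.7    # true (constant) phase offset of the synthetic fringe
signal = [0.5 + 0.4 * math.cos(2 * math.pi * f0 * n + phi0) for n in range(200)]

n = 100       # estimate the phase at the signal centre
resp = sum(kernel[k + half] * signal[n + k] for k in range(-half, half + 1))
phase_at_n = cmath.phase(resp)  # ~ (2*pi*f0*n + phi0) wrapped to (-pi, pi]
```

    The Gaussian envelope suppresses both the background (DC) term and the conjugate fringe term, which is why the response argument isolates the phase.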

  9. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
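
    The continuation idea for the discontinuous thrust profile can be shown in one line: replace the bang-bang law (u = 1 where the switching function S < 0, u = 0 where S > 0) by a sigmoid in S, and sharpen it as the continuation parameter shrinks. The sigmoid form is an assumption here; the abstract does not specify the paper's smoothing function.

```python
import math

def smooth_control(S, eps):
    """Smoothed bang-bang thrust: -> 1 as S/eps -> -inf, -> 0 as S/eps -> +inf.
    eps is the continuation parameter driven toward zero."""
    return 1.0 / (1.0 + math.exp(S / eps))

S = 0.2  # sample value of the switching function (illustrative)
u_coarse = smooth_control(S, 1.0)    # early continuation step: smeared control
u_sharp = smooth_control(S, 0.01)    # late continuation step: near bang-bang
```

    Each continuation step re-solves the boundary value problem with a smaller eps, using the previous solution as the initial guess, until the control is effectively discontinuous.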

  10. Insights into HER2 signaling from step-by-step optimization of anti-HER2 antibodies

    PubMed Central

    Fu, Wenyan; Wang, Yuxiao; Zhang, Yunshan; Xiong, Lijuan; Takeda, Hiroaki; Ding, Li; Xu, Qunfang; He, Lidong; Tan, Wenlong; Bethune, Augus N.; Zhou, Lijun

    2014-01-01

    HER2, a ligand-free tyrosine kinase receptor of the HER family, is frequently overexpressed in breast cancer. The anti-HER2 antibody trastuzumab has shown significant clinical benefits in metastatic breast cancer; however, resistance to trastuzumab is common. The development of monoclonal antibodies that have complementary mechanisms of action results in a more comprehensive blockade of ErbB2 signaling, especially HER2/HER3 signaling. Use of such antibodies may have clinical benefits if these antibodies can become widely accepted. Here, we describe a novel anti-HER2 antibody, hHERmAb-F0178C1, which was isolated from a screen of a phage display library. A step-by-step optimization method was employed to maximize the inhibitory effect of this anti-HER2 antibody. Crystallographic analysis was used to determine the three-dimensional structure to 3.5 Å resolution, confirming that the epitope of this antibody is in domain III of HER2. Moreover, this novel anti-HER2 antibody exhibits superior efficacy in blocking HER2/HER3 heterodimerization and signaling, and its use in combination with pertuzumab has a synergistic effect. Characterization of this antibody revealed the important role of a ligand binding site within domain III of HER2. The results of this study clearly indicate the unique potential of hHERmAb-F0178C1, and its complementary inhibition effect on HER2/HER3 signaling warrants its consideration as a promising clinical treatment. PMID:24838231

  11. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
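
    A scalar, single-stage (non-FPGA) sketch of the thread-decomposition idea: each retained output of a decimate-by-M FIR is one independent finite convolution (a "thread"), so only the needed outputs are ever computed, and the result matches filter-then-discard exactly. Signal, taps, and rate are illustrative.

```python
def fir_decimate_threads(x, h, M):
    """One 'thread' per retained output sample: an independent dot product."""
    outs = []
    for m in range(0, len(x), M):          # only every M-th output is computed
        acc = 0.0
        for j, hj in enumerate(h):
            if m - j >= 0:                 # zero initial conditions
                acc += hj * x[m - j]
        outs.append(acc)
    return outs

def fir_then_decimate(x, h, M):
    """Reference: filter at the full rate, then discard M-1 of M outputs."""
    y = [sum(hj * x[n - j] for j, hj in enumerate(h) if n - j >= 0)
         for n in range(len(x))]
    return y[::M]

x = [float(n % 7) for n in range(32)]      # toy input
h = [0.25, 0.5, 0.25]                      # toy lowpass taps
decimated = fir_decimate_threads(x, h, 4)
```

    Because the threads share no state, they map naturally onto concurrent hardware; the paper's contribution is allocating taps and threads optimally across a multi-stage FPGA design.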

  12. Dose optimization of breast balloon brachytherapy using a stepping 192Ir HDR source.

    PubMed

    Choi, Chang Hyun; Ye, Sung-Joon; Parsai, E Ishmael; Shen, Sui; Meredith, Ruby; Brezovich, Ivan A; Ove, Roger

    2009-02-03

    There is a considerable underdosage (11%-13%) of PTV due to anisotropy of a stationary source in breast balloon brachytherapy. We improved the PTV coverage by varying multiple dwell positions and weights. We assumed that the diameter of spherical balloons varied from 4.0 cm to 5.0 cm, that the PTV was a 1-cm thick spherical shell over the balloon (reduced by the small portion occupied by the catheter path), and that the number of dwell positions varied from 2 to 13 with 0.25-cm steps, oriented symmetrically with respect to the balloon center. By assuming that the perfect PTV coverage can be achieved by spherical dose distributions from an isotropic source, we developed an optimization program to minimize two objective functions defined as: (1) the number of PTV-voxels having more than 10% difference between optimized doses and spherical doses, and (2) the difference between optimized doses and spherical doses per PTV-voxel. The optimal PTV coverage occurred when applying 8-11 dwell positions with weights determined by the optimization scheme. Since the optimization yields ellipsoidal isodose distributions along the catheter, there is relative skin sparing for cases with source movement approximately tangent to the skin. We also verified the optimization in CT-based treatment planning systems. Our volumetric dose optimization for PTV coverage showed close agreement to linear or multiple-points optimization results from the literature. The optimization scheme provides a simple and practical solution applicable to the clinic.
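
    A toy version of the dwell-weight optimization: an inverse-square kernel with an invented anisotropy factor, dose points on a spherical PTV shell, and projected gradient descent keeping dwell weights nonnegative. Geometry, kernel, and constants are illustrative only, not clinical dosimetry.

```python
import math

R_t = 3.0                                    # PTV shell radius, cm
dwells = [-1.5 + 0.5 * j for j in range(7)]  # dwell z-positions, 0.25-cm-scale steps
angles = [math.radians(15 * k) for k in range(1, 12)]  # skip catheter poles

def kernel(theta, z):
    # invented anisotropic point-source kernel: 1/r^2 times a polar factor
    rho, dz = R_t * math.sin(theta), R_t * math.cos(theta) - z
    r2 = rho * rho + dz * dz
    aniso = 0.7 + 0.3 * (rho / math.sqrt(r2))
    return aniso / r2

A = [[kernel(t, z) for z in dwells] for t in angles]

def sse(w):  # squared error of shell doses vs. unit prescription
    return sum((sum(a * wj for a, wj in zip(row, w)) - 1.0) ** 2 for row in A)

w = [1.0] * len(dwells)
f_start = sse(w)
lr = 0.2
for _ in range(2000):                        # projected gradient descent
    resid = [sum(a * wj for a, wj in zip(row, w)) - 1.0 for row in A]
    for j in range(len(w)):
        g = 2.0 * sum(r * row[j] for r, row in zip(resid, A))
        w[j] = max(0.0, w[j] - lr * g)       # dwell weights stay nonnegative
f_end = sse(w)
```

    Flattening the shell dose this way reproduces the qualitative finding above: spreading weighted dwell positions along the catheter compensates for source anisotropy that a single stationary dwell cannot.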

  13. Optimization of filtering criterion for SEQUEST database searching to improve proteome coverage in shotgun proteomics

    PubMed Central

    Jiang, Xinning; Jiang, Xiaogang; Han, Guanghui; Ye, Mingliang; Zou, Hanfa

    2007-01-01

    Background In proteomic analysis, MS/MS spectra acquired by a mass spectrometer are assigned to peptides by database searching algorithms such as SEQUEST. The assignments of peptides to MS/MS spectra by the SEQUEST searching algorithm are scored by several measures, including Xcorr, ΔCn, Sp, Rsp, matched ion count, and so on. A filtering criterion combining several of these scores is used to separate correct identifications from random assignments. To date, however, such filtering criteria have not been systematically optimized. Results In this study, we implemented a machine learning approach known as a predictive genetic algorithm (GA) to optimize filtering criteria and maximize the number of identified peptides at a fixed false-discovery rate (FDR) for SEQUEST database searching. As the FDR was directly determined by a decoy database search scheme, the GA-based optimization approach did not require any prior knowledge of the characteristics of the data set, which represents a significant advantage over statistical approaches such as PeptideProphet. Compared with PeptideProphet, the GA-based approach achieved similar performance in distinguishing true from false assignments with only 1/10 of the processing time. Moreover, the GA-based approach can be easily extended to process other database search results, as it does not rely on any assumption about the data. Conclusion Our results indicate that filtering criteria should be optimized individually for different samples. The newly developed software using the GA provides a convenient and fast way to create tailored optimal criteria for different proteome samples to improve proteome coverage. PMID:17761002
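
    A miniature GA of the kind described, run on synthetic target/decoy score pairs: thresholds on two scores are evolved to maximize accepted target PSMs subject to a decoy-estimated FDR cap. The score distributions, the 5% FDR cap, and the GA settings are all invented for illustration.

```python
import random

random.seed(1)

# synthetic (Xcorr, dCn)-like score pairs: 400 true + 200 false targets, 600 decoys
targets = [(random.gauss(3.0, 0.8), random.gauss(0.25, 0.08)) for _ in range(400)]
targets += [(random.gauss(1.5, 0.5), random.gauss(0.08, 0.05)) for _ in range(200)]
decoys = [(random.gauss(1.5, 0.5), random.gauss(0.08, 0.05)) for _ in range(600)]

def fitness(th):
    """Accepted target count if the decoy-estimated FDR is within the cap."""
    xc_min, dcn_min = th
    n_t = sum(1 for xc, dcn in targets if xc >= xc_min and dcn >= dcn_min)
    n_d = sum(1 for xc, dcn in decoys if xc >= xc_min and dcn >= dcn_min)
    fdr = n_d / n_t if n_t else 1.0
    return n_t if fdr <= 0.05 else 0      # infeasible criteria score zero

pop = [(random.uniform(0, 4), random.uniform(0, 0.4)) for _ in range(30)]
for _ in range(40):                       # toy GA: selection + mutation only
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [(max(0.0, p[0] + random.gauss(0, 0.2)),
                      max(0.0, p[1] + random.gauss(0, 0.02)))
                     for p in parents for _ in range(2)]
best = max(pop, key=fitness)
best_score = fitness(best)
```

    As in the paper, the FDR constraint comes entirely from decoy counts, so no distributional assumptions about the scores are needed.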

  14. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy

    SciTech Connect

    Cai, Jiandong; Zhang, Li; Wang, Michael Yu

    2015-12-15

    In multifrequency atomic force microscopy (AFM), a probe whose resonance frequencies fall at integer harmonics of the fundamental offers a remarkable improvement in detection sensitivity at specific harmonic components. The harmonic order is selected according to how sensitive its amplitude is to material properties, e.g., elasticity. Previous harmonic probe designs have been unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a stepped cross section, in which the top and bottom steps have variable widths while the middle step is kept constant. Higher-order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using a focused ion beam milling technique and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement.

  15. Design and optimization of a harmonic probe with step cross section in multifrequency atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Cai, Jiandong; Wang, Michael Yu; Zhang, Li

    2015-12-01

    In multifrequency atomic force microscopy (AFM), a probe whose resonance frequencies fall at integer harmonics of the fundamental offers a remarkable improvement in detection sensitivity at specific harmonic components. The harmonic order is selected according to how sensitive its amplitude is to material properties, e.g., elasticity. Previous harmonic probe designs have been unable to provide large design freedom while maintaining structural integrity. Herein, we propose a harmonic probe with a stepped cross section, in which the top and bottom steps have variable widths while the middle step is kept constant. Higher-order resonance frequencies are tailored to be integer multiples of the fundamental resonance frequency. The probe design is implemented within a structural optimization framework. The optimally designed probe is micromachined using a focused ion beam milling technique and then measured with an AFM. The measurement results agree well with our resonance frequency assignment requirement.

  16. AFM tip characterization by using FFT filtered images of step structures.

    PubMed

    Yan, Yongda; Xue, Bo; Hu, Zhenjiang; Zhao, Xuesen

    2016-01-01

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using an AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results obtained with an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. The spectra of profiles simulated with tips of different radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible to this variation, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. PMID:26517548
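
    The monotonic radius-to-harmonic relationship can be reproduced in a toy simulation: dilate an ideal step with a hemispherical tip profile and compare a low-frequency DFT harmonic amplitude for two radii (the larger tip broadens the imaged step, lowering the first harmonic). Step height, radii, and sampling below are assumptions.

```python
import math

N = 128
step_h = 10.0
surface = [0.0 if i < N // 2 else step_h for i in range(N)]  # ideal step

def image_with_tip(surface, radius, dx=1.0):
    """AFM image as morphological dilation of the surface by the tip shape."""
    r = int(radius / dx)
    prof = []
    for i in range(len(surface)):
        best = surface[i]                 # tip apex contact (u = 0)
        for u in range(-r, r + 1):
            j = i + u
            if 0 <= j < len(surface):
                # hemispherical tip height at lateral offset u
                tip = radius - math.sqrt(max(radius ** 2 - (u * dx) ** 2, 0.0))
                best = max(best, surface[j] - tip)
        prof.append(best)
    return prof

def harmonic_amp(prof, k):
    """Magnitude of the k-th DFT harmonic (plain DFT for clarity)."""
    n = len(prof)
    re = sum(p * math.cos(2 * math.pi * k * i / n) for i, p in enumerate(prof))
    im = sum(p * math.sin(2 * math.pi * k * i / n) for i, p in enumerate(prof))
    return math.hypot(re, im) / n

a_small = harmonic_amp(image_with_tip(surface, 4.0), 1)
a_large = harmonic_amp(image_with_tip(surface, 12.0), 1)
```

    Inverting such a monotone radius-to-amplitude curve (tabulated from simulations) is the essence of the estimation method described above.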

  17. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    PubMed

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
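
    A toy rendering of the two-step procedure with an invented cost model: step 1 gives the optimal net driving pressure (NDP) in closed form from the electricity price and a lumped membrane price index, and step 2 searches the recovery. All coefficients and the cost expressions are illustrative assumptions, not the paper's model.

```python
import math

# toy per-unit-permeate cost model (all coefficients invented):
#   energy ~ ce * (NDP + pi(r)), membrane ~ km / NDP,
#   pretreatment/retentate ~ (cp + ch * (1 - r)) / r,
#   feed-side osmotic pressure pi(r) = pi0 / (1 - r)
ce, km, cp, ch, pi0 = 0.06, 1.2, 0.10, 0.05, 3.0

# Step 1: d/dNDP [ce * NDP + km / NDP] = 0  ->  NDP* = sqrt(km / ce),
# depending only on the electricity price and membrane price index
ndp_opt = math.sqrt(km / ce)

def total_cost(r):
    return ce * (ndp_opt + pi0 / (1.0 - r)) + km / ndp_opt \
           + (cp + ch * (1.0 - r)) / r

# Step 2: search the recovery on (0, 1) at the fixed optimal NDP
grid = [i / 1000 for i in range(10, 990)]
r_opt = min(grid, key=total_cost)
```

    The decoupling in step 1 mirrors the abstract's claim that the optimal net driving pressure depends only on the electricity price and the membrane price index, while the optimal recovery also trades off osmotic pressure, pretreatment, and retentate handling.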

  18. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants.

    PubMed

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R; Sresty, N V Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-07-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants.

  19. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  20. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  1. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
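    A minimal sketch of the kind of filter and metric discussed above, assuming a simple sign-of-real-part phase binarization and a relative amplitude threshold; the threshold value here is an illustrative stand-in for the analyzer-angle design parameter, not the paper's encoding.

```python
import numpy as np

def bpaf(ref, amp_threshold=0.1):
    """Toy binary phase and amplitude filter from a reference image."""
    F = np.fft.fft2(ref)
    phase = np.where(F.real >= 0, 1.0, -1.0)              # binary phase (0 / pi)
    amp = np.abs(F) >= amp_threshold * np.abs(F).max()    # binary amplitude mask
    return amp * phase

def pce(corr_plane):
    """Peak-to-correlation-energy: squared peak over total plane energy."""
    c = np.abs(corr_plane) ** 2
    return c.max() / c.sum()

rng = np.random.default_rng(0)
ref = rng.standard_normal((32, 32))       # zero-mean toy in-class image
other = rng.standard_normal((32, 32))     # out-of-class image
H = bpaf(ref)
matched = np.fft.ifft2(np.fft.fft2(ref) * H)      # H is real, so conj(H) = H
mismatched = np.fft.ifft2(np.fft.fft2(other) * H)
```

    Sweeping `amp_threshold` trades peak sharpness (PCE) against noise tolerance and discrimination, which is the trade-off the study quantifies.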

  2. Design and optimization of stepped austempered ductile iron using characterization techniques

    SciTech Connect

    Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.

    2013-09-15

    Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. Austenitization and conventional austempering time was selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and high carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering in contrast to conventional austempering, on which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused important reinforcement effect. • Carbide precipitation was absent during stepped treatment.

  3. Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Qiu, Ping

    2015-05-01

    Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter is typically used to isolate the soliton from the residual in order to avoid excessive energy deposition on the sample, which ultimately leads to optical damage. However, when the soliton is not cleanly separated from the residual, a criterion for choosing the optimal filtering wavelength has been lacking. Here, we propose maximizing the ratio between the multiphoton signal and the nth power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton overlaps substantially with the residual. This optimization is based on the most efficient signal generation and depends entirely on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage while maintaining high signal levels for efficient deep penetration.
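    The proposed criterion reduces to picking the longpass cutoff that maximizes S/E^n, with S the measured multiphoton signal, E the transmitted pulse energy, and n the order of the process (n = 2 for two-photon excitation). A sketch with hypothetical measured values (the arrays below are invented for illustration):

```python
import numpy as np

def optimal_cutoff(cutoffs, signal, energy, n=2):
    """Return the cutoff wavelength maximizing the figure of merit S/E^n."""
    fom = signal / energy ** n
    return cutoffs[int(np.argmax(fom))]

# hypothetical longpass cutoffs (nm) with measured signal and pulse energy
cutoffs = np.array([1575, 1600, 1625, 1650, 1675])
energy = np.array([1.00, 0.80, 0.62, 0.55, 0.53])   # relative transmitted energy
signal = np.array([0.90, 0.85, 0.80, 0.70, 0.40])   # relative two-photon signal
best = optimal_cutoff(cutoffs, signal, energy, n=2)
```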

  4. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters.

    PubMed

    Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T

    2016-06-01

    Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters.

  5. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters.

    PubMed

    Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T

    2016-06-01

    Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters. PMID:27247483

  6. Implicit application of polynomial filters in a k-step Arnoldi method

    NASA Technical Reports Server (NTRS)

    Sorensen, D. C.

    1990-01-01

    The Arnoldi process is a well known technique for approximating a few eigenvalues and corresponding eigenvectors of a general square matrix. Numerical difficulties such as loss of orthogonality and assessment of the numerical quality of the approximations as well as a potential for unbounded growth in storage have limited the applicability of the method. These issues are addressed by fixing the number of steps in the Arnoldi process at a prescribed value k and then treating the residual vector as a function of the initial Arnoldi vector. This starting vector is then updated through an iterative scheme that is designed to force convergence of the residual to zero. The iterative scheme is shown to be a truncation of the standard implicitly shifted QR-iteration for dense problems and it avoids the need to explicitly restart the Arnoldi sequence. The main emphasis of this paper is on the derivation and analysis of this scheme. However, there are obvious ways to exploit parallelism through the matrix-vector operations that comprise the majority of the work in the algorithm. Preliminary computational results are given for a few problems on some parallel and vector computers.
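    The fixed-k Arnoldi factorization the restarting scheme builds on can be sketched directly; the implicit QR-based restart itself is omitted here, and the diagonal test matrix is an illustrative choice with a known dominant eigenvalue.

```python
import numpy as np

def arnoldi(A, v0, k):
    """k steps of the Arnoldi process: V has orthonormal columns and H is
    upper Hessenberg with A V[:, :k] = V H (modified Gram-Schmidt variant)."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):               # orthogonalize against previous basis
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # exact invariant subspace found
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, k = 100, 20
A = np.diag(np.concatenate(([10.0], rng.uniform(0, 1, n - 1))))  # known spectrum
V, H = arnoldi(A, rng.standard_normal(n), k)
ritz = np.linalg.eigvals(H[:k, :k]).real     # Ritz values approximate eigenvalues
```

    In the paper's scheme, the residual left after these k steps drives an implicitly shifted QR update of the starting vector instead of an explicit restart.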

  7. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. A 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best result in terms of peak reconstruction error and transition band error while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
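    An illustrative version of the frequency-domain objective described above: squared response error in the passband, stopband energy, and the deviation from the half-power response at the quadrature frequency π/2. The band edges, weights, and test filters below are assumptions, not the paper's values.

```python
import numpy as np

def qmf_objective(h, wp=0.4 * np.pi, ws=0.6 * np.pi, ngrid=512):
    """Toy QMF prototype cost: passband + stopband + quadrature-frequency error."""
    w = np.linspace(0, np.pi, ngrid)
    E = np.exp(-1j * np.outer(w, np.arange(len(h))))
    H = np.abs(E @ h)                               # magnitude response on the grid
    phi_p = np.mean((H[w <= wp] - 1.0) ** 2)        # passband deviation from unity
    phi_s = np.mean(H[w >= ws] ** 2)                # stopband energy
    Hq = np.abs(np.exp(-1j * (np.pi / 2) * np.arange(len(h))) @ h)
    phi_t = (Hq - 1.0 / np.sqrt(2)) ** 2            # power-complementarity at pi/2
    return phi_p + phi_s + phi_t

# a crude windowed-sinc halfband lowpass should score far better than noise
n = np.arange(-16, 17)
lp = np.hamming(33) * np.sinc(n / 2) / 2
obj_lp = qmf_objective(lp)
```

    The hybrid method in the paper minimizes a cost of this general shape with a Lagrange-multiplier step wrapped inside the nature-inspired search.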

  8. Optimally recovering rate variation information from genomes and sequences: pattern filtering.

    PubMed

    Lake, J A

    1998-09-01

    Nucleotide substitution rates vary at different positions within genes and genomes, but rates are difficult to estimate, because they are masked by the stochastic nature of substitutions. In this paper, a linear method, pattern filtering, is described which can optimally separate the signals (related to substitution rates or to other measures of sequence change) from stochastic noise. Pattern filtering promises to be useful in both genomic and molecular evolution studies. In an example using mitochondrial genomes, it is shown that pattern filtering can reveal coding and non-coding regions without the need for prior identification of reading frames or other knowledge of the sequence and promises to be an important tool for genomic analysis. In a second example, it is shown that pattern filtering allows one to classify sites on the basis of an estimator of substitution rates. Using elongation factor EF-1 alpha sequences, it is shown that the fastest sites favor archaea as the sister taxon of eukaryotes, whereas the slower sites support the eocyte prokaryotes as the sister taxon of eukaryotes, suggesting that the former result is an artifact of "long branch attraction." PMID:9729887

  9. Optimization of single-step tapering amplitude and energy detuning for high-gain FELs

    NASA Astrophysics Data System (ADS)

    Li, He-Ting; Jia, Qi-Ka

    2015-01-01

    We put forward a method to optimize the single-step tapering amplitude of the undulator strength and the initial energy detuning of the electron beam to maximize the saturation power of high gain free-electron lasers (FELs), based on the physics of longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the Linac Coherent Light Source and the TESLA Test Facility.

  10. Novel tools for stepping source brachytherapy treatment planning: Enhanced geometrical optimization and interactive inverse planning

    SciTech Connect

    Dinkla, Anna M.; Laarse, Rob van der; Koedooder, Kees; Kok, H. Petra; Wieringen, Niek van; Pieters, Bradley R.; Bel, Arjan

    2015-01-15

    Purpose: Dose optimization for stepping source brachytherapy can nowadays be performed using automated inverse algorithms. Although much quicker than graphical optimization, an experienced treatment planner is required for both methods. With automated inverse algorithms, the procedure to achieve the desired dose distribution is often based on trial-and-error. Methods: A new approach for stepping source prostate brachytherapy treatment planning was developed as a quick and user-friendly alternative. This approach consists of the combined use of two novel tools: Enhanced geometrical optimization (EGO) and interactive inverse planning (IIP). EGO is an extended version of the common geometrical optimization method and is applied to create a dose distribution as homogeneous as possible. With the second tool, IIP, this dose distribution is tailored to a specific patient anatomy by interactively changing the highest and lowest dose on the contours. Results: The combined use of EGO–IIP was evaluated on 24 prostate cancer patients, by having an inexperienced user create treatment plans, compliant to clinical dose objectives. This user was able to create dose plans of 24 patients in an average time of 4.4 min/patient. An experienced treatment planner without extensive training in EGO–IIP also created 24 plans. The resulting dose-volume histogram parameters were comparable to the clinical plans and showed high conformance to clinical standards. Conclusions: Even for an inexperienced user, treatment planning with EGO–IIP for stepping source prostate brachytherapy is feasible as an alternative to current optimization algorithms, offering speed, simplicity for the user, and local control of the dose levels.

  11. Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario

    NASA Astrophysics Data System (ADS)

    Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.

    2009-12-01

    Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.

  12. Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation

    NASA Astrophysics Data System (ADS)

    Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao

    2015-12-01

    Spatial join among Web Feature Services (WFS) is time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. Through recursive partitioning, the data skew of sub-areas is exploited to reduce data transmission using spatial semi-joins. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
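    The profitability test can be sketched with a toy semi-join decision: ship the bounding boxes (MBRs) of the local side only if the estimated filtering rate, the fraction of remote features pruned because their MBR intersects no local MBR, clears a threshold. The threshold and sampling scheme below are illustrative assumptions.

```python
def intersects(a, b):
    """Axis-aligned MBR overlap test; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def filtering_rate(local_mbrs, remote_mbrs):
    """Fraction of remote objects pruned by the local MBRs (the semi-join)."""
    kept = sum(any(intersects(r, l) for l in local_mbrs) for r in remote_mbrs)
    return 1 - kept / len(remote_mbrs)

def use_semi_join(local_mbrs, remote_sample, threshold=0.3):
    """Estimate on a sample of remote MBRs; profitable if enough is pruned."""
    return filtering_rate(local_mbrs, remote_sample) >= threshold

local = [(0, 0, 1, 1)]
remote = [(0.5, 0.5, 2, 2), (3, 3, 4, 4), (5, 5, 6, 6), (0.2, 0.2, 0.4, 0.4)]
rate = filtering_rate(local, remote)
```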

  13. Transdermal film-loaded finasteride microplates to enhance drug skin permeation: Two-step optimization study.

    PubMed

    Ahmed, Tarek A; El-Say, Khalid M

    2016-06-10

    The goal was to develop an optimized transdermal finasteride (FNS) film loaded with drug microplates (MIC), utilizing two-step optimization, to reduce the dosing frequency and the inconsistency in gastrointestinal absorption. First, a 3-level factorial design was implemented to prepare optimized FNS-MIC of minimum particle size. Second, a Box-Behnken design matrix was used to develop an optimized transdermal FNS-MIC film. Interaction among MIC components was studied using physicochemical characterization tools. Film components, namely hydroxypropyl methyl cellulose (X1), dimethyl sulfoxide (X2) and propylene glycol (X3), were optimized for their effects on the film thickness (Y1) and elongation percent (Y2), and on FNS steady state flux (Y3), permeability coefficient (Y4), and diffusion coefficient (Y5) following ex-vivo permeation through rat skin. Morphology of the optimized MIC and transdermal film was also investigated. Results revealed that stabilizer concentration and anti-solvent percent significantly affected MIC formulation. Optimized FNS-MIC of particle size 0.93 μm were successfully prepared, with no interaction observed among their components. An enhancement in the aqueous solubility of FNS-MIC of more than 23% was achieved. All the studied variables, most of their interactions, and the quadratic effects significantly affected the studied responses (Y1-Y5). Morphological observation showed non-spherical short rods and flake-like small plates homogeneously distributed in the optimized transdermal film. The ex-vivo study showed enhanced FNS permeation from the MIC-loaded film compared with the film containing pure drug. Thus, MIC is a successful technique to enhance the aqueous solubility and skin permeation of poorly water-soluble drugs, especially when loaded into transdermal films.
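    The second optimization step uses a Box-Behnken design, whose standard construction (each pair of factors at the ±1 corners, the rest at 0, plus replicated center points) can be generated directly; this sketches the textbook design in coded units, not the paper's specific run table.

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Box-Behnken design for k factors in coded units (-1, 0, +1)."""
    runs = []
    for i, j in combinations(range(k), 2):      # every pair of factors
        for a, b in product((-1, 1), repeat=2): # +/-1 corners for that pair
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(center_points)]  # replicated centers
    return runs

design = box_behnken(3)   # 3 factors (e.g. X1, X2, X3) -> 15 runs
```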

  14. Optimized particle-mesh Ewald/multiple-time step integration for molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Batcho, Paul F.; Case, David A.; Schlick, Tamar

    2001-09-01

    We develop an efficient multiple time step (MTS) force splitting scheme for biological applications in the AMBER program in the context of the particle-mesh Ewald (PME) algorithm. Our method applies a symmetric Trotter factorization of the Liouville operator based on the position-Verlet scheme to Newtonian and Langevin dynamics. Following a brief review of the MTS and PME algorithms, we discuss performance speedup and the force balancing involved to maximize accuracy, maintain long-time stability, and accelerate computational times. Compared to prior MTS efforts in the context of the AMBER program, advances are possible by optimizing PME parameters for MTS applications and by using the position-Verlet, rather than velocity-Verlet, scheme for the inner loop. Moreover, ideas from the Langevin/MTS algorithm LN are applied to Newtonian formulations here. The algorithm's performance is optimized and tested on water, solvated DNA, and solvated protein systems. We find CPU speedup ratios of over 3 for Newtonian formulations when compared to a 1 fs single-step Verlet algorithm using outer time steps of 6 fs in a three-class splitting scheme; accurate conservation of energies is demonstrated over simulations of length several hundred ps. With modest Langevin forces, we obtain stable trajectories for outer time steps up to 12 fs and corresponding speedup ratios approaching 5. We end by suggesting that modified Ewald formulations, using tailored alternatives to the Gaussian screening functions for the Coulombic terms, may allow larger time steps and thus further speedups for both Newtonian and Langevin protocols; such developments are reported separately.
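    The force-splitting idea can be sketched with an impulse-style MTS integrator using the position-Verlet inner loop the paper advocates; the fast/slow split here is a toy pair of springs, standing in for the short-range/PME split, and all step sizes are illustrative.

```python
def mts_position_verlet(x, v, f_fast, f_slow, dt_outer, n_inner, n_steps, m=1.0):
    """Impulse (r-RESPA-flavoured) multiple time step integrator:
    slow force applied as half-kicks at outer-step boundaries,
    fast force integrated with an inner position-Verlet loop."""
    dt_inner = dt_outer / n_inner
    for _ in range(n_steps):
        v += 0.5 * dt_outer * f_slow(x) / m      # slow half-kick
        for _ in range(n_inner):                 # fast inner loop (position Verlet)
            x += 0.5 * dt_inner * v
            v += dt_inner * f_fast(x) / m
            x += 0.5 * dt_inner * v
        v += 0.5 * dt_outer * f_slow(x) / m      # slow half-kick
    return x, v

# toy split: stiff "bonded" spring (fast) plus soft "nonbonded" spring (slow)
k_fast, k_slow = 100.0, 1.0
x1, v1 = mts_position_verlet(1.0, 0.0,
                             lambda x: -k_fast * x, lambda x: -k_slow * x,
                             dt_outer=0.06, n_inner=12, n_steps=500)
```

    Long-time stability shows up as bounded energy error; pushing `dt_outer` toward half the fast period triggers the resonance artifacts that limit outer step sizes in practice.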

  15. Optimized model of oriented-line-target detection using vertical and horizontal filters

    NASA Astrophysics Data System (ADS)

    Westland, Stephen; Foster, David H.

    1995-08-01

    A line-element target differing sufficiently in orientation from a background of line elements can be visually detected easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from (1) two classes of anisotropic filters (vertical and horizontal), (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required for providing an accurate description of psychophysical data on orientation increment thresholds.
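    The parameter search can be sketched with a minimal simulated-annealing loop of the kind the model fitting describes; the quadratic toy objective below is an assumed stand-in for the discrepancy between model thresholds and psychophysical data, and the cooling schedule is illustrative.

```python
import numpy as np

def anneal(cost, x0, step=0.5, t0=1.0, n=2000, seed=0):
    """Minimal simulated annealing: Gaussian proposals, Metropolis acceptance,
    linear cooling; returns the best parameter vector found and its cost."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    c = cost(x)
    best_x, best_c = x.copy(), c
    for k in range(n):
        t = t0 * (1 - k / n) + 1e-9                      # cooling schedule
        cand = x + rng.normal(0, step, x.shape)          # random perturbation
        cc = cost(cand)
        if cc < c or rng.random() < np.exp((c - cc) / t):  # Metropolis rule
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x.copy(), c
    return best_x, best_c

# toy 2-parameter fit with known optimum at (3, -1)
best_x, best_c = anneal(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0])
```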

  16. Facile, green and clean one-step synthesis of carbon dots from wool: Application as a sensor for glyphosate detection based on the inner filter effect.

    PubMed

    Wang, Long; Bi, Yidan; Hou, Juan; Li, Huiyu; Xu, Yuan; Wang, Bo; Ding, Hong; Ding, Lan

    2016-11-01

    In this work, we reported a green route for the fabrication of fluorescent carbon dots (CDs). Wool, a nontoxic, natural raw material, was chosen as the precursor to prepare CDs via a one-step microwave-assisted pyrolysis process. Compared with previously reported methods for preparation of CDs based on biomass materials, this method was simple, facile and free of any additives, such as acids, bases, or salts, which avoids the complicated post-treatment process to purify the CDs. The CDs have a high quantum yield (16.3%) and their fluorescence could be quenched by silver nanoparticles (AgNPs) based on the inner filter effect (IFE). The presence of glyphosate could induce the aggregation of AgNPs and thus result in the fluorescence recovery of the quenched CDs. Based on this phenomenon, we constructed a fluorescence system (CDs/AgNPs) for determination of glyphosate. Under the optimized conditions, the fluorescence intensity of the CDs/AgNPs system was proportional to the concentration of glyphosate in the range of 0.025-2.5 μg mL(-1), with a detection limit of 12 ng mL(-1). Furthermore, the established method has been successfully used for glyphosate detection in cereal samples with satisfactory results. PMID:27591613

  17. Facile, green and clean one-step synthesis of carbon dots from wool: Application as a sensor for glyphosate detection based on the inner filter effect.

    PubMed

    Wang, Long; Bi, Yidan; Hou, Juan; Li, Huiyu; Xu, Yuan; Wang, Bo; Ding, Hong; Ding, Lan

    2016-11-01

    In this work, we reported a green route for the fabrication of fluorescent carbon dots (CDs). Wool, a nontoxic, natural raw material, was chosen as the precursor to prepare CDs via a one-step microwave-assisted pyrolysis process. Compared with previously reported methods for preparation of CDs based on biomass materials, this method was simple, facile and free of any additives, such as acids, bases, or salts, which avoids the complicated post-treatment process to purify the CDs. The CDs have a high quantum yield (16.3%) and their fluorescence could be quenched by silver nanoparticles (AgNPs) based on the inner filter effect (IFE). The presence of glyphosate could induce the aggregation of AgNPs and thus result in the fluorescence recovery of the quenched CDs. Based on this phenomenon, we constructed a fluorescence system (CDs/AgNPs) for determination of glyphosate. Under the optimized conditions, the fluorescence intensity of the CDs/AgNPs system was proportional to the concentration of glyphosate in the range of 0.025-2.5 μg mL(-1), with a detection limit of 12 ng mL(-1). Furthermore, the established method has been successfully used for glyphosate detection in cereal samples with satisfactory results.
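    The quantitative side of such a sensor is a linear calibration and a 3σ detection limit, which can be sketched with invented numbers; only the concentration range below mirrors the abstract, and the responses and blank noise are hypothetical.

```python
import numpy as np

# Hypothetical calibration: fluorescence recovery vs glyphosate concentration
conc = np.array([0.025, 0.25, 0.5, 1.0, 1.5, 2.5])      # ug/mL (reported range)
resp = np.array([0.021, 0.19, 0.41, 0.78, 1.22, 1.99])  # invented responses
slope, intercept = np.polyfit(conc, resp, 1)             # least-squares line
sigma_blank = 0.003                                      # assumed blank noise sd
lod = 3 * sigma_blank / slope                            # 3-sigma detection limit
```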

  18. Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2016-04-01

    This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator under an additive Gaussian noise assumption. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed based on spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, using the coherence magnitudes as weights, the mean of each cluster is estimated using a weighted mean to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method performs on par with or better than the non-local interferometric SAR (NL-InSAR) method.
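    The LMMSE core of PLOW can be sketched on synthetic 1D patches: estimate the patch mean and clean covariance from noisy data, then apply the Wiener shrinkage x̂ = μ + Cx(Cx + σ²I)⁻¹(y − μ). The coherence-based clustering and weighting of the modified method are omitted, and the low-rank patch model below is an assumption for illustration.

```python
import numpy as np

def lmmse_denoise(patches, sigma):
    """LMMSE (Wiener) patch estimate with moments estimated from noisy patches."""
    mu = patches.mean(axis=0)
    Y = patches - mu
    d = patches.shape[1]
    Cy = Y.T @ Y / len(patches)
    Cx = Cy - sigma ** 2 * np.eye(d)         # clean covariance estimate
    w_, V = np.linalg.eigh(Cx)               # project onto PSD matrices,
    Cx = (V * np.clip(w_, 0, None)) @ V.T    # since estimation may make it indefinite
    W = Cx @ np.linalg.inv(Cx + sigma ** 2 * np.eye(d))
    return mu + Y @ W.T

rng = np.random.default_rng(0)
d, m, sigma = 16, 2000, 0.5
t = np.linspace(0, 1, d)
basis = np.vstack([np.ones(d), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
clean = rng.standard_normal((m, 3)) @ basis   # smooth low-rank "image" patches
noisy = clean + rng.normal(0, sigma, clean.shape)
den = lmmse_denoise(noisy, sigma)
```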

  19. In vivo ultrasound biomicroscopy of skin: spectral system characteristics and inverse filtering optimization.

    PubMed

    Vogt, Michael; Ermert, Helmut

    2007-08-01

    High-frequency ultrasound (HFUS) in the 20 MHz to 100 MHz range has to meet the conflicting requirements of good spatial resolution and high penetration depth for in vivo ultrasound biomicroscopy (UBM) of skin. The attenuation of water, which serves as the sound propagation medium between the single-element transducers and the skin, becomes very pronounced with increasing frequency. Furthermore, the spectra of acquired radio frequency (rf) echo signals change over depth because of the diffracted sound field characteristics. The reduction of the system's center frequency and bandwidth causes a significant loss of spatial resolution over depth. In this paper, the spectral characteristics of HFUS imaging systems and the potential of inverse echo signal filtering for the optimization of pulse-echo measurements are analyzed and validated. A Gaussian model of the system's transfer function, which takes into account the frequency-dependent attenuation of the water path, was developed. Predictions of system performance are derived from this model and compared with measurement results. The design of a HFUS skin imaging system with a 100 MHz range transducer and broadband driving electronics is discussed. A time-variant filter for inverse rf echo signal filtering was designed to compensate for the system's depth-dependent imaging properties. Results of in vivo measurements are shown and discussed. PMID:17703658
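The idea of compensating a depth-dependent, Gaussian-modeled transfer function by regularized inverse filtering can be sketched in the frequency domain. The center frequencies, bandwidths, and regularization constant below are illustrative assumptions, not the paper's measured values:

```python
import numpy as np

# Gaussian model of the depth-dependent system transfer function:
# water-path attenuation lowers the center frequency and narrows the
# bandwidth with depth (all parameters illustrative, not measured).
fs = 400e6                            # sampling rate, Hz
f = np.fft.rfftfreq(1024, 1.0 / fs)   # frequency axis up to 200 MHz

def gaussian_tf(fc, bw):
    return np.exp(-0.5 * ((f - fc) / bw) ** 2)

H_shallow = gaussian_tf(75e6, 20e6)   # near-focus response (reference)
H_deep = gaussian_tf(55e6, 12e6)      # response after water attenuation

# Regularized (Wiener-type) inverse filter that restores the deep
# spectrum toward the shallow reference; eps limits noise blow-up
# where H_deep is small.
eps = 1e-2
G = H_shallow * np.conj(H_deep) / (np.abs(H_deep) ** 2 + eps)
restored = np.abs(G * H_deep)

print(np.mean(np.abs(restored - H_shallow)),
      np.mean(np.abs(H_deep - H_shallow)))
```

A time-variant filter as in the paper would recompute such a correction per depth window; the regularization term is what keeps the inverse from amplifying noise outside the usable band.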

  20. Optimization of Signal Decomposition Matched Filtering (SDMF) for Improved Detection of Copy-Number Variations.

    PubMed

    Stamoulis, Catherine; Betensky, Rebecca A

    2016-01-01

    We aim to improve the performance of the previously proposed signal decomposition matched filtering (SDMF) method [26] for the detection of copy-number variations (CNVs) in the human genome. Through simulations, we show that the modified SDMF is robust even at high noise levels and outperforms the original SDMF method, which indirectly depends on CNV frequency. Simulations are also used to develop a systematic approach for selecting relevant parameter thresholds in order to optimize sensitivity, specificity and computational efficiency. We apply the modified method to array CGH data from normal samples in The Cancer Genome Atlas (TCGA) and compare detected CNVs to those estimated using circular binary segmentation (CBS) [19], a hidden Markov model (HMM)-based approach [11], and a subset of CNVs in the Database of Genomic Variants. We show that a substantial number of previously identified CNVs are detected by the optimized SDMF, which also outperforms the other two methods. PMID:27295643
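The core matched-filtering step, correlating a copy-number profile against a template of the expected CNV shape, can be sketched on simulated data. This is a generic matched filter, not the full SDMF pipeline; the amplitude, noise level, and probe positions are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated array-CGH log2-ratio profile: zero baseline plus one
# copy-number gain of amplitude 0.5 spanning probes 400-459.
n = 1000
signal = rng.normal(0.0, 0.2, n)
signal[400:460] += 0.5

# Matched filter: slide a unit-energy rectangular template (the
# expected CNV shape) along the profile and take inner products.
width = 60
template = np.ones(width) / np.sqrt(width)
score = np.correlate(signal, template, mode="valid")

est_start = int(np.argmax(score))
print("detected CNV start near probe", est_start)
```

The matched filter concentrates the segment's energy into one peak, which is why detection remains possible well below the per-probe noise floor; threshold selection on the score is where the systematic parameter search described above comes in.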

  1. Development of a reliable alkaline wastewater treatment process: optimization of the pre-treatment step.

    PubMed

    Prisciandaro, M; Mazziotti di Celso, G; Vegliò, F

    2005-12-01

    Alkaline waters produced by caprolactam plants polymerizing the fibres of nylon-6 are characterized by very high alkalinity, salinity and COD values, in addition to the presence of recalcitrant organic molecules. These characteristics make alkaline wastewaters very difficult to treat, so the development of a suitable treatment sequence is of great interest. The proposed general process consists of three main steps: first, a pre-treatment for the acidification of the polluted stream; second, a successive extraction of the bio-recalcitrant compound (cyclohexanecarboxysulphonic acid, CECS); and finally, a biological treatment. In particular, this paper deals with the pre-treatment step: an acidification process using sulphuric acid with the concomitant precipitation of black slurries in the presence of different substances, such as solvents, CaCl2, bentonite, and several flocculants and coagulants. The aim of this study is to establish an experimental procedure that minimizes fouling problems during sludge filtration. The use of additives like bentonite seems to give the best results, because it allows good COD reduction and yields a filterable precipitate, which avoids excessive fouling of the experimental apparatus. PMID:16293280

  2. Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing

    NASA Astrophysics Data System (ADS)

    Cox, Mitchell A.

    2015-10-01

    The Large Hadron Collider at CERN generates enormous amounts of raw data, presenting a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several ARM System-on-Chip devices in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high-throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
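Optimal Filtering reconstructs the pulse amplitude as a weighted sum of the digitized samples, with weights chosen to minimize the noise variance subject to unit gain on the known pulse shape. A minimal sketch, assuming an illustrative pulse shape, white noise, and an invented amplitude (not the actual Tile Calorimeter constants):

```python
import numpy as np

# Optimal Filtering reconstructs the pulse amplitude as a weighted sum
# A = sum_i a_i * s_i of the digitized samples, with weights chosen to
# minimize the noise variance subject to unit gain on the known pulse
# shape g. Pulse shape, noise model, and amplitude here are illustrative.
g = np.array([0.05, 0.35, 1.00, 0.55, 0.20, 0.05])  # normalized pulse shape
C = np.eye(len(g))                                  # noise covariance (white)

# Lagrange-multiplier solution for amplitude-only weights:
# a = C^-1 g / (g^T C^-1 g), so that a.g = 1 (unit gain on the pulse).
Cinv_g = np.linalg.solve(C, g)
a = Cinv_g / (g @ Cinv_g)

rng = np.random.default_rng(2)
true_A = 250.0                                      # "true" amplitude (ADC counts)
samples = true_A * g + rng.normal(0, 2.0, len(g))   # noisy digitized samples
print("reconstructed amplitude:", a @ samples)
```

Because the weights are precomputed, the per-pulse work is a single dot product, which is what makes the algorithm attractive for DSPs, FPGAs, or the ARM clusters proposed here.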

  3. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually detected in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920

  5. A simple procedure eliminating multiple optimization steps required in developing multiplex PCR reactions

    SciTech Connect

    Grondin, V.; Roskey, M.; Klinger, K.; Shuber, T.

    1994-09-01

    The PCR technique is one of the most powerful tools in modern molecular genetics and has achieved widespread use in the analysis of genetic diseases. Typically, a region of interest is amplified from genomic DNA or cDNA and examined by various methods of analysis for mutations or polymorphisms. For small genes and transcripts, amplification of a single, small region of DNA is sufficient for analysis. However, when analyzing large genes and transcripts, multiple PCRs may be required to identify the specific mutation or polymorphism of interest. Ever since it was shown that PCR can simultaneously amplify multiple loci in the human dystrophin gene, multiplex PCR has been established as a general technique. The properties of multiplex PCR make it a useful tool and preferable to simultaneous uniplex PCR in many instances. However, the steps for developing a multiplex PCR can be laborious, with significant difficulty in achieving equimolar amounts of several different amplicons. We have developed a simple method of primer design that has enabled us to eliminate a number of the standard optimization steps required in developing a multiplex PCR. Sequence-specific oligonucleotide pairs were synthesized for the simultaneous amplification of multiple exons within the CFTR gene. A common non-complementary 20-nucleotide sequence was attached to each primer, thus creating a mixture of primer pairs all containing a universal primer sequence. Multiplex PCR reactions were carried out containing target DNA, a mixture of several chimeric primer pairs, and primers complementary to only the universal portion of the chimeric primers. Following optimization of conditions for the universal primer, only limited optimization was needed for successful multiplex PCR. In contrast, significant optimization of the PCR conditions was needed when pairs of sequence-specific primers were used together without the universal sequence.

  6. Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.

    PubMed

    Girling, Alan J; Hemming, Karla

    2016-06-15

    In stepped cluster designs the intervention is introduced into some (or all) clusters at different times and persists until the end of the study. Instances include traditional parallel cluster designs and the more recent stepped-wedge designs. We consider the precision offered by such designs under mixed-effects models with fixed time and random subject and cluster effects (including interactions with time), and explore the optimal choice of uptake times. The results apply both to cross-sectional studies where new subjects are observed at each time-point, and longitudinal studies with repeat observations on the same subjects. The efficiency of the design is expressed in terms of a 'cluster-mean correlation' which carries information about the dependency-structure of the data, and two design coefficients which reflect the pattern of uptake-times. In cross-sectional studies the cluster-mean correlation combines information about the cluster-size and the intra-cluster correlation coefficient. A formula is given for the 'design effect' in both cross-sectional and longitudinal studies. An algorithm for optimising the choice of uptake times is described and specific results obtained for the best balanced stepped designs. In large studies we show that the best design is a hybrid mixture of parallel and stepped-wedge components, with the proportion of stepped wedge clusters equal to the cluster-mean correlation. The impact of prior uncertainty in the cluster-mean correlation is considered by simulation. Some specific hybrid designs are proposed for consideration when the cluster-mean correlation cannot be reliably estimated, using a minimax principle to ensure acceptable performance across the whole range of unknown values. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:26748662
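A small numeric sketch of the large-study hybrid rule stated above. The cluster-mean correlation formula used here is a standard cross-sectional form assumed for illustration, not quoted from the paper:

```python
# Cluster-mean correlation for a cross-sectional design, combining the
# cluster size m and the intra-cluster correlation rho. The formula is
# a standard form assumed here for illustration, not quoted from the paper.
def cluster_mean_correlation(m, rho):
    return m * rho / (1 + (m - 1) * rho)

# Large-study hybrid rule from the abstract: make the proportion of
# stepped-wedge clusters equal to the cluster-mean correlation.
m, rho = 50, 0.05
R = cluster_mean_correlation(m, rho)
print(f"cluster-mean correlation = {R:.2f} -> "
      f"{100 * R:.0f}% stepped-wedge, {100 * (1 - R):.0f}% parallel clusters")
```

The point of the rule is that as clusters get larger or more correlated, R grows toward 1 and the optimal hybrid shifts from a mostly parallel design toward a mostly stepped-wedge one.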

  7. Ultra-Compact Broadband High-Spurious Suppression Bandpass Filter Using Double Split-end Stepped Impedance Resonators

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop; Wollack, Ed; Papapolymerou, John; Laskar, Joy

    2005-01-01

    We propose an ultra-compact, single-layer, spurious-suppression bandpass filter design with the following benefits: 1) the effective coupling area can be increased with no fabrication limitation and no effect on the spurious response; 2) two fundamental poles are introduced to suppress spurs; 3) the filter can be designed with up to 30% bandwidth; 4) the filter length is at most half that of the conventional filter; 5) spurious modes are suppressed up to seven times the fundamental frequency; and 6) it uses only one layer of metallization, which minimizes the fabrication cost.

  8. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is then to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
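The tuner-selection idea, searching over parameter subsets of sensor dimension for the one that minimizes full-state estimation error, can be sketched with a toy static least-squares stand-in for the Kalman filter. The matrices, noise levels, and brute-force search below are all invented for illustration; the paper uses a multi-variable iterative search and a true Kalman filter:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Toy under-determined setup: 4 health parameters but only 2 sensors.
n_params, n_sensors = 4, 2
H = rng.normal(size=(n_sensors, n_params))   # sensor influence matrix
true_x = rng.normal(size=n_params)           # true degradation state
noise_sd = 0.01

def subset_mse(idx, trials=200):
    """Estimate only the tuners in idx by least squares and measure the
    mean-squared error over the full health-parameter state."""
    Hs = H[:, list(idx)]
    errs = []
    for _ in range(trials):
        y = H @ true_x + rng.normal(0.0, noise_sd, n_sensors)
        x_hat = np.zeros(n_params)
        x_hat[list(idx)] = np.linalg.lstsq(Hs, y, rcond=None)[0]
        errs.append(np.mean((x_hat - true_x) ** 2))
    return np.mean(errs)

# Brute-force search over all tuner subsets of sensor dimension,
# standing in for the paper's multi-variable iterative search.
best = min(itertools.combinations(range(n_params), n_sensors), key=subset_mse)
print("best tuner subset:", best)
```

The essential structure carries over: the estimable subset has the dimension of the sensor vector, and the selection criterion is the error over all parameters of interest, not just the estimated ones.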

  9. Graphics-processor-unit-based parallelization of optimized baseline wander filtering algorithms for long-term electrocardiography.

    PubMed

    Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf

    2015-06-01

    Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processor unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite and infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed with autoregressive-modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving-median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in the case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 megasamples that is filtered as an entire unit, wavelet filtering of a seven-day high-resolution ECG is computed in less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, and the computational burden can be strongly reduced.
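One of the compared filters, the moving median, can be sketched in a few lines. The sampling rate, window length, and synthetic ECG below are illustrative stand-ins; the study itself used Physionet ECGs with autoregressive-modeled real baseline wander:

```python
import numpy as np

def moving_median_baseline(ecg, window):
    """Estimate baseline wander as a moving median (reflect padding at
    the edges) and subtract it from the ECG."""
    pad = window // 2
    padded = np.pad(ecg, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, window)
    baseline = np.median(windows, axis=1)[: len(ecg)]
    return ecg - baseline, baseline

fs = 500                                        # Hz, hypothetical rate
t = np.arange(0, 10, 1 / fs)
wander = 0.5 * np.sin(2 * np.pi * 0.3 * t)      # slow baseline drift
spikes = (np.arange(len(t)) % fs == 0).astype(float)  # crude 1 Hz "R peaks"
ecg = spikes + wander

clean, baseline = moving_median_baseline(ecg, window=301)
# The residual drift should be far smaller than the original wander.
print(np.std(clean - spikes), np.std(wander))
```

The median's robustness is what lets it track the baseline through QRS complexes without smearing them into the estimate, and also what makes it expensive enough that GPU parallelization pays off on week-long recordings.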

  11. Optimized design of high-order series coupler Yb3+/Er3+ codoped phosphate glass microring resonator filters

    NASA Astrophysics Data System (ADS)

    Galatus, Ramona; Valles, Juan

    2016-04-01

    An optimized geometry based on high-order active microring resonators (MRRs) is proposed. The solution provides both filtering and amplification of the signal at around 1534 nm (pump at 976 nm). The structure under analysis is a cross-grid resonator with laterally, series-coupled triple microrings of 15.35 μm radius, in a co-propagation topology between signal and pump (commonly termed an add-drop filter).

  12. Optimization of a blanching step to maximize sulforaphane synthesis in broccoli florets.

    PubMed

    Pérez, Carmen; Barrientos, Herna; Román, Juan; Mahn, Andrea

    2014-02-15

    A blanching step was designed to favor sulforaphane synthesis in broccoli. Blanching was optimised through a central composite design, and the effects of temperature (50-70 °C) and immersion time in water (5-15 min) on the content of total glucosinolates, glucoraphanin and sulforaphane, and on myrosinase activity, were determined. Results were analysed by ANOVA and the optimal condition was determined through response surface methodology. Temperatures between 50 and 60 °C significantly increased sulforaphane content (p<0.05), whilst blanching at 70 and 74 °C significantly diminished this content compared to fresh broccoli. The optimal blanching conditions given by the statistical model were immersion in water at 57 °C for 13 min, coinciding with the minimum glucosinolate and glucoraphanin contents and with the maximum myrosinase activity. Under the optimal conditions, the predicted response of 4.0 μmol sulforaphane/g dry matter was confirmed experimentally. This value represents a 237% increase with respect to the fresh vegetable.
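The response-surface step can be sketched as fitting a second-order model over a central-composite-style design and solving for its stationary point. Here the "measurements" are generated from an assumed quadratic whose maximum is placed at the reported optimum (57 °C, 13 min), so the fit simply recovers it; the design points and surface are invented, not the paper's data:

```python
import numpy as np

# Assumed (for illustration only) true quadratic surface with its
# maximum at 57 C / 13 min, the optimum reported in the abstract.
def true_surface(T, t):
    return 4.0 - 0.004 * (T - 57) ** 2 - 0.02 * (t - 13) ** 2

# Central-composite-style design: factorial, axial, and center points.
T = np.array([50.0, 50, 70, 70, 46, 74, 60, 60, 60, 60])
t = np.array([5.0, 15, 5, 15, 10, 10, 3, 17, 10, 10])
y = true_surface(T, t)

# Fit y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t by least squares.
X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic: solve grad = 0.
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt_T, opt_t = np.linalg.solve(A, -b[1:3])
print(f"fitted optimum near T = {opt_T:.1f} C, t = {opt_t:.1f} min")
```

With real, noisy responses the same algebra applies; one would additionally check (via the Hessian signs) that the stationary point is a maximum inside the design region.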

  13. Parameter optimization for image denoising based on block matching and 3D collaborative filtering

    NASA Astrophysics Data System (ADS)

    Pedada, Ramu; Kugu, Emin; Li, Jiang; Yue, Zhanfeng; Shen, Yuzhong

    2009-02-01

    Clinical MRI images are generally corrupted by random noise during acquisition with blurred subtle structure features. Many denoising methods have been proposed to remove noise from corrupted images at the expense of distorted structure features. Therefore, there is always compromise between removing noise and preserving structure information for denoising methods. For a specific denoising method, it is crucial to tune it so that the best tradeoff can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and that of structure information preserved in the denoised image. Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize the cost functions by modifying parameters associated with the denoising methods. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to enhance the image denoising results using block matching and 3D collaborative filtering. Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of noise removal and structure information preservation.
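The multi-objective view can be sketched by computing the Pareto (non-dominated) set of parameter settings under two competing costs. The cost curves below are hypothetical stand-ins for the paper's noise-removal and structure-preservation functions, and a direct filter for the exact front is used instead of SPEA2's evolutionary approximation:

```python
import numpy as np

# Toy two-objective tuning problem: each filter setting h has a
# noise-removal cost and a structure-distortion cost (hypothetical
# monotone trade-off curves standing in for the paper's cost functions).
h = np.linspace(0.1, 3.0, 30)
family_a = np.column_stack([1.0 / h, 0.2 * h ** 2])  # candidate settings
family_b = family_a + 0.5                            # uniformly worse family
costs = np.vstack([family_a, family_b])

def pareto_front(costs):
    """Indices of non-dominated rows when minimizing both columns --
    the set that a multi-objective optimizer such as SPEA2 approximates."""
    front = []
    for i, c in enumerate(costs):
        dominated = np.any(np.all(costs <= c, axis=1) &
                           np.any(costs < c, axis=1))
        if not dominated:
            front.append(i)
    return front

front = pareto_front(costs)
print(len(front), "of", len(costs), "settings are Pareto-optimal")
```

In the denoising application, each "setting" would be a full parameter vector for the block-matching filter, and the final choice is made among the front's members according to the desired noise/structure trade-off.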

  14. Reliably Detecting Clinically Important Variants Requires Both Combined Variant Calls and Optimized Filtering Strategies

    PubMed Central

    Field, Matthew A.; Cho, Vicky

    2015-01-01

    A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable, however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high-quality ‘genome in a bottle’ reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining the variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or the union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, variant caller filter optimization, and combined variant calls in order to effectively minimize false-negative variants. While implementing such features increases complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
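Once calls are keyed consistently, the combination strategy reduces to set operations: the intersection favors specificity, the union favors sensitivity. A minimal sketch with invented call sets (variants keyed by chromosome, position, reference and alternate allele):

```python
# Illustrative call sets from three hypothetical variant callers;
# each variant is keyed by (chromosome, position, ref, alt).
caller_a = {("chr7", 140453136, "A", "T"), ("chr1", 115256529, "T", "C"),
            ("chr12", 25398284, "C", "A")}
caller_b = {("chr7", 140453136, "A", "T"), ("chr12", 25398284, "C", "A"),
            ("chr3", 178936091, "G", "A")}
caller_c = {("chr7", 140453136, "A", "T"), ("chr3", 178936091, "G", "A")}

# Intersection: only unanimous calls survive (high specificity).
high_specificity = caller_a & caller_b & caller_c
# Union: any caller's call is kept (high sensitivity).
high_sensitivity = caller_a | caller_b | caller_c

print(len(high_specificity), "consensus vs", len(high_sensitivity), "total calls")
```

A real pipeline operates on normalized VCF records rather than tuples, and intermediate schemes (e.g. calls made by at least two of three tools) sit between the two extremes shown here.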

  15. Optimization of hydrolysis and volatile fatty acids production from sugarcane filter cake: Effects of urea supplementation and sodium hydroxide pretreatment.

    PubMed

    Janke, Leandro; Leite, Athaydes; Batista, Karla; Weinrich, Sören; Sträuber, Heike; Nikolausz, Marcell; Nelles, Michael; Stinner, Walter

    2016-01-01

    Different methods for optimizing the anaerobic digestion (AD) of sugarcane filter cake (FC), with a special focus on volatile fatty acid (VFA) production, were studied. Sodium hydroxide (NaOH) pretreatment at different concentrations was investigated in batch experiments, and the cumulative methane yields were fitted to a dual-pool two-step model to provide an initial assessment of AD. The effects of nitrogen supplementation in the form of urea and of NaOH pretreatment on VFA production were evaluated in a semi-continuously operated reactor as well. The results indicated that higher NaOH concentrations during pretreatment accelerated the AD process and increased methane production in batch experiments. Nitrogen supplementation resulted in a VFA loss due to methane formation, by buffering the pH value at nearly neutral conditions (∼ 6.7). However, the alkaline pretreatment with 6 g NaOH/100 g FCFM improved both the COD solubilization and the VFA yield by 37%, the latter consisting mainly of n-butyric and acetic acids.

  16. Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging

    NASA Astrophysics Data System (ADS)

    Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.

    2012-04-01

    Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employs a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation studies and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained using the algorithm, a promising result for our ongoing application of DTS to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.

  17. Optimizing the Advanced Ceramic Material (ACM) for Diesel Particulate Filter Applications

    SciTech Connect

    Dillon, Heather E.; Stewart, Mark L.; Maupin, Gary D.; Gallant, Thomas R.; Li, Cheng; Mao, Frank H.; Pyzik, Aleksander J.; Ramanathan, Ravi

    2006-10-02

    This paper describes the application of pore-scale filtration simulations to the ‘Advanced Ceramic Material’ (ACM) developed by Dow Automotive for use in advanced diesel particulate filters. The application required the generation of a three-dimensional substrate geometry to provide the boundary conditions for the flow model. An innovative stochastic modeling technique was applied, matching the chord length distribution and the porosity profile of the material. Additional experimental validation was provided by the single-channel experimental apparatus. Results show that the stochastic reconstruction techniques provide flexibility and appropriate accuracy for the modeling efforts. Early optimization efforts imply that needle length may provide a mechanism for adjusting the performance of the ACM for DPF applications. New techniques have been developed to visualize soot deposition in both traditional and new DPF substrate materials. Loading experiments have been conducted on a variety of single-channel DPF substrates to develop a deeper understanding of soot penetration and soot deposition characteristics, and to confirm modeling results.

  18. Theoretical optimal modulation frequencies for scattering parameter estimation and ballistic photon filtering in diffusing media.

    PubMed

    Panigrahi, Swapnesh; Fade, Julien; Ramachandran, Hema; Alouini, Mehdi

    2016-07-11

    The efficiency of using intensity modulated light for the estimation of scattering properties of a turbid medium and for ballistic photon discrimination is theoretically quantified in this article. Using the diffusion model for modulated photon transport and considering a noisy quadrature demodulation scheme, the minimum-variance bounds on estimation of parameters of interest are analytically derived and analyzed. The existence of a variance-minimizing optimal modulation frequency is shown and its evolution with the properties of the intervening medium is derived and studied. Furthermore, a metric is defined to quantify the efficiency of ballistic photon filtering which may be sought when imaging through turbid media. The analytical derivation of this metric shows that the minimum modulation frequency required to attain significant ballistic discrimination depends only on the reduced scattering coefficient of the medium in a linear fashion for a highly scattering medium.

  19. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, and questions of global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
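A common way Kalman filters are used in forecast post-processing is to track a slowly varying systematic bias with a scalar random-walk model and subtract it from subsequent forecasts. The following sketch is a generic illustration with invented noise variances and synthetic data, not the new filter versions studied in the paper:

```python
import numpy as np

def kalman_bias_correction(forecasts, observations, q=1e-4, r=0.25):
    """Sequentially estimate the systematic forecast bias with a scalar
    Kalman filter (random-walk bias model; q and r are illustrative
    process- and observation-noise variances)."""
    bias, P = 0.0, 1.0
    corrected = []
    for f, o in zip(forecasts, observations):
        P += q                            # predict: bias evolves as a random walk
        corrected.append(f - bias)        # correct with the current estimate
        K = P / (P + r)                   # Kalman gain
        bias += K * ((f - o) - bias)      # update with the observed error
        P *= 1 - K
    return np.array(corrected)

rng = np.random.default_rng(4)
truth = 10 + np.sin(np.linspace(0, 6, 200))          # "observed" values
forecasts = truth + 1.5 + rng.normal(0, 0.5, 200)    # biased model output
corrected = kalman_bias_correction(forecasts, truth)
print(np.mean(np.abs(forecasts - truth)),
      np.mean(np.abs(corrected[50:] - truth[50:])))
```

Each correction uses only errors observed up to that point, so the scheme runs in real time alongside the forecast system; richer state vectors (e.g. polynomial bias models) follow the same predict-update pattern.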

  20. An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning

    2015-08-01

    An optimized strain demodulation method based on a piezoelectric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. By driving the PZT continuously in a parallel processing mode, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and the driving voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the compensated test values and the theoretical values is less than 1 pm/με.

  1. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: validation on phantoms.

    PubMed

    Bai, Mei; Chen, Jiuhong; Raupach, Rainer; Suess, Christoph; Tao, Ying; Peng, Mingchen

    2009-01-01

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P > 0.05), whereas noise was reduced (P < 0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P > 0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  2. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  3. Optimal synthesis of double-phase computer generated holograms using a phase-only spatial light modulator with grating filter.

    PubMed

    Song, Hoon; Sung, Geeyoung; Choi, Sujin; Won, Kanghee; Lee, Hong-Seok; Kim, Hwi

    2012-12-31

We propose an optical system for synthesizing double-phase complex computer-generated holograms using a phase-only spatial light modulator and a phase grating filter. Two separated areas of the phase-only spatial light modulator are optically superposed by a 4-f configuration with an optimally designed grating filter to synthesize arbitrary complex optical field distributions. The tolerances related to misalignment factors are analyzed, and the optimal synthesis method of double-phase computer-generated holograms is described. PMID:23388811

  4. Optimizing Simplified One-Step Chemical Models for High Speed Reacting Flows

    NASA Astrophysics Data System (ADS)

    Ozgen, Alp; Houim, Ryan W.; Oran, Elaine S.

    2015-11-01

One of the most important and difficult parts of constructing a multidimensional numerical simulation of a hydrocarbon reacting flow is finding a reliable and affordable model of the chemical and diffusive properties. Full detailed chemical models of these systems contain too many reactions and chemical species to be practical for realistic scenarios. The objective of our work is to create the simplest model possible that can reproduce the time-dependence of the energy input and the conversion from fuel to products. To that end, we are developing a procedure for optimizing the parameters in the most simplified ``one-step'' model. An important requirement of this model is that it reproduces known flame and detonation properties. Multidimensional numerical simulations using the new model are compared to deflagration-to-detonation experiments in channels containing ethylene and oxygen.
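The calibration idea — tuning one-step parameters so the simplified model reproduces a known combustion property — can be sketched with a toy constant-volume reactor in which the pre-exponential factor is bisected to match a target ignition delay. All parameter values, the 400 K ignition criterion, and the reactor model are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def ignition_delay(A, Ea=120e3, T0=1200.0, q=2.0e6, cv=1000.0,
                   R=8.314, dt=1e-6, t_max=0.1):
    """Ignition delay of a toy constant-volume, one-step reactor.

    dY/dt = -A * Y * exp(-Ea/(R*T)),  dT/dt = (q/cv) * (-dY/dt)
    Delay is defined here as the time of a 400 K temperature rise.
    (All values are illustrative, not taken from the abstract.)
    """
    Y, T, t = 1.0, T0, 0.0
    while t < t_max:
        w = A * Y * np.exp(-Ea / (R * T))         # one-step reaction rate
        Y, T, t = Y - dt * w, T + dt * (q / cv) * w, t + dt
        if T - T0 >= 400.0:
            return t
    return np.inf                                  # no ignition within t_max

def calibrate_A(target_delay, lo=1e3, hi=1e9, iters=60):
    """Bisect on A (geometrically): delay decreases monotonically with A."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                     # geometric midpoint
        if ignition_delay(mid) > target_delay:
            lo = mid                               # too slow -> increase A
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

The same fit-a-global-property loop generalizes to matching flame speeds or detonation cell sizes with a full flow solver in place of the toy reactor.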

  5. Optimization of a one-step heat-inducible in vivo mini DNA vector production system.

    PubMed

    Nafissi, Nafiseh; Sum, Chi Hong; Wettig, Shawn; Slavcev, Roderick A

    2014-01-01

    While safer than their viral counterparts, conventional circular covalently closed (CCC) plasmid DNA vectors offer a limited safety profile. They often result in the transfer of unwanted prokaryotic sequences, antibiotic resistance genes, and bacterial origins of replication that may lead to unwanted immunostimulatory responses. Furthermore, such vectors may impart the potential for chromosomal integration, thus potentiating oncogenesis. Linear covalently closed (LCC), bacterial sequence free DNA vectors have shown promising clinical improvements in vitro and in vivo. However, the generation of such minivectors has been limited by in vitro enzymatic reactions hindering their downstream application in clinical trials. We previously characterized an in vivo temperature-inducible expression system, governed by the phage λ pL promoter and regulated by the thermolabile λ CI[Ts]857 repressor to produce recombinant protelomerase enzymes in E. coli. In this expression system, induction of recombinant protelomerase was achieved by increasing culture temperature above the 37°C threshold temperature. Overexpression of protelomerase led to enzymatic reactions, acting on genetically engineered multi-target sites called "Super Sequences" that serve to convert conventional CCC plasmid DNA into LCC DNA minivectors. Temperature up-shift, however, can result in intracellular stress responses and may alter plasmid replication rates; both of which may be detrimental to LCC minivector production. We sought to optimize our one-step in vivo DNA minivector production system under various induction schedules in combination with genetic modifications influencing plasmid replication, processing rates, and cellular heat stress responses. We assessed different culture growth techniques, growth media compositions, heat induction scheduling and temperature, induction duration, post-induction temperature, and E. coli genetic background to improve the productivity and scalability of our system

  6. Selecting the optimal anti-aliasing filter for multichannel biosignal acquisition intended for inter-signal phase shift analysis.

    PubMed

    Keresnyei, Róbert; Megyeri, Péter; Zidarics, Zoltán; Hejjel, László

    2015-01-01

The availability of microcomputer-based portable devices facilitates high-volume multichannel biosignal acquisition and the analysis of instantaneous oscillations and inter-signal temporal correlations. These new, non-invasively obtained parameters can have considerable prognostic or diagnostic roles. The present study investigates the inherent signal delay of the obligatory anti-aliasing filters. One cycle of each of the 8 electrocardiogram (ECG) and 4 photoplethysmogram signals from healthy volunteers, or artificially synthesised series, was passed through 100-80-60-40-20 Hz, 2-4-6-8th order Bessel and Butterworth filters digitally synthesized by bilinear transformation, which resulted in a negligible error in signal delay compared to the mathematical models of the filters' impulse and step responses. The investigated filters have signal delays as diverse as 2-46 ms depending on the filter parameters and the signal slew rate, which is difficult to predict in biological systems and thus difficult to compensate for. Its magnitude can be comparable to the examined phase shifts, deteriorating the accuracy of the measurement. In conclusion, identical or very similar anti-aliasing filters with lower orders and higher corner frequencies, oversampling, and digital low-pass filtering are recommended for biosignal acquisition intended for inter-signal phase shift analysis. PMID:25514627
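The filter-dependent delay discussed in the abstract can be explored with SciPy's standard filter-design routines. The sketch below (sampling rate, corner frequency, and order are assumed values, not the study's exact settings) compares the in-band group delay of bilinear-transform Bessel and Butterworth low-pass filters:

```python
import numpy as np
from scipy import signal

fs = 1000.0                      # sampling rate, Hz (assumed for illustration)
fc, order = 40.0, 4              # corner frequency and filter order

# digital filters via the bilinear transform (scipy applies it internally)
filters = {
    "bessel":      signal.bessel(order, fc, fs=fs, norm="mag"),  # -3 dB at fc
    "butterworth": signal.butter(order, fc, fs=fs),
}

for name, (b, a) in filters.items():
    w, gd = signal.group_delay((b, a), w=512, fs=fs)
    # average in-band delay (below fc), converted from samples to ms
    inband = gd[w < fc].mean() / fs * 1e3
    print(f"{name}: ~{inband:.1f} ms in-band group delay")
```

The Bessel design shows a nearly constant in-band delay while the Butterworth delay peaks near the corner frequency — exactly the signal-dependent, hard-to-compensate behaviour the study warns about when dissimilar filters are used on channels whose relative phase is then compared.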

  7. Fast Automatic Step Size Estimation for Gradient Descent Optimization of Image Registration.

    PubMed

    Qiao, Yuchuan; van Lew, Baldur; Lelieveldt, Boudewijn P F; Staring, Marius

    2016-02-01

Fast automatic image registration is an important prerequisite for image-guided clinical procedures. However, due to the large number of voxels in an image and the complexity of registration algorithms, this process is often very slow. Stochastic gradient descent is a powerful method to iteratively solve the registration problem, but relies for convergence on a proper selection of the optimization step size. This selection is difficult to perform manually, since it depends on the input data, similarity measure and transformation model. The Adaptive Stochastic Gradient Descent (ASGD) method is an automatic approach, but it comes at a high computational cost. In this paper, we propose a new computationally efficient method (fast ASGD) to automatically determine the step size for gradient descent methods, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is derived. While ASGD has quadratic complexity with respect to the transformation parameters, fast ASGD only has linear complexity. Extensive validation has been performed on different datasets with different modalities, inter/intra subjects, different similarity measures and transformation models. For all experiments, we obtained similar accuracy as ASGD. Moreover, the estimation time of fast ASGD is reduced to a very small value, from 40 s to less than 1 s when the number of parameters is 10^5, almost 40 times faster. Depending on the registration settings, the total registration time is reduced by a factor of 2.5-7× for the experiments in this paper.
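The paper's actual derivation is not reproduced here, but the underlying idea — scale the step so that the voxel displacements it induces stay below a user-chosen limit — can be illustrated generically. The random Jacobian, the gradient, and the 95th-percentile rule below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup (illustrative, not the paper's method): J maps a parameter
# update to per-voxel displacements; g is the current cost gradient
n_voxels, n_params = 5000, 200
J = rng.normal(size=(n_voxels, n_params))
g = rng.normal(size=n_params)

delta = 1.0                      # maximum tolerated voxel displacement (e.g., mm)

# displacement each voxel would experience per unit step along -g
d = np.abs(J @ g)

# choose the step so that 95% of voxels move by less than delta
step = delta / np.percentile(d, 95)
```

The expensive part in practice is characterizing the displacement distribution; the paper's contribution is doing that with linear rather than quadratic cost in the number of transform parameters.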

  8. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.

  9. Vander Lugt Filter Optimization For The Metrology In Industrial And Scientific Research

    NASA Astrophysics Data System (ADS)

    Vukicevic, D.; Demoli, N.; Bistricic, L.

    1980-05-01

Holographic metrology of the cavitation bubble field has been successfully applied, inter alia, to the determination of its statistical properties. The spatial distribution of bubble diameters is deduced through measurement of each bubble diameter in the reconstructed field. The data reduction procedure becomes seriously tedious when the inspected volume and its cross-section are in the realistic range usually seen even in the smallest hydrodynamic tunnels. For the development of a hybrid opto-digital set-up, which distinguishes bubbles of a specific size from other bubbles, and from other particles in the inspected volume, it is of major importance to synthesize the appropriate SMF (Spatially Matched Filter) for the FPC (Fourier Plane Correlator). The large dynamic range of the bubble signal spectrum and the limited dynamic range of the photoemulsion combine into a weighting function by which the signal spectrum is multiplied in the holographically synthesized SMF. This weighting function is, to some extent, controlled by the selection of exposure and photo-processing parameters. The coherent optical correlation technique is used for the investigation and measurement of surface wear. The head surface wear of a tappet from an IC (internal combustion) engine exhibits exponential decay of the optical cross-correlation between its initial and intermittent phases, in relation to the number of wear cycles. The Fourier spectrum of the tappet surface shows, in addition to a very pronounced DC component, an even spatial distribution. Nevertheless, the weighting function inherent to SMF synthesis must be controlled. Dimensional and statistical metrology of the granular structure of the photosphere of the solar disc is performed quickly and easily through optical Fourier analysis. Through appropriate synthesis of an optimally weighted SMF, the temporal behaviour and decay half-times of the solar granular structure are obtained. In order to achieve acceptable control of the weighted SMF through the holographic procedure, a detailed

  10. Optimization of leaf margins for lung stereotactic body radiotherapy using a flattening filter-free beam

    SciTech Connect

    Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi

    2015-05-15

Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). Forty gray in four fractions was prescribed to PTV D95. For PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased (“V” shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared

  11. Geometric optimization of a step bearing for a hydrodynamically levitated centrifugal blood pump for the reduction of hemolysis.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2013-09-01

    A hydrodynamically levitated centrifugal blood pump with a semi-open impeller has been developed for mechanical circulatory assistance. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of the present study is to optimize the geometric configuration of the hydrodynamic step bearing in order to reduce hemolysis by expansion of the bearing gap. First, a numerical analysis of the step bearing, based on lubrication theory, was performed to determine the optimal design. Second, in order to assess the accuracy of the numerical analysis, the hydrodynamic forces calculated in the numerical analysis were compared with those obtained in an actual measurement test using impellers having step lengths of 0%, 33%, and 67% of the vane length. Finally, a bearing gap measurement test and a hemolysis test were performed. As a result, the numerical analysis revealed that the hydrodynamic force was the largest when the step length was approximately 70%. The hydrodynamic force calculated in the numerical analysis was approximately equivalent to that obtained in the measurement test. In the measurement test and the hemolysis test, the blood pump having a step length of 67% achieved the maximum bearing gap and reduced hemolysis, as compared with the pumps having step lengths of 0% and 33%. It was confirmed that the numerical analysis of the step bearing was effective, and the developed blood pump having a step length of approximately 70% was found to be a suitable configuration for the reduction of hemolysis.
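The role of step length in lubrication theory can be illustrated with the classical 1-D Rayleigh step bearing, for which the peak pressure has a closed form; scanning the step-length fraction locates the load-maximizing geometry. The parameter values below are illustrative and this is not the authors' pump model:

```python
import numpy as np

def step_bearing_load(frac, B=10e-3, h1=30e-6, h2=15e-6, mu=3.5e-3, U=5.0):
    """Load capacity (per unit width) of a 1-D Rayleigh step bearing.

    frac is the step (inlet) length as a fraction of the total pad length B.
    Lubrication theory gives a triangular pressure profile peaking at the step:
        p_max = 6*mu*U*(h1 - h2)*B1*B2 / (B1*h2**3 + B2*h1**3)
    Parameter values are illustrative, not the pump in the abstract.
    """
    B1, B2 = frac * B, (1.0 - frac) * B
    p_max = 6 * mu * U * (h1 - h2) * B1 * B2 / (B1 * h2**3 + B2 * h1**3)
    return 0.5 * p_max * B          # area under the triangular pressure profile

fracs = np.linspace(0.01, 0.99, 981)
loads = np.array([step_bearing_load(f) for f in fracs])
best = fracs[np.argmax(loads)]      # load-maximizing step fraction
```

For the assumed 2:1 film-thickness ratio the optimum falls near a step fraction of 0.74 — the same neighbourhood as the roughly 70% step length reported in the abstract, though the pump's 2-D geometry is of course more involved.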

  12. Multisource modeling of flattening filter free (FFF) beam and the optimization of model parameters

    SciTech Connect

    Cho, Woong; Kielar, Kayla N.; Mok, Ed; Xing Lei; Park, Jeong-Hoon; Jung, Won-Gyun; Suh, Tae-Suk

    2011-04-15

Purpose: With the introduction of flattening filter free (FFF) linear accelerators to radiation oncology, new analytical source models for a FFF beam applicable to current treatment planning systems is needed. In this work, a multisource model for the FFF beam and the optimization of involved model parameters were designed. Methods: The model is based on a previous three source model proposed by Yang et al. [''A three-source model for the calculation of head scatter factors,'' Med. Phys. 29, 2024-2033 (2002)]. An off axis ratio (OAR) of photon fluence was introduced to the primary source term to generate cone shaped profiles. The parameters of the source model were determined from measured head scatter factors using a line search optimization technique. The OAR of the photon fluence was determined from a measured dose profile of a 40x40 cm² field size with the same optimization technique, but a new method to acquire gradient terms for OARs was developed to enhance the speed of the optimization process. The improved model was validated with measured dose profiles from 3x3 to 40x40 cm² field sizes at 6 and 10 MV from a TrueBeam STx linear accelerator. Furthermore, planar dose distributions for clinically used radiation fields were also calculated and compared to measurements using a 2D array detector using the gamma index method. Results: All dose values for the calculated profiles agreed with the measured dose profiles within 0.5% at 6 and 10 MV beams, except for some low dose regions for larger field sizes. A slight overestimation was seen in the lower penumbra region near the field edge for the large field sizes by 1%-4%. The planar dose calculations showed comparable passing rates (>98%) when the criterion of the gamma index method was selected to be 3%/3 mm. Conclusions: The developed source model showed good agreements between measured and calculated dose distributions. The model is easily applicable to any other linear accelerator using FFF beams.

  13. A Dedicated Inferior Vena Cava Filter Service Line: How to Optimize Your Practice.

    PubMed

    Karp, Jennifer K; Desai, Kush R; Salem, Riad; Ryu, Robert K; Lewandowski, Robert J

    2016-06-01

Despite the increased placement of retrievable inferior vena cava filters (rIVCFs), efforts to remove these devices are not commensurate. The majority of rIVCFs are left in place beyond their indicated usage, and often are retained permanently. With a growing understanding of the clinical issues associated with these devices, the United States Food and Drug Administration (FDA) has prompted clinicians to remove rIVCFs when they are no longer indicated. However, major obstacles to filter retrieval exist, chief among them poor clinical follow-up. The establishment of a dedicated IVC filter service line, or clinic, has been shown to improve filter retrieval rates. The choice between particular devices, specifically permanent versus retrievable filters, is improved by prospective physician consultation. In this article, the rationale behind a dedicated IVC filter service line is presented, the structure and activities of the authors' IVC filter clinic are described, and supporting data are provided where appropriate.

  14. Robust independent component analysis by iterative maximization of the kurtosis contrast with algebraic optimal step size.

    PubMed

    Zarzoso, Vicente; Comon, Pierre

    2010-02-01

    Independent component analysis (ICA) aims at decomposing an observed random vector into statistically independent variables. Deflation-based implementations, such as the popular one-unit FastICA algorithm and its variants, extract the independent components one after another. A novel method for deflationary ICA, referred to as RobustICA, is put forward in this paper. This simple technique consists of performing exact line search optimization of the kurtosis contrast function. The step size leading to the global maximum of the contrast along the search direction is found among the roots of a fourth-degree polynomial. This polynomial rooting can be performed algebraically, and thus at low cost, at each iteration. Among other practical benefits, RobustICA can avoid prewhitening and deals with real- and complex-valued mixtures of possibly noncircular sources alike. The absence of prewhitening improves asymptotic performance. The algorithm is robust to local extrema and shows a very high convergence speed in terms of the computational cost required to reach a given source extraction quality, particularly for short data records. These features are demonstrated by a comparative numerical analysis on synthetic data. RobustICA's capabilities in processing real-world data involving noncircular complex strongly super-Gaussian sources are illustrated by the biomedical problem of atrial activity (AA) extraction in atrial fibrillation (AF) electrocardiograms (ECGs), where it outperforms an alternative ICA-based technique.
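The algebraic step-size idea can be sketched for real-valued data: along a search direction the kurtosis contrast is a quartic divided by a squared quadratic in the step mu, so its stationary points are roots of a degree-4 polynomial. This is a minimal sketch of that idea, not the authors' RobustICA implementation:

```python
import numpy as np

def kurt(y):
    # sample kurtosis contrast: E[y^4] / E[y^2]^2 - 3
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3.0

def optimal_step(w, g, X):
    """Exact line search of the kurtosis contrast along direction g.

    With y(mu) = X @ (w + mu*g) = a + mu*b, E[y^2] is quadratic in mu and
    E[y^4] is quartic, so the stationary points of the contrast are roots
    of a degree-4 polynomial that can be found algebraically.
    (Real-valued sketch of the idea in the abstract, not the authors' code.)
    """
    a, b = X @ w, X @ g
    # coefficients (highest power first) of E[y^2] and E[y^4] in mu
    p2 = np.array([np.mean(b**2), 2 * np.mean(a * b), np.mean(a**2)])
    p4 = np.array([np.mean(b**4), 4 * np.mean(a * b**3),
                   6 * np.mean(a**2 * b**2), 4 * np.mean(a**3 * b),
                   np.mean(a**4)])
    # d/dmu of p4/p2^2 vanishes where p4'*p2 - 2*p4*p2' = 0
    num = np.polysub(np.polymul(np.polyder(p4), p2),
                     2 * np.polymul(p4, np.polyder(p2)))
    num = num[1:]                       # the degree-5 term cancels analytically
    roots = np.roots(num)
    mus = roots[roots.imag == 0].real   # keep the real stationary points
    # return the step with the largest absolute kurtosis
    return max(mus, key=lambda m: abs(kurt(a + m * b)))
```

One such algebraic line search per iteration, followed by deflation, is the core of the method; no prewhitening of `X` is required.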

  15. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
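The SVD-based dimensionality reduction described above can be sketched as follows. The influence matrix and its dimensions are invented for illustration; the report's actual tuner-design procedure, which targets specific variables of interest, is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy influence matrix (illustrative): how n_health unmeasurable health
# parameters shift the engine outputs of interest
m_outputs, n_health, k_tuners = 12, 10, 3
H = rng.normal(size=(m_outputs, n_health))

# the k leading singular directions capture, in a least-squares sense,
# the output variation induced by health-parameter changes
U, s, Vt = np.linalg.svd(H, full_matrices=False)
V_star = Vt[:k_tuners]           # tuning directions in health-parameter space

# represent an arbitrary health-parameter vector h by k tuning parameters q,
# few enough to be estimated by a Kalman filter alongside the engine states
h = rng.normal(size=n_health)
q = V_star @ h                   # reduced-order "tuner" vector
h_approx = V_star.T @ q          # least-squares reconstruction of h
```

By the Eckart-Young theorem, truncating the SVD at rank k is the best rank-k approximation of the influence matrix in the Frobenius norm, which is what makes a low-dimensional tuner vector a principled stand-in for the full health-parameter set.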

  16. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  17. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  18. Autostereoscopic display with 60 ray directions using LCD with optimized color filter layout

    NASA Astrophysics Data System (ADS)

    Koike, Takafumi; Oikawa, Michio; Utsugi, Kei; Kobayashi, Miho; Yamasaki, Masami

    2007-02-01

We developed a mobile-size integral videography (IV) display that reproduces 60 ray directions. IV is an autostereoscopic video image technique based on integral photography (IP). The IV display consists of a 2-D display and a microlens array. The maximal spatial frequency (MSF) and the number of rays appear to be the most important factors in producing realistic autostereoscopic images. Lens pitch usually determines the MSF of IV displays. The lens pitch and pixel density of the 2-D display determine the number of rays it reproduces. There is a trade-off between the lens pitch and the pixel density. The shape of an elemental image determines the shape of the area of view. We developed an IV display based on these relationships. The IV display consists of a 5-inch 900-dpi liquid crystal display (LCD) and a microlens array. The IV display has 60 ray directions with 4 vertical rays and a maximum of 18 horizontal rays. We optimized the color filter on the LCD to reproduce 60 rays. The resolution of the display is 256x192, and the viewing angle is 30 degrees. These parameters are sufficient for mobile game use. Users can interact with the IV display by using a control pad.

  19. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, either for a single-stage mismatched filter or for a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter length, the greater the number of multipliers, and hence the greater the logic resources used in the FPGAs, which often becomes a design challenge for system-on-chip (SoC) requirements. This requirement for multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the logic used in the FPGA for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between iterations, causing different clusterings of the weights; it may even happen that a smaller number of multipliers and a shorter filter provide a better PSR.
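The clustering idea can be sketched end-to-end: design a mismatched filter for Barker-13 (here by least squares against a delta, a simpler stand-in for the LP design in the abstract), then quantize its taps with 1-D k-means so the FPGA needs only one multiplier per centroid. The filter length and cluster count are illustrative assumptions:

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain 1-D k-means (Lloyd's algorithm) on the tap weights."""
    rng = np.random.default_rng(seed)
    c = rng.choice(x, size=k, replace=False)       # initial centroids
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(lab == j):
                c[j] = x[lab == j].mean()
    return c[lab]                                  # each tap -> its centroid

def psr_db(y):
    """Peak-to-(max-)sidelobe ratio of a compressed pulse, in dB."""
    p = int(np.argmax(np.abs(y)))
    side = np.max(np.delete(np.abs(y), p))
    return 20.0 * np.log10(np.abs(y[p]) / side)

# Barker-13 code; mismatched filter of 3x the code length
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
L = 39
A = np.zeros((L + len(code) - 1, L))
for j in range(L):
    A[j:j + len(code), j] = code                   # convolution matrix: y = A @ h
d = np.zeros(A.shape[0]); d[A.shape[0] // 2] = 1.0 # desired delta output
h = np.linalg.lstsq(A, d, rcond=None)[0]           # least-squares mismatched filter

h_q = kmeans_1d(h, k=8)                            # 8 distinct values -> 8 multipliers
print(f"PSR, full-precision taps: {psr_db(A @ h):.1f} dB")
print(f"PSR, 8 clustered taps:    {psr_db(A @ h_q):.1f} dB")
```

The quantized filter trades a few dB of PSR for a large reduction in distinct multiplier values, which is the hardware saving the abstract describes.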

  20. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  1. Optimization of synthesis and peptization steps to obtain iron oxide nanoparticles with high energy dissipation rates

    NASA Astrophysics Data System (ADS)

    Mérida, Fernando; Chiu-Lam, Andreina; Bohórquez, Ana C.; Maldonado-Camargo, Lorena; Pérez, María-Eglée; Pericchi, Luis; Torres-Lugo, Madeline; Rinaldi, Carlos

    2015-11-01

    Magnetic Fluid Hyperthermia (MFH) uses heat generated by magnetic nanoparticles exposed to alternating magnetic fields to cause a temperature increase in tumors to the hyperthermia range (43-47 °C), inducing apoptotic cancer cell death. As with all cancer nanomedicines, one of the most significant challenges with MFH is achieving high nanoparticle accumulation at the tumor site. This motivates development of synthesis strategies that maximize the rate of energy dissipation of iron oxide magnetic nanoparticles, preferable due to their intrinsic biocompatibility. This has led to development of synthesis strategies that, although attractive from the point of view of chemical elegance, may not be suitable for scale-up to quantities necessary for clinical use. On the other hand, to date the aqueous co-precipitation synthesis, which readily yields gram quantities of nanoparticles, has only been reported to yield sufficiently high specific absorption rates after laborious size selective fractionation. This work focuses on improvements to the aqueous co-precipitation of iron oxide nanoparticles to increase the specific absorption rate (SAR), by optimizing synthesis conditions and the subsequent peptization step. Heating efficiencies up to 1048 W/gFe (36.5 kA/m, 341 kHz; ILP=2.3 nH m2 kg-1) were obtained, which represent one of the highest values reported for iron oxide particles synthesized by co-precipitation without size-selective fractionation. Furthermore, particles reached SAR values of up to 719 W/gFe (36.5 kA/m, 341 kHz; ILP=1.6 nH m2 kg-1) when in a solid matrix, demonstrating they were capable of significant rates of energy dissipation even when restricted from physical rotation. Reduction in energy dissipation rate due to immobilization has been identified as an obstacle to clinical translation of MFH. Hence, particles obtained with the conditions reported here have great potential for application in nanoscale thermal cancer therapy.
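    The figures of merit quoted above are related by ILP = SAR/(H²f), which makes SAR values measured under different field conditions comparable. A minimal sketch (the calorimetric heating-slope helper and its example arguments are illustrative, not the paper's data):

```python
def sar_from_heating(slope_K_per_s, c_p, m_sample, m_fe):
    """SAR from the initial heating slope of a calorimetric measurement:
    SAR = c_p * (m_sample / m_Fe) * dT/dt, in W per kg of iron."""
    return c_p * (m_sample / m_fe) * slope_K_per_s

def ilp_nH(sar_W_per_kgFe, H_A_per_m, f_Hz):
    """Intrinsic loss power ILP = SAR / (H^2 f), returned in nH m^2/kg."""
    return sar_W_per_kgFe / (H_A_per_m**2 * f_Hz) * 1e9

# Field conditions from the abstract: 36.5 kA/m, 341 kHz
print(ilp_nH(1048e3, 36.5e3, 341e3))   # ≈ 2.3 (peptized particles)
print(ilp_nH(719e3, 36.5e3, 341e3))    # ≈ 1.6 (immobilized in a solid matrix)
```

    Reproducing the abstract's ILP values from its SAR, field amplitude and frequency is a quick consistency check when comparing heating efficiencies across publications.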

  2. Toward an Optimal Position for IVC Filters: Computational Modeling of the Impact of Renal Vein Inflow

    SciTech Connect

    Wang, S L; Singer, M A

    2009-07-13

    The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. The hemodynamics of steady-state flow were examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC were observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses were observed in the vicinity of renal inflow. Spherical thrombi induced stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic, with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall, due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal placement of IVC filters in a near-juxtarenal position, with the downstream cone near the renal vein inflow, likely confers increased mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

  3. Optimization by decomposition: A step from hierarchic to non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A new, non-hierarchic decomposition is formulated for system optimization that uses system analysis, system sensitivity analysis, temporary decoupled optimizations performed in the design subspaces corresponding to the disciplines and subsystems, and a coordination optimization concerned with the redistribution of responsibility for the constraint satisfaction and design trades among the disciplines and subsystems. The approach amounts to a variation of the well-known method of subspace optimization modified so that the analysis of the entire system is eliminated from the subspace optimization and the subspace optimizations may be performed concurrently.

  4. Optimizing the anode-filter combination in the sense of image quality and average glandular dose in digital mammography

    NASA Astrophysics Data System (ADS)

    Varjonen, Mari; Strömmer, Pekka

    2008-03-01

    This paper presents optimized image quality and average glandular dose in digital mammography, and provides recommendations concerning anode-filter combinations for digital mammography based on amorphous selenium (a-Se) detector technology. The full-field digital mammography (FFDM) system based on a-Se technology, which is also the platform of a tomosynthesis prototype, was used in this study. The x-ray tube anode-filter combinations studied were tungsten (W)-rhodium (Rh) and tungsten (W)-silver (Ag). Anatomically adaptable fully automatic exposure control (AAEC) was used. The average glandular doses (AGD) were calculated using a specific program developed by Planmed, which automates the method described by Dance et al. Image quality was evaluated in two ways: a subjective image quality evaluation, and contrast and noise analysis. Using the W-Rh and W-Ag anode-filter combinations, a significantly lower average glandular dose can be achieved than with molybdenum (Mo)-Mo or Mo-Rh; dose reductions from 25% to 60% were obtained. In the future, the evaluation will concentrate on additional filter combinations and on the effect of higher tube voltages (>35 kV), which appear useful for optimizing dose in digital mammography.

  5. Near-Diffraction-Limited Operation of Step-Index Large-Mode-Area Fiber Lasers Via Gain Filtering

    SciTech Connect

    Marciante, J.R.; Roides, R.G.; Shkunov, V.V.; Rockwell, D.A.

    2010-06-04

    We present, for the first time to our knowledge, an explicit experimental comparison of beam quality in conventional and confined-gain multimode fiber lasers. In the conventional fiber laser, beam quality degrades with increasing output power. In the confined-gain fiber laser, the beam quality is good and does not degrade with output power. Gain filtering of higher-order modes in 28 μm diameter core fiber lasers is demonstrated with a beam quality of M^2 = 1.3 at all pumping levels. Theoretical modeling is shown to agree well with experimentally observed trends.

  6. Characterization and optimization of acoustic filter performance by experimental design methodology.

    PubMed

    Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M

    2005-06-20

    Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for perfusion cultures of up to 200 L/d. However, the operation of cell retention systems depends on several settings that should be adjusted according to the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time and recirculation ratio were studied using a fractional factorial 2(5-1) design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency (SE) was the primary response, and it was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operating ranges and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At a power input of 5 W, the broad range of robust, high SE performance (95% or higher) was extended to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model to long-term perfusion cultures.
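    The response-surface workflow in this record (factorial runs, second-order model, optimum search) can be sketched with ordinary least squares in place of the restricted maximum likelihood fit; the two factors, their ranges and the synthetic response below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-factor slice of the design: power input (W) and stop time (s)
power = rng.uniform(1, 9, 40)
stop = rng.uniform(0.5, 4, 40)
# Synthetic separation-efficiency response with an interior optimum plus noise
se = 96 - 0.4 * (power - 5)**2 - 1.5 * (stop - 2)**2 + rng.normal(0, 0.3, 40)

# Second-order (quadratic) response-surface model, fitted by least squares
X = np.column_stack([np.ones(40), power, stop, power * stop, power**2, stop**2])
beta, *_ = np.linalg.lstsq(X, se, rcond=None)

# Locate the optimum of the fitted surface on a grid
P, S = np.meshgrid(np.linspace(1, 9, 161), np.linspace(0.5, 4, 141))
fit = (beta[0] + beta[1] * P + beta[2] * S + beta[3] * P * S
       + beta[4] * P**2 + beta[5] * S**2)
i = np.unravel_index(np.argmax(fit), fit.shape)
print(P[i], S[i])   # recovered best settings; the true optimum here is (5 W, 2 s)
```

    A real analysis would also prune non-significant terms (the backward elimination step) before interpreting the surface.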

  7. Nature-inspired optimization of quasicrystalline arrays and all-dielectric optical filters and metamaterials

    NASA Astrophysics Data System (ADS)

    Namin, Frank Farhad A.

    (photonic resonance) and the plasmonic response of the spheres (plasmonic resonance). In particular, the couplings between the photonic and plasmonic modes are studied. In periodic arrays this coupling leads to the formation of a so-called photonic-plasmonic hybrid mode. The formation of hybrid modes is studied in quasicrystalline arrays. Quasicrystalline structures in essence possess several periodicities, which in some cases can lead to the formation of multiple hybrid modes with wider bandwidths. It is also demonstrated that the performance of these arrays can be further enhanced by employing a perturbation method. The second property considered is local field enhancement in quasicrystalline arrays of gold nanospheres. It is shown that, despite a considerably smaller filling factor, quasicrystalline arrays generate larger local field enhancements, which can be further increased by optimally placing perturbing spheres within the prototiles that comprise the aperiodic arrays. The second thrust of research in this dissertation focuses on designing all-dielectric filters and metamaterial coatings for the optical range. At higher frequencies metals tend to be lossy and are thus unsuitable for many applications; hence, dielectrics are used at optical frequencies. In particular, we focus on designing two types of structures. First, a near-perfect optical mirror is designed. The design is based on optimizing a subwavelength periodic dielectric grating to obtain appropriate effective parameters that satisfy the desired perfect-mirror condition. Second, a broadband anti-reflective all-dielectric grating with a wide field of view is designed. The second design is based on a new computationally efficient genetic algorithm (GA) optimization method which shapes the sidewalls of the grating by optimizing the roots of polynomial functions.

  8. Signal-to-Noise Enhancement Techniques for Quantum Cascade Absorption Spectrometers Employing Optimal Filtering and Other Approaches

    SciTech Connect

    Disselkamp, Robert S.; Kelly, James F.; Sams, Robert L.; Anderson, Gordon A.

    2002-09-01

    Optical feedback to the laser source in tunable diode laser spectroscopy (TDLS) is known to create intensity modulation noise due to etaloning and optical feedback (i.e., multiplicative technical noise) that usually limits the spectral signal-to-noise ratio (S/N). This large technical noise often limits absorption spectroscopy to noise floors 100-fold greater than the Poisson shot-noise limit due to fluctuations in the laser intensity. The high output powers generated by quantum cascade (QC) lasers, along with their high gain, make these injection laser systems especially susceptible to technical noise. In this article we discuss a method of using optimal filtering to reduce technical noise. We have observed S/N enhancements ranging from ≈20% to a factor of ≈50. The degree to which optimal filtering will enhance S/N depends on the similarity between the Fourier components of the technical noise and those of the signal, with lower S/N enhancements observed for more similar Fourier decompositions of the signal and technical noise. We also examined the linearity of optimally filtered spectra in both time and intensity. This was accomplished by creating a synthetic spectrum for the species being studied (CH4, N2O, CO2, H2O in ambient air) utilizing line positions and line widths with an assumed Voigt profile from a previous database (HITRAN). Agreement better than 0.036% in wavenumber, and 1.64% in intensity (up to a 260-fold intensity ratio employed), was observed. Our results suggest that rapid ex post facto digital optimal filtering can be used to enhance S/N for routine trace gas detection.
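    Optimal (Wiener-style) filtering of the kind described here weights each Fourier component by how much of it is signal versus noise. A minimal sketch on a synthetic absorption line with white technical noise (real technical noise is structured, which is exactly what changes the attainable gain):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
x = np.arange(n)
signal = np.exp(-0.5 * ((x - 2048) / 40.0)**2)   # absorption-line template
noise = rng.normal(0, 0.3, n)                    # technical noise (white here)
measured = signal + noise

# Optimal filter built from the signal template and a noise-level estimate:
# H(f) = |S(f)|^2 / (|S(f)|^2 + |N(f)|^2), applied in the Fourier domain
S2 = np.abs(np.fft.rfft(signal))**2
N2 = np.full_like(S2, 0.3**2 * n)                # flat PSD for white noise
H = S2 / (S2 + N2)
filtered = np.fft.irfft(H * np.fft.rfft(measured), n)

def snr(y):
    return y[2048] / np.std(y[:1024])            # peak over off-line rms
print(snr(measured), snr(filtered))
```

    The S/N gain shrinks as the noise spectrum starts to resemble the signal spectrum, since H(f) then attenuates signal and noise alike, matching the trend reported above.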

  9. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: Spectral optimization and preliminary phantom measurement

    SciTech Connect

    Saito, Masatoshi

    2007-11-15

    Dual-energy contrast-agent-enhanced mammography is a technique for demonstrating breast cancers obscured by the cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to a critical quantity, in this case the mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 of iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to the spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
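    The essence of the dual-energy subtraction is a weighted difference of log-transmissions in which the tissue term cancels while the iodine term, thanks to its different energy dependence across the K-edge, survives. A sketch with invented attenuation coefficients (not the paper's beam qualities):

```python
import numpy as np

# Hypothetical mass attenuation coefficients (cm^2/g) at the two effective
# energies selected by the balanced filters (values are illustrative only)
mu = {'tissue': (0.80, 0.50), 'iodine': (18.0, 32.0)}  # (low-E, high-E)

def log_signal(t_tissue, t_iodine, energy):
    """-ln transmission for tissue thickness (g/cm^2) plus iodine (mg/cm^2)."""
    return mu['tissue'][energy] * t_tissue + mu['iodine'][energy] * t_iodine * 1e-3

# Choose the weight w that cancels tissue: w = mu_tissue(low) / mu_tissue(high)
w = mu['tissue'][0] / mu['tissue'][1]

def dual_energy(t_tissue, t_iodine):
    return log_signal(t_tissue, t_iodine, 1) * w - log_signal(t_tissue, t_iodine, 0)

print(dual_energy(5.0, 0.0))   # tissue-only background cancels -> 0
print(dual_energy(5.0, 1.0))   # 1 mg/cm^2 of iodine survives the subtraction
```

    The surviving iodine signal is independent of the tissue thickness, which is what removes the cluttered background in the subtraction image.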

  10. Improvement of hemocompatibility for hydrodynamic levitation centrifugal pump by optimizing step bearings.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-01-01

    We have developed a hydrodynamically levitated centrifugal blood pump with a semi-open impeller for mechanical circulatory assist. The impeller levitates on original hydrodynamic bearings without any complicated control or sensors. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of this study was to investigate the geometric configuration of the hydrodynamic step bearing that minimizes hemolysis by expanding the bearing gap. First, we performed a numerical analysis of the step bearing based on the Reynolds equation, and measured the actual hydrodynamic force of the step bearing. Second, a bearing-gap measurement test and a hemolysis test were performed on blood pumps whose step lengths were 0%, 33% and 67% of the vane length, respectively. In the numerical analysis, the hydrodynamic force was largest when the step length was around 70% of the vane length. In the evaluation tests, the blood pump with the 67% step obtained the maximum bearing gap and improved hemolysis compared with the 0% and 33% steps. We confirmed that the numerical analysis of the step bearing worked effectively, and that the 67% step was a suitable configuration to minimize hemolysis, because it realized the largest bearing gap. PMID:22254562
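    The step-bearing analysis can be illustrated with the classical 1-D Rayleigh step solution of the Reynolds equation, which gives the hydrodynamic load in closed form; scanning the step-length fraction reproduces an optimum near 70%, consistent with the numerical result above (the geometry and normalization here are illustrative, not the pump's):

```python
import numpy as np

# 1-D Rayleigh step bearing (unit width, unit viscosity*velocity, unit length):
# inlet region of length b1 with film h1, outlet b2 = 1 - b1 with film h2.
# The Reynolds equation gives a triangular pressure profile peaking at the step.
h1, h2 = 1.866, 1.0          # classical film-thickness ratio

def load(b1):
    """Load capacity per unit width as a function of the step-length fraction."""
    b2 = 1.0 - b1
    p_step = 6 * (h1 - h2) * b1 * b2 / (b2 * h1**3 + b1 * h2**3)
    return 0.5 * p_step       # area under the triangular pressure profile

fracs = np.linspace(0.01, 0.99, 981)
best = fracs[np.argmax([load(b) for b in fracs])]
print(best)   # ≈ 0.72, i.e. a step length around 70 % of the pad maximizes force
```

    A larger hydrodynamic force at the optimal step fraction supports a wider bearing gap, which is the mechanism by which the 67% step reduced hemolysis.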

  11. Improvement of hemocompatibility for hydrodynamic levitation centrifugal pump by optimizing step bearings.

    PubMed

    Kosaka, Ryo; Yada, Toru; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-01-01

    We have developed a hydrodynamically levitated centrifugal blood pump with a semi-open impeller for mechanical circulatory assist. The impeller levitates on original hydrodynamic bearings without any complicated control or sensors. However, a narrow bearing gap has the potential to cause hemolysis. The purpose of this study was to investigate the geometric configuration of the hydrodynamic step bearing that minimizes hemolysis by expanding the bearing gap. First, we performed a numerical analysis of the step bearing based on the Reynolds equation, and measured the actual hydrodynamic force of the step bearing. Second, a bearing-gap measurement test and a hemolysis test were performed on blood pumps whose step lengths were 0%, 33% and 67% of the vane length, respectively. In the numerical analysis, the hydrodynamic force was largest when the step length was around 70% of the vane length. In the evaluation tests, the blood pump with the 67% step obtained the maximum bearing gap and improved hemolysis compared with the 0% and 33% steps. We confirmed that the numerical analysis of the step bearing worked effectively, and that the 67% step was a suitable configuration to minimize hemolysis, because it realized the largest bearing gap.

  12. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions, and a higher weight to priority zones. The objective for the monitoring network in this specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites we sought a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help reduce monitoring costs by avoiding redundancy in data acquisition. PMID:26681183
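    Step (4), choosing well positions that minimize the estimate error variance, can be sketched as a greedy search in which each candidate well is scored by the trace of the Kalman-updated covariance; the spatial covariance model and all numbers below are invented stand-ins for the geostatistical analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical prior covariance over 30 candidate well locations, built from
# an exponential spatial correlation model (a stand-in for the geostatistics)
xy = rng.uniform(0, 10, (30, 2))
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
P0 = np.exp(-d / 3.0)
r = 0.1                                   # measurement-error variance

def greedy_network(P, r, n_wells):
    """Pick wells one at a time, each minimizing the total estimate error
    variance (trace of the Kalman-updated covariance)."""
    chosen = []
    for _ in range(n_wells):
        best, best_tr = None, np.inf
        for j in range(P.shape[0]):
            if j in chosen:
                continue
            k = P[:, j] / (P[j, j] + r)   # Kalman gain for a scalar measurement
            tr = np.trace(P - np.outer(k, P[j, :]))
            if tr < best_tr:
                best, best_tr = j, tr
        chosen.append(best)
        k = P[:, best] / (P[best, best] + r)
        P = P - np.outer(k, P[best, :])   # covariance after sampling that well
    return chosen, np.trace(P)

wells, final_var = greedy_network(P0.copy(), r, 8)
print(wells, final_var)
```

    Each added well reduces the total variance by progressively less, which is why a well-chosen subset can approach the performance of the full set of pilot wells.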

  13. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions, and a higher weight to priority zones. The objective for the monitoring network in this specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites we sought a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help reduce monitoring costs by avoiding redundancy in data acquisition.

  14. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography.

    PubMed

    Comani, S; Mantini, D; Alleva, G; Di Luzio, S; Romani, G L

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have a peak-to-peak intensity comparable to the fetal QRS amplitude. Because ambient field noise is an essentially unstructured Gaussian signal with large disturbances at specific frequencies, it can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filter performance on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebyshev II-type filters with zero phase and stable impulse response; the most commonly used band-pass filters were implemented by combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals were evaluated. The study examined 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter settings may be applied to both conditions. Band-pass filters of 1.0-100 Hz and 2.0-100 Hz provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred for arrhythmic fetuses, which may present spectral components below 2 Hz. PMID:16306648
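    The key property of the filters described here is zero phase, so the fetal QRS morphology is not distorted. A compact numpy-only stand-in (a frequency-domain band-pass, which is inherently zero-phase, rather than the paper's Chebyshev II design) on synthetic data:

```python
import numpy as np

def bandpass_zero_phase(x, fs, f_lo, f_hi):
    """Zero-phase band-pass: multiply the spectrum by a real 0/1 mask, so
    there is no phase distortion of the waveform morphology."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0
    return np.fft.irfft(X, len(x))

fs = 1000.0
t = np.arange(0, 4, 1 / fs)
fetal = np.sin(2 * np.pi * 10 * t)          # stand-in for the fetal cardiac band
drift = 2 * np.sin(2 * np.pi * 0.25 * t)    # low-frequency ambient disturbance
mains = 0.5 * np.sin(2 * np.pi * 150 * t)   # high-frequency interference
clean = bandpass_zero_phase(fetal + drift + mains, fs, 1.0, 100.0)
print(np.max(np.abs(clean - fetal)))        # residual is tiny
```

    A real design would use an IIR filter applied forward and backward (or a linear-phase FIR) to achieve the same zero-phase behavior on streaming data of arbitrary length.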

  15. Optimal filter design for shielded and unshielded ambient noise reduction in fetal magnetocardiography

    NASA Astrophysics Data System (ADS)

    Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.

    2005-12-01

    The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have a peak-to-peak intensity comparable to the fetal QRS amplitude. Because ambient field noise is an essentially unstructured Gaussian signal with large disturbances at specific frequencies, it can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filter performance on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebyshev II-type filters with zero phase and stable impulse response; the most commonly used band-pass filters were implemented by combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals were evaluated. The study examined 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter settings may be applied to both conditions. Band-pass filters of 1.0-100 Hz and 2.0-100 Hz provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred for arrhythmic fetuses, which may present spectral components below 2 Hz.

  16. Identifying the preferred subset of enzymatic profiles in nonlinear kinetic metabolic models via multiobjective global optimization and Pareto filters.

    PubMed

    Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano

    2012-01-01

    Optimization models in metabolic engineering and systems biology typically focus on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon-constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes the
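    The epsilon-constraint sweep and the subsequent Pareto filter can be sketched on a toy bi-objective problem (the objectives below are invented; in the paper they would be fluxes or metabolite criteria of the GMA model solved by global optimization):

```python
import numpy as np

# Toy trade-off on a 1-D "enzyme level" grid: maximize both a synthesis-rate
# proxy f1 and a stability proxy f2 (illustrative functions only)
x = np.linspace(0.0, 2.0, 2001)
f1 = x
f2 = 1.0 - (x - 0.5)**2

# Epsilon-constraint method: maximize f1 subject to f2 >= eps, sweeping eps
pts = []
for eps in np.linspace(f2.min(), f2.max(), 25):
    feas = f2 >= eps
    i = np.argmax(np.where(feas, f1, -np.inf))
    pts.append((f1[i], f2[i]))
pts = sorted(set(pts))

def pareto_filter(points):
    """Keep only non-dominated points (both objectives to be maximized)."""
    front = []
    for a in points:
        dominated = any(b[0] >= a[0] and b[1] >= a[1] and
                        (b[0] > a[0] or b[1] > a[1]) for b in points)
        if not dominated:
            front.append(a)
    return front

front = sorted(pareto_filter(pts))
print(len(pts), len(front))   # each survivor trades f1 against f2
```

    Each epsilon value yields one candidate; the Pareto filter then discards dominated candidates, which is the narrowing-down step applied before laboratory testing.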

  17. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  18. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA, a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  19. Optimization of the filter parameters in (99m)Tc myocardial perfusion SPECT studies: the formulation of flowchart.

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Yamada, Tomoki; Kamida, Hiroki; Kunishita, Kohei; Hayashi, Yuuki; Nakajima, Tadashi; Kinuya, Seigo

    2016-06-01

    Myocardial perfusion single photon emission computed tomography (SPECT) is typically subject to variation in image quality because institutions use different acquisition protocols, image reconstruction parameters and image display settings. One of the principal image reconstruction parameters is the Butterworth filter cut-off frequency, a parameter strongly affecting the quality of myocardial images. The objective of this study was to formulate a flowchart for determining the optimal parameters of the Butterworth filter for the filtered back projection (FBP), ordered subset expectation maximization (OSEM) and collimator-detector response compensation OSEM (CDR-OSEM) methods, using a phantom-based evaluation system for myocardial images. SPECT studies were acquired for seven simulated defects where the average counts of the normal myocardial components of 45° left anterior oblique projections were approximately 10-120 counts/pixel. These SPECT images were then reconstructed by the FBP, OSEM and CDR-OSEM methods. Visual and quantitative assessments of short-axis images were performed for the defect and normal parts. Finally, we formulated a flowchart indicating the optimal image processing procedure for SPECT images. The correlation between normal myocardial counts and the optimal cut-off frequency could be represented as a regression expression with a high or medium coefficient of determination. We formulated the flowchart to optimize the image reconstruction parameters based on a comprehensive assessment, which enabled us to perform processing objectively. Furthermore, the usefulness of image reconstruction using the flowchart was demonstrated in a clinical case.
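
The Butterworth filter at the center of this flowchart weights each spatial frequency f by 1/sqrt(1 + (f/fc)^(2n)), so the cut-off frequency fc trades smoothing against resolution. A minimal sketch (the cut-off values and order below are illustrative, not the study's optimized parameters):

```python
# Butterworth magnitude response; fc and order are illustrative values only.
def butterworth(f, fc, order=8):
    return 1.0 / (1.0 + (f / fc) ** (2 * order)) ** 0.5

freqs = [i / 100 for i in range(51)]            # cycles/pixel up to Nyquist 0.5
soft = [butterworth(f, fc=0.25) for f in freqs]     # stronger smoothing
sharp = [butterworth(f, fc=0.45) for f in freqs]    # more high-frequency detail kept
```

A lower fc suppresses more high-frequency noise at the cost of resolution, which is why the optimal value depends on the normal myocardial count level.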

  20. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    SciTech Connect

    Gill, K; Aldoohan, S; Collier, J

    2014-06-01

    Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom; a radiation dose CT phantom was used to measure CTDIvol. These results were set as reference points to study the effects of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel and copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filters shape the energy spectrum, reducing the dose by 27% with no noticeable change in low-contrast detectability. Conclusion: The effective dose depends strongly on the CTDIvol, which in turn depends strongly on the beam-hardening filters. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in CT pediatric shunt scanning protocols without compromising the diagnostic value of image quality.

  1. Design and optimization of fundamental mode filters based on long-period fiber gratings

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Yang; Wei, Jin; Sheng, Yong; Ren, Nai-Fei

    2016-07-01

    A segment of long-period fiber grating (LPFG) that can selectively filter the fundamental mode in the few-mode optical fiber is proposed. By applying an appropriately chosen surrounding material and an apodized configuration of the LPFG, high fundamental-mode loss and low high-order core-mode loss can be achieved simultaneously. In addition, we propose a method of cascading LPFGs with different periods to expand the bandwidth of the mode filter. Numerical simulation shows that the operating bandwidth of the cascade structure can be as large as 23 nm even if the refractive index of the surrounding liquid varies with the environment temperature.
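
The bandwidth-widening effect of cascading can be illustrated with a toy model in which each LPFG contributes a Lorentzian loss dip at a different resonance wavelength; the cascade's transmission is the product of the individual transmissions. All numbers below are illustrative assumptions, not simulation results from the paper:

```python
# Toy model: single-grating fundamental-mode transmission with a Lorentzian
# dip; depth and width are illustrative, not fitted to the paper's LPFGs.
def lpfg_transmission(lam, lam0, depth=0.99, width=8.0):
    return 1.0 - depth / (1.0 + ((lam - lam0) / width) ** 2)

lams = [1540 + i for i in range(41)]                   # wavelengths in nm
single = [lpfg_transmission(l, 1560) for l in lams]
# Two gratings with offset resonances; transmissions multiply in cascade.
cascade = [lpfg_transmission(l, 1553) * lpfg_transmission(l, 1567) for l in lams]

def bandwidth_below(trans, thresh=0.5):
    # width (in 1-nm samples) over which the mode is strongly attenuated
    return sum(1 for t in trans if t < thresh)
```

Offsetting the two resonance wavelengths widens the band over which the fundamental mode stays strongly attenuated, which is the cascading idea in the abstract.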

  2. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments, where finite impulse response filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate-change ratios in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter performed by polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters of length N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. 
Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first
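
The polyphase identity described above can be sketched as follows; `decimate_naive` and `decimate_polyphase` are hypothetical names for the two equivalent computations (full FIR followed by downsampling versus M sub-filters built from every M-th tap):

```python
# Sketch comparing naive decimation with its polyphase rearrangement.
def fir(x, h):
    # direct-form FIR: y[n] = sum_k h[k] * x[n-k]
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def decimate_naive(x, h, M):
    # compute every output, then throw away all but every M-th one
    return fir(x, h)[::M]

def decimate_polyphase(x, h, M):
    # compute only the needed outputs, summing M sub-filters h[p::M]
    y = []
    for n in range(0, len(x), M):
        acc = 0.0
        for p in range(M):                 # polyphase branch index
            for k, tap in enumerate(h[p::M]):
                i = n - p - k * M
                if 0 <= i < len(x):
                    acc += tap * x[i]
        y.append(acc)
    return y

x = [float(i % 7) for i in range(32)]
h = [0.25, 0.5, 0.25, 0.1]
M = 4
```

Both routines produce identical outputs; the polyphase form simply never computes the samples that downsampling would discard, which is the inefficiency the abstract describes.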

  3. Optimal optical filters of fluorescence excitation and emission for poultry fecal detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Purpose: An analytic method to design excitation and emission filters of a multispectral fluorescence imaging system is proposed and was demonstrated in an application to poultry fecal inspection. Methods: A mathematical model of a multispectral imaging system is proposed and its system parameters, ...

  4. Optimization of plasma parameters with magnetic filter field and pressure to maximize H- ion density in a negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young; Chung, Kyoung-Jae; Hwang, Y. S.

    2016-02-01

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H- populations for various filter field strengths and pressures. Enhanced H- population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H- sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.

  5. Optimization of plasma parameters with magnetic filter field and pressure to maximize H⁻ ion density in a negative hydrogen ion source.

    PubMed

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young; Chung, Kyoung-Jae; Hwang, Y S

    2016-02-01

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H(-) populations for various filter field strengths and pressures. Enhanced H(-) population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H(-) sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region. PMID:26932018

  6. Optimization of a femtosecond Ti:sapphire amplifier using an acousto-optic programmable dispersive filter and a genetic algorithm.

    SciTech Connect

    Korovyanko, O. J.; Rey-de-Castro, R.; Elles, C. G.; Crowell, R. A.; Li, Y.

    2006-01-01

    The temporal output of a Ti:sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that severe mismatch between the compressor and stretcher can be compensated for in a short period of time.
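
A genetic-algorithm feedback loop of this kind can be sketched as below. The objective here is a toy stand-in for the second-harmonic feedback signal (which peaks when the spectral phase is flat); the population size, mutation scale and gene count are illustrative assumptions, not the experiment's settings:

```python
import random

random.seed(0)

# Toy stand-in for the SHG feedback: maximal when all spectral-phase
# coefficients are zero (a flat phase gives the shortest pulse).
def shg_signal(phase):
    return 1.0 / (1.0 + sum(p * p for p in phase))

def evolve(pop_size=30, n_genes=4, generations=60):
    pop = [[random.uniform(-2, 2) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=shg_signal, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fittest half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)    # crossover: average two parents
            children.append([(x + y) / 2 + random.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=shg_signal)

best = evolve()
```

In the real system the fitness evaluation is a physical measurement (the SHG signal after the AOPDF applies the candidate phase), but the selection/crossover/mutation loop has the same shape.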

  7. Two-step synthesis of per-O-acetylfuranoses: optimization and rationalization.

    PubMed

    Dureau, Rémy; Legentil, Laurent; Daniellou, Richard; Ferrières, Vincent

    2012-02-01

    A simple two-step procedure yielding peracetylated furanoses directly from free aldoses was implemented. Key steps of the method are (i) highly selective formation of per-O-(tert-butyldimethylsilyl)furanoses and (ii) their clean conversion into acetyl ones without isomerization. This approach was easily applied to galactose and structurally related carbohydrates such as arabinose, fucose, methyl galacturonate and N-acetylgalactosamine to give the corresponding peracetylated targets. The success of this procedure relied on the control of at least three parameters: (i) the tautomeric equilibrium of the starting unprotected oses, (ii) the steric hindrance of both targeted furanoses and silylating agent, and finally, (iii) the reactivity of each soft nucleophile during the protecting group interconversion.

  8. Optimization of 3D laser scanning speed by use of combined variable step

    NASA Astrophysics Data System (ADS)

    Garcia-Cruz, X. M.; Sergiyenko, O. Yu.; Tyrsa, Vera; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J. C.; Basaca-Preciado, L. C.; Mercorelli, P.

    2014-03-01

    The presented research addresses the slow operation of a 3D technical vision system (TVS) caused by a constant small scanning step; the solution is to apply a combined scanning step for the fast search of n obstacles in unknown surroundings. Such a problem is of keynote importance in automatic robot navigation. To maintain a reasonable speed, robots must detect dangerous obstacles as soon as possible, but all known scanners able to measure distances with sufficient accuracy are unable to do so in real time. So, the related technical task of scanning with variable speed and precise digital mapping only for selected spatial sectors is under consideration. A wide range of simulations in MATLAB 7.12.0 of several variants of hypothetical scenes with variable n obstacles in each scene (including variation of shapes and sizes) and scanning with incremented angle value (0.6° up to 15°) is provided. The aim of such simulation was to detect which angular values of the interval still permit getting the maximal information about obstacles without undesired time losses. Three such local maximums were obtained in simulations and then refined by application of a neural network formalism (Levenberg-Marquardt algorithm). The obtained results were in turn applied to MET (Micro-Electro-mechanical Transmission) design for practical realization of variable combined step scanning on an experimental prototype of our previously known laser scanner.
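
The combined-step idea (coarse sweep first, fine re-scan only of sectors where an obstacle response appears) can be sketched as follows. The scene, point-detector model and step sizes below are illustrative assumptions, not the authors' TVS implementation:

```python
# Hypothetical point detector: fires when the beam is within `tol` degrees
# of an obstacle bearing (a stand-in for the real range measurement).
def detect(angle_deg, obstacles, tol):
    return any(abs(angle_deg - ob) < tol for ob in obstacles)

def combined_scan(obstacles, coarse=10.0, fine=0.6):
    hits, measurements = [], 0
    a = 0.0
    while a < 180.0:                         # coarse sweep of the half-plane
        measurements += 1
        if detect(a, obstacles, tol=coarse / 2):
            b = a - coarse / 2               # fine re-scan of this sector only
            while b < a + coarse / 2:
                measurements += 1
                if detect(b, obstacles, tol=fine):
                    hits.append(round(b, 1))
                b += fine
        a += coarse
    return sorted(set(hits)), measurements

obstacles = [37.0, 122.5]                    # synthetic obstacle bearings
hits, n_meas = combined_scan(obstacles)
```

A uniform fine scan of the same half-plane would need 180/0.6 = 300 measurements; the combined scheme localizes both obstacles with far fewer, which is the speed-up the abstract targets.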

  9. Optimal discrimination and classification of neuronal action potential waveforms from multiunit, multichannel recordings using software-based linear filters.

    PubMed

    Gozani, S N; Miller, J P

    1994-04-01

    We describe advanced protocols for the discrimination and classification of neuronal spike waveforms within multichannel electrophysiological recordings. The programs are capable of detecting and classifying the spikes from multiple, simultaneously active neurons, even in situations where there is a high degree of spike waveform superposition on the recording channels. The protocols are based on the derivation of an optimal linear filter for each individual neuron. Each filter is tuned to selectively respond to the spike waveform generated by the corresponding neuron, and to attenuate noise and the spike waveforms from all other neurons. The protocol is essentially an extension of earlier work [1], [13], [18]. However, the protocols extend the power and utility of the original implementations in two significant respects. First, a general single-pass automatic template estimation algorithm was derived and implemented. Second, the filters were implemented within a software environment providing a greatly enhanced functional organization and user interface. The utility of the analysis approach was demonstrated on samples of multiunit electrophysiological recordings from the cricket abdominal nerve cord.
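
In the simplest noise model, the per-neuron optimal linear filter reduces to correlation against that unit's spike template. A minimal sketch (the template, deterministic "noise" and peak-picking below are illustrative, not the authors' derivation, which also whitens against the other units' waveforms):

```python
# Slide a normalized template correlator over the signal; the filter output
# peaks where the matching spike occurs.
def matched_filter_output(signal, template):
    L = len(template)
    norm = sum(t * t for t in template)      # template energy
    return [sum(template[k] * signal[n + k] for k in range(L)) / norm
            for n in range(len(signal) - L + 1)]

template = [0.0, 1.0, 3.0, 1.0, -1.0, 0.0]   # illustrative spike shape
noise = [0.05 * ((i * 37) % 11 - 5) for i in range(60)]   # deterministic "noise"
signal = noise[:]
for k, t in enumerate(template):
    signal[20 + k] += t                      # embed one spike at sample 20

out = matched_filter_output(signal, template)
detected = max(range(len(out)), key=lambda n: out[n])
```

Classifying multiple simultaneously active units, as in the paper, uses one such filter per neuron, each tuned to respond to its own template and attenuate the others.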

  10. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
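
A 1-D analogue conveys the edge-preserving weighting that the report accelerates in 3-D on the GPU: each output sample is a normalized sum of neighbors weighted by both spatial distance and intensity difference. The sigma values and data below are illustrative:

```python
import math

# Simplified 1-D bilateral filter: spatial Gaussian times range (intensity)
# Gaussian, normalized per sample.
def bilateral_1d(x, radius=2, sigma_s=1.0, sigma_r=0.5):
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

step = [0.0] * 8 + [10.0] * 8      # flat regions separated by a sharp edge
smoothed = bilateral_1d(step)
```

Because the range weight collapses across the large intensity jump, the edge survives filtering while each flat region would still be smoothed, exactly the behavior that distinguishes the bilateral filter from a plain Gaussian blur.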

  11. Design of FIR digital filters for pulse shaping and channel equalization using time-domain optimization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Vaughn, G. L.

    1974-01-01

    Three algorithms are developed for designing finite impulse response digital filters to be used for pulse shaping and channel equalization. The first is the Minimax algorithm which uses linear programming to design a frequency-sampling filter with a pulse shape that approximates the specification in a minimax sense. Design examples are included which accurately approximate a specified impulse response with a maximum error of 0.03 using only six resonators. The second algorithm is an extension of the Minimax algorithm to design preset equalizers for channels with known impulse responses. Both transversal and frequency-sampling equalizer structures are designed to produce a minimax approximation of a specified channel output waveform. Examples of these designs are compared as to the accuracy of the approximation, the resultant intersymbol interference (ISI), and the required transmitted energy. While the transversal designs are slightly more accurate, the frequency-sampling designs using six resonators have smaller ISI and energy values.

  12. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
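
For readers unfamiliar with the underlying estimator, a minimal scalar Kalman filter is sketched below. This illustrates the predict/update cycle itself, not the paper's tuner selection routine; the process and measurement noise values are assumptions:

```python
# Scalar Kalman filter estimating a constant from noisy measurements.
def kalman_constant(measurements, q=1e-5, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict (random-walk process model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation z - x
        p = (1 - k) * p
        estimates.append(x)
    return estimates

true_value = 3.0
# deterministic zero-mean "noise" so the example is reproducible
zs = [true_value + 0.4 * ((i * 31) % 7 - 3) for i in range(50)]
est = kalman_constant(zs)
```

The tuner-selection problem in the paper amounts to choosing which (combinations of) unknown parameters play the role of the state so that this same machinery remains well-posed when sensors are fewer than unknowns.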

  13. Adaptive-filter/feature-orthogonalization processing string for optimal LLRT mine classification in side-scan sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Libera, Peter; Fernandez, Manuel F.; Dobeck, Gerald J.

    1996-05-01

    An automatic, robust, adaptive clutter suppression, mine detection and classification processing string has been developed and applied to side-scan sonar imagery data. The overall processing string includes data pre-processing, adaptive clutter filtering (ACF), 2D normalization, detection, feature extraction, and classification processing blocks. The data pre-processing block contains automatic gain control and data decimation processing. The ACF technique designs a 2D adaptive range-crossrange linear FIR filter which is optimal in the Least Squares sense, simultaneously suppressing the background clutter while preserving an average peak target signature (normalized shape) computed a priori using training set data. A multiple reference ACF algorithm version was utilized to account for multiple target shapes (due to different mine types, multiple target aspect angles, etc.). The detection block consists of thresholding, clustering of exceedances and limiting their number, and a secondary thresholding process. Following feature extraction, the classification block applies a novel transformation to the data, which orthogonalizes the features and enables an efficient application of the optimal log-likelihood-ratio-test (LLRT) classification rule. The utility of the overall processing string was demonstrated with two side-scan sonar data sets. The ACF/feature orthogonalization based LLRT mine classification processing string provided average probability of correct mine classification and false alarm rate performance similar to that obtained when utilizing an expert sonar operator.

  14. Pareto optimality between width of central lobe and peak sidelobe intensity in the far-field pattern of lossless phase-only filters for enhancement of transverse resolution.

    PubMed

    Mukhopadhyay, Somparna; Hazra, Lakshminarayan

    2015-11-01

    Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique.
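
Particle swarm optimization itself can be sketched on a toy objective; the paper applies it to pupil-filter synthesis with multiple conflicting objectives, which is far more involved, and the hyperparameters below are generic defaults rather than the authors' settings:

```python
import random

random.seed(1)

# Minimal PSO: each particle tracks its personal best; the swarm shares a
# global best; velocities blend inertia with pulls toward both.
def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda p: sum(x * x for x in p)     # toy single-objective test function
best = pso(sphere)
```

For the Pareto-front setting in the abstract, the scalar objective is replaced by dominance-based bookkeeping over an archive of non-dominated filter designs, but the velocity/position update is the same.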

  15. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
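
The second stage, simulated annealing refinement of a starting solution, can be sketched on a toy objective; the cooling schedule, step size and quadratic objective below are illustrative assumptions, not the dispatching model:

```python
import math
import random

random.seed(2)

# Generic simulated annealing: accept worse candidates with probability
# exp(-delta / t), cooling t geometrically so the search settles down.
def simulated_annealing(f, x0, t0=1.0, cooling=0.995, iters=2000):
    x, fx = x0[:], f(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.gauss(0, 0.1) for xi in x]   # local perturbation
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling
    return best

# Toy stand-in objective; the paper's second stage would instead score a
# candidate dispatch against the interval-programming model.
f = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
best = simulated_annealing(f, [0.0, 0.0])
```

The two-step structure in the abstract corresponds to seeding `x0` with a preliminary solution from the linear-programming stage rather than an arbitrary point.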

  16. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    PubMed

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  17. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched filter theory based schemes are described and illustrated for obtaining maximized and time correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple 1-D search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  18. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.

    1991-01-01

    This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  19. Rod-filter-field optimization of the J-PARC RF-driven H{sup −} ion source

    SciTech Connect

    Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.

    2015-04-08

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H{sup −} ion beam of 60mA within normalized emittances of 1.5πmm•mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500μs×25Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H{sup −} ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and is one of the parameters that most strongly governs beam performance in an RF-driven H{sup −} ion source with an internal antenna, no procedure to optimize it has been established. In order to optimize the RFF and establish such a procedure, the beam performances of the J-PARC source with various types of rod-filter magnets (RFMs) were measured. By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H{sup −} ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the plasma electrode (PE) temperature (T{sub PE}) lower than 70°C, which minimizes the transverse emittances. The 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for T{sub PE} around 60°C.

  20. Optimizing multi-step B-side charge separation in photosynthetic reaction centers from Rhodobacter capsulatus.

    PubMed

    Faries, Kaitlyn M; Kressel, Lucas L; Dylla, Nicholas P; Wander, Marc J; Hanson, Deborah K; Holten, Dewey; Laible, Philip D; Kirmaier, Christine

    2016-02-01

    Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P(+)QB(-) yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P(+)QB(-) are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P(+)QB(-) from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P(+)HB(-)→P(+)QB(-) electron transfer or initial P*→P(+)HB(-) conversion highlight unmet challenges of optimizing both processes simultaneously. PMID:26658355

  1. Optimization of conditions for the single step IMAC purification of miraculin from Synsepalum dulcificum.

    PubMed

    He, Zuxing; Tan, Joo Shun; Lai, Oi Ming; Ariff, Arbakariya B

    2015-08-15

    In this study, the methods for extraction and purification of miraculin from Synsepalum dulcificum were investigated. For extraction, the effect of different extraction buffers (phosphate buffer saline, Tris-HCl and NaCl) on the extraction efficiency of total protein was evaluated. Immobilized metal ion affinity chromatography (IMAC) with nickel-NTA was used for the purification of the extracted protein, where the influence of binding buffer pH, crude extract pH and imidazole concentration in elution buffer upon the purification performance was explored. The total amount of protein extracted from miracle fruit was found to be 4 times higher using 0.5M NaCl as compared to Tris-HCl and phosphate buffer saline. On the other hand, the use of Tris-HCl as binding buffer gave higher purification performance than sodium phosphate and citrate-phosphate buffers in IMAC system. The optimum purification condition of miraculin using IMAC was achieved with crude extract at pH 7, Tris-HCl binding buffer at pH 7 and the use of 300 mM imidazole as elution buffer, which gave the overall yield of 80.3% and purity of 97.5%. IMAC with nickel-NTA was successfully used as a single step process for the purification of miraculin from crude extract of S. dulcificum. PMID:25794715

  2. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Beal, R. C.; Tilley, D. G.

    1981-01-01

    The impulse response of the SAR system is not a delta function, so the spectra represent the product of the underlying image spectrum with the transform of the impulse response, which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction, yielding the transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial, which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. The effects of speckle are corrected by subtracting a bias from the spectrum.
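
    The correction pipeline described above (smooth, sample the noise trend, fit a low-order polynomial, then subtract a bias and divide out the trend) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the synthetic spectrum, filter size, bias estimate, and function names are all assumptions.

```python
import numpy as np

def box_smooth(spec, k=5):
    """Smooth a 2D spectrum with a k x k box convolution filter."""
    kernel = np.ones((k, k)) / (k * k)
    pad = k // 2
    padded = np.pad(spec, pad, mode="edge")
    out = np.empty_like(spec, dtype=float)
    for i in range(spec.shape[0]):
        for j in range(spec.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

# Hypothetical spectrum: a smooth impulse-response trend plus speckle-like noise
rng = np.random.default_rng(0)
n = 64
f = np.linspace(-1, 1, n)
trend = np.exp(-2 * np.abs(f))[None, :] * np.exp(-2 * np.abs(f))[:, None]
spec = trend + 0.05 * rng.random((n, n))

smoothed = box_smooth(spec)
# Sample the smoothed trend along a cut normal to the (assumed) wave direction
cut = smoothed[n // 2, :]
poly = np.polynomial.Polynomial.fit(f, cut, deg=4)  # fourth-order trend fit
corrected = spec - 0.05 / 2                          # subtract speckle bias (known mean here)
flattened = corrected / poly(f)[None, :]             # divide out impulse-response trend
```

    In the real procedure the bias and trend come from the data rather than being known in advance; here the noise mean is known by construction.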

  3. An optimal algorithm based on extended kalman filter and the data fusion for infrared touch overlay

    NASA Astrophysics Data System (ADS)

    Zhou, AiGuo; Cheng, ShuYi; Pan, Qiang Biao; Sun, Dong Yu

    2016-01-01

    Current infrared touch overlays have problems with touch point recognition that introduce burrs into the touch trajectory. This paper uses a target tracking algorithm to improve the recognition accuracy and trajectory smoothness of an infrared touch overlay. To deal with the nonlinear state estimation problem in touch point tracking, we use the extended Kalman filter in the target tracking algorithm. We also use a data fusion algorithm to match the estimated values to the original target trajectory. Experimental results on the infrared touch overlay demonstrate that the proposed target tracking approach improves touch point recognition and achieves a much smoother tracking trajectory than the existing tracking approach.
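
    As a rough illustration of the tracking idea (not the paper's implementation), the sketch below smooths a jittery 2D touch trajectory with a constant-velocity Kalman filter. Because this motion model is linear, the plain Kalman update suffices; the EKF used in the paper would additionally linearize a nonlinear model at each step. The report rate and noise covariances are assumed values.

```python
import numpy as np

# State x = [px, py, vx, vy]; measurements are the raw (px, py) touch reports.
dt = 0.01                                   # assumed 100 Hz report rate
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only observation
Q = 1e-4 * np.eye(4)                        # process noise (assumed)
R = 4.0 * np.eye(2)                         # measurement noise (assumed, px^2)

def kalman_track(zs):
    """Filter a sequence of noisy 2D positions; return smoothed positions."""
    x = np.array([zs[0][0], zs[0][1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new touch report
        y = np.asarray(z) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

# Synthetic straight swipe corrupted by "burrs" (jitter)
rng = np.random.default_rng(1)
truth = np.stack([np.linspace(0, 100, 200), np.linspace(0, 50, 200)], axis=1)
noisy = truth + rng.normal(0, 2.0, truth.shape)
smooth = kalman_track(noisy)
```

    After the filter converges, the smoothed trajectory tracks the underlying swipe much more closely than the raw reports.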

  4. Permeability optimization and performance evaluation of hot aerosol filters made using foam incorporated alumina suspension.

    PubMed

    Innocentini, Murilo D M; Rodrigues, Vanessa P; Romano, Roberto C O; Pileggi, Rafael G; Silva, Gracinda M C; Coury, José R

    2009-02-15

    Porous ceramic samples were prepared from an aqueous foam incorporated alumina suspension for application as hot aerosol filtering membranes. The procedure for establishing the membrane features required to maintain a desired flow condition was described theoretically, and experimental work was designed to prepare ceramic membranes meeting the predicted criteria. The two best membranes thus prepared were selected for permeability tests up to 700 degrees C, and their total and fractional collection efficiencies were experimentally evaluated. Reasonably good performance was achieved at room temperature, while at 700 degrees C increased permeability was obtained with a significant reduction in collection efficiency, explained by a combination of thermal expansion of the structure and changes in the gas properties.

  5. Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter

    NASA Astrophysics Data System (ADS)

    Ouerhani, Y.; Jridi, M.; Alfalou, A.; Brosseau, C.

    2013-02-01

    The key outcome of this work is to propose and validate a fast and robust correlation scheme for face recognition applications. The robustness of this fast correlator is ensured by an adapted pre-processing step for the target image, allowing us to minimize the impact of its (possibly noisy and varying) amplitude spectrum information. A segmented composite filter is optimized, at the very outset of its fabrication, by weighting each reference with a specific coefficient proportional to its occurrence probability. A hierarchical classification procedure (called a two-level decision tree learning approach) is also used in order to speed up the recognition procedure. Experimental results validating our approach are obtained with a prototype based on a GPU implementation of the all-numerical correlator using the NVIDIA GeForce 8400GS processor and test samples from the Pointing Head Pose Image Database (PHPID): true recognition rates larger than 85% with a run time below 120 ms were obtained using fixed images from the PHPID, and true recognition rates larger than 77% were obtained using a real video sequence at 2 frames per second with a database of 100 persons. Moreover, it was shown experimentally that a more recent GPU such as the NVIDIA Quadro FX 770M can perform recognition at 4 frames per second on a database of the same size.

  6. Bounds on the performance of particle filters

    NASA Astrophysics Data System (ADS)

    Snyder, C.; Bengtsson, T.

    2014-12-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time tk depend only on the observations at tk and the particles and weights at tk-1), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at tk, but also over draws from the joint proposal for tk-1 and tk. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction will require very large numbers of particles.
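
    The degeneracy argument can be illustrated numerically: with even a mild mismatch between proposal and target, the variance of the log importance weights grows linearly with dimension, and the effective sample size of the weighted ensemble collapses. The Gaussian densities, mismatch factor, and particle count below are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

def effective_sample_size(d, n=1000, seed=0):
    """Draw n particles from a slightly mismatched proposal N(0, sigma^2 I_d)
    for a target N(0, I_d) and return the effective sample size
    (sum w)^2 / sum w^2 of the normalized importance weights."""
    rng = np.random.default_rng(seed)
    sigma = 1.2                              # proposal std (assumed mismatch)
    x = rng.normal(0, sigma, size=(n, d))
    # log target minus log proposal (up to an additive constant)
    logw = (-0.5 * np.sum(x**2, axis=1)
            + 0.5 * np.sum(x**2, axis=1) / sigma**2
            + d * np.log(sigma))
    logw -= logw.max()                       # stabilize before exponentiating
    w = np.exp(logw)
    w /= w.sum()
    return 1.0 / np.sum(w**2)

# ESS shrinks rapidly as the state dimension grows
ess = {d: effective_sample_size(d) for d in (1, 10, 100)}
```

    With a fixed particle count, the effective sample size drops from nearly the full ensemble at d = 1 to a small fraction of it at d = 100, mirroring the exponential particle requirement described above.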

  7. Shuttle filter study. Volume 1: Characterization and optimization of filtration devices

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plan for space shuttle fluid systems.

  8. Designing spectrum-splitting dichroic filters to optimize current-matched photovoltaics.

    PubMed

    Miles, Alexander; Cocilovo, Byron; Wheelwright, Brian; Pan, Wei; Tweet, Doug; Norwood, Robert A

    2016-03-10

    We have developed an approach for designing a dichroic coating to optimize performance of current-matched multijunction photovoltaic cells while diverting unused light. By matching the spectral responses of the photovoltaic cells and current matching them, substantial improvement to system efficiencies is shown to be possible. A design for use in a concentrating hybrid solar collector was produced by this approach, and is presented. Materials selection, design methodology, and tilt behavior on a curved substrate are discussed. PMID:26974772

  10. Influence of simulation time-step (temporal-scale) on optimal parameter estimation and runoff prediction performance in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2015-04-01

    Nowadays, most hydrological catchment models are designed to allow their use for streamflow simulation at different time-scales. While this permits models to be applied for broader purposes, it can also be a source of error in the simulation of hydrological processes at the catchment scale. Such errors seem not to affect simple conceptual models significantly, but this flexibility may lead to large behavioral errors in physically based models. Equations used for processes such as the time-variation of soil moisture are usually representative at certain time-scales but may not properly characterize water transfer in soil layers at larger scales. This effect is especially relevant as we move from a detailed hourly scale to a daily time-step, both common time scales for catchment streamflow simulation in research and management practice. This study aims to provide an objective methodology to identify the degree of similarity of optimal parameter values when hydrological catchment model calibration is carried out at different time-scales, thus providing information for an informed discussion of the physical significance of model parameters. In this research, we analyze the influence of simulation time scale on: 1) the optimal values of six highly sensitive parameters of the TOPLATS model and 2) the streamflow simulation efficiency, while optimization is carried out at different time scales. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) has been applied in its lumped version on three catchments of varying size located in northern Spain. The model is based on shallow groundwater gradients (related to local topography) that set up spatial patterns of soil moisture and are assumed to control infiltration and runoff during storm events and evaporation and drainage between storm events. The model calculates the saturated portion of the catchment at each time step based on Topographical Index (TI) intervals. Surface

  11. Drying process optimization for an API solvate using heat transfer model of an agitated filter dryer.

    PubMed

    Nere, Nandkishor K; Allen, Kimberley C; Marek, James C; Bordawekar, Shailendra V

    2012-10-01

    Drying an early stage active pharmaceutical ingredient candidate required excessively long cycle times in a pilot plant agitated filter dryer. The key to faster drying is to ensure sufficient heat transfer and minimize mass transfer limitations. Designing the right mixing protocol is of utmost importance to achieve efficient heat transfer. To this end, a composite model was developed for the removal of bound solvent that incorporates models for heat transfer and desolvation kinetics. The proposed heat transfer model differs from previously reported models in two respects: it accounts for the effects of a gas gap between the vessel wall and solids on the overall heat transfer coefficient, and of headspace pressure on the mean free path length of the inert gas and thereby on the heat transfer between the vessel wall and the first layer of solids. A computational methodology was developed incorporating the effects of mixing and headspace pressure to simulate the drying profile using a modified model framework within the Dynochem software. A dryer operational protocol was designed based on the desolvation kinetics, thermal stability studies of wet and dry cake, and the understanding gained through model simulations, resulting in a multifold reduction in drying time.

  12. Chaos particle swarm optimization combined with circular median filtering for geophysical parameters retrieval from Windsat

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Zhenzhan; Shi, Hanqing; Long, Zhiyong; Du, Huadong

    2016-08-01

    This paper established a geophysical retrieval algorithm for sea surface wind vector, sea surface temperature, columnar atmospheric water vapor, and columnar cloud liquid water from WindSat, using the measured brightness temperatures and a matchup database. To retrieve the wind vector, a chaotic particle swarm approach was used to determine a set of possible wind vector solutions which minimize the difference between the forward model and the WindSat observations. An adjusted circular median filtering function was adopted to remove wind direction ambiguity. The validation of the wind speed, wind direction, sea surface temperature, columnar atmospheric water vapor, and columnar liquid cloud water indicates that this algorithm is feasible and reasonable and can be used to retrieve these atmospheric and oceanic parameters. Compared with moored buoy data, the RMS errors for wind speed and sea surface temperature were 0.92 m s-1 and 0.88°C, respectively. The RMS errors for columnar atmospheric water vapor and columnar liquid cloud water were 0.62 mm and 0.01 mm, respectively, compared with F17 SSMIS results. In addition, monthly average results indicated that these parameters are in good agreement with AMSR-E results. Wind direction retrieval was studied under various wind speed conditions and validated by comparing to the QuikSCAT measurements, and the RMS error was 13.3°. This paper offers a new approach to the study of ocean wind vector retrieval using a polarimetric microwave radiometer.
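
    A minimal sketch of the ambiguity-removal idea (not the paper's adjusted circular median filtering function) replaces each retrieved direction with the neighborhood angle that minimizes total circular distance, which suppresses 180°-ambiguous outliers. The 1D window, window size, and test directions below are assumptions for illustration.

```python
import math

def circ_dist(a, b):
    """Smallest absolute angular separation between two directions, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def circular_median(angles):
    """Angle in the sample minimizing total circular distance to the others."""
    return min(angles, key=lambda a: sum(circ_dist(a, b) for b in angles))

def circular_median_filter(field, k=3):
    """Replace each direction with the circular median of its k-wide neighborhood
    (1D sketch; the WindSat scheme operates on 2D swaths with extra weighting)."""
    half = k // 2
    out = []
    for i in range(len(field)):
        lo, hi = max(0, i - half), min(len(field), i + half + 1)
        out.append(circular_median(field[lo:hi]))
    return out

# Wind directions near 350 deg with one 180-degree-ambiguous outlier (170 deg)
dirs = [348.0, 352.0, 170.0, 351.0, 349.0]
fixed = circular_median_filter(dirs)
```

    The ambiguous 170° sample is replaced by a neighbor near the true 350° direction, while the consistent samples pass through essentially unchanged.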

  13. Metrics for comparing plasma mass filters

    SciTech Connect

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  14. Metrics For Comparing Plasma Mass Filters

    SciTech Connect

    Abraham J. Fetterman and Nathaniel J. Fisch

    2012-08-15

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  15. Metrics for comparing plasma mass filters

    NASA Astrophysics Data System (ADS)

    Fetterman, Abraham J.; Fisch, Nathaniel J.

    2011-10-01

    High-throughput mass separation of nuclear waste may be useful for optimal storage, disposal, or environmental remediation. The most dangerous part of nuclear waste is the fission product, which produces most of the heat and medium-term radiation. Plasmas are well-suited to separating nuclear waste because they can separate many different species in a single step. A number of plasma devices have been designed for such mass separation, but there has been no standardized comparison between these devices. We define a standard metric, the separative power per unit volume, and derive it for three different plasma mass filters: the plasma centrifuge, Ohkawa filter, and the magnetic centrifugal mass filter.

  16. Optimized multiple-quantum filter for robust selective excitation of metabolite signals

    NASA Astrophysics Data System (ADS)

    Holbach, Mirjam; Lambert, Jörg; Suter, Dieter

    2014-06-01

    The selective excitation of metabolite signals in vivo requires the use of specially adapted pulse techniques, in particular when the signals are weak and the resonances overlap with those of unwanted molecules. Several pulse sequences have been proposed for this spectral editing task. However, their performance is strongly degraded by unavoidable experimental imperfections. Here, we show that optimal control theory can be used to generate pulses and sequences that perform almost ideally over a range of rf field strengths and frequency offsets that can be chosen according to the specifics of the spectrometer or scanner being used. We demonstrate this scheme by applying it to lactate editing. In addition to the robust excitation, we also have designed the pulses to minimize the signal of unwanted molecular species.

  17. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  18. Retrieval of small-relief marsh morphology from Terrestrial Laser Scanner, optimal spatial filtering, and laser return intensity

    NASA Astrophysics Data System (ADS)

    Guarnieri, A.; Vettore, A.; Pirotti, F.; Menenti, M.; Marani, M.

    2009-12-01

    Marshes are ubiquitous landforms in estuaries and lagoons, where important hydrological, morphological and ecological processes take place. These areas attenuate sea action on the coast and act as sediment trapping zones. Due to their ecosystem functions and effects on coastal stabilization, marshes are crucial structures in tidal environments, both biologically and geomorphologically, and are fundamental elements in wetland restoration and coastal realignment schemes. The spatially-distributed study of the geomorphology of intertidal areas using remotely-sensed digital terrain models remains problematic, owing to their small relief, often of the order of a few tens of centimetres, and to the presence of short and dense vegetation, which strongly reduces the number of resolvable ground returns. Here, we use high-resolution Terrestrial Laser Scanning (~200 returns/m2) to retrieve a high-resolution and high-accuracy Digital Terrain Model within a tidal marsh in the Venice lagoon. To this aim we apply a new filtering scheme to Terrestrial Laser Scanner data which selects the lowest values within moving windows, whose optimal size is determined with the aid of a limited number of ancillary Differential GPS data in order to maximize resolution while ensuring the identification of true ground returns. The accuracy of the filtered data is further refined using classifications of the intensity of the returns to extract additional information on the surface (ground or canopy) originating the returning laser beam. Validations against about 200 reference Differential GPS ground elevation observations indicate that the best separation of canopy and ground signals is obtained using a low-pass filter with a window size of the order of 1 m and the maximum likelihood classifier to further refine the detection of ground returns. In this case the average estimation error is about 1 cm (a slight overestimation of ground elevation), while its standard deviation is about 3 cm. Our
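
    The lowest-return windowing step can be sketched as below: a toy version grids the point cloud and keeps the minimum elevation per roughly 1 m cell, standing in for the paper's moving-window filter whose size is calibrated against DGPS data. The synthetic marsh geometry, canopy heights, and return statistics are invented for illustration.

```python
import math
import random

def lowest_point_filter(points, window=1.0):
    """Grid the (x, y, z) returns into square cells of the given window size
    and keep only the lowest return in each cell as a candidate ground point."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / window), math.floor(y / window))
        if key not in cells or z < cells[key][2]:
            cells[key] = (x, y, z)
    return list(cells.values())

# Synthetic marsh patch: flat ground at z = 0.10 m under 0.3-0.5 m canopy returns
random.seed(42)
pts = []
for _ in range(2000):
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    if random.random() < 0.2:        # ~20% of returns reach the ground
        z = 0.10 + random.gauss(0, 0.01)
    else:                            # the rest hit vegetation above it
        z = random.uniform(0.3, 0.5)
    pts.append((x, y, z))

ground = lowest_point_filter(pts, window=1.0)
mean_z = sum(p[2] for p in ground) / len(ground)
```

    Even with only a fifth of returns reaching the ground, taking the per-cell minimum recovers a surface near the true 0.10 m elevation; the real scheme then refines this with return-intensity classification.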

  19. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  20. An optimal modeling of multidimensional wave digital filtering network for free vibration analysis of symmetrically laminated composite FSDT plates

    NASA Astrophysics Data System (ADS)

    Tseng, Chien-Hsun

    2015-02-01

    The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling-wave formulation of lumped electrical elements, is successfully applied to the study of the dynamic responses of symmetrically laminated composite plates based on first-order shear deformation theory. The approach, applied for the first time to this laminate mechanics problem, integrates principles from modeling and simulation, circuit theory, and MD digital signal processing to provide a great variety of outstanding features. In particular, enforcing the conservation of passivity gives rise to a nonlinear programming problem (NLP) governing the numerical stability of the MD discrete system. By adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly exploring the solution spaces of NLP models, numerical stability of the MDWDF network is ensured at all times through satisfaction of the Courant-Friedrichs-Lewy stability criterion with the least restriction. In particular, the optimum of the NLP leads to the optimality of the network in effectively and accurately predicting the desired fundamental frequency, and thus gives insight into the robustness of the network through the distribution of system energies. To further explore the application of the optimum network, additional numerical examples are presented to achieve a qualitative understanding of the behavior of the laminar system. These investigate various effects of stacking sequence, stiffness and span-to-thickness ratios, mode shapes, and boundary conditions. Results are scrupulously validated by cross-referencing with earlier published works, which shows that the present method is in excellent agreement with other numerical and analytical methods.

  1. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piecewise-linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over the simple acceleration schedule currently used in traditional engine control architectures.

  2. Technical note: Optimization for improved tube-loading efficiency in the dual-energy computed tomography coupled with balanced filter method

    SciTech Connect

    Saito, Masatoshi

    2010-08-15

    Purpose: This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating to the acquisition of electron density information, which is essential for treatment planning in radiotherapy. Methods: For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. Results: The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, ''Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method,'' Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. Conclusions: The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.

  3. Optimization of two-step bioleaching of spent petroleum refinery catalyst by Acidithiobacillus thiooxidans using response surface methodology.

    PubMed

    Srichandan, Haragobinda; Pathak, Ashish; Kim, Dong Jin; Lee, Seoung-Won

    2014-01-01

    A central composite design (CCD) combined with response surface methodology (RSM) was employed for maximizing bioleaching yields of metals (Al, Mo, Ni, and V) from as-received spent refinery catalyst using Acidithiobacillus thiooxidans. Three independent variables, namely initial pH, sulfur concentration, and pulp density, were investigated. The pH was found to be the most influential parameter, with leaching yields of metals varying inversely with pH. Analysis of variance (ANOVA) of the quadratic model indicated that the predicted values were in good agreement with experimental data. Under optimized conditions of 1.0% pulp density, 1.5% sulfur and pH 1.5, about 93% Ni, 44% Al, 34% Mo, and 94% V was leached from the spent refinery catalyst. Among all the metals, V had the highest maximum rate of leaching (Vmax) according to the Michaelis-Menten equation. The results of the study suggested that two-step bioleaching is efficient for leaching metals from spent refinery catalyst. Moreover, the process can be conducted with as-received spent refinery catalyst, making it cost effective for large-scale applications.
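
    As an illustration of the general RSM machinery (a sketch with invented numbers, not the study's data or design), a second-order model can be fit to designed runs by least squares, and the optimum read off as the stationary point of the fitted quadratic:

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    """Solve grad = 0 for the fitted quadratic model."""
    _, b1, b2, b11, b22, b12 = coef
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])  # Hessian of the model
    return np.linalg.solve(H, -np.array([b1, b2]))

# Hypothetical design runs in coded units, with a response peaked at (0.5, -0.2)
X = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
y = 90 - 4 * (X[:, 0] - 0.5)**2 - 6 * (X[:, 1] + 0.2)**2

coef = fit_quadratic(X, y)
opt = stationary_point(coef)   # coded-unit optimum of the fitted surface
```

    In a real CCD the design would include axial and center points and the fit would be checked by ANOVA before trusting the stationary point; here the response is exactly quadratic by construction, so the fit recovers the optimum.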

  4. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.

  5. Predicting the evolution of the extensional step-over in the San Pablo bay area with work optimization

    NASA Astrophysics Data System (ADS)

    McBeck, J.; Cooke, M. L.; Madden, E. H.

    2015-12-01

    Field data and numerical modeling indicate that the releasing stepover in the San Pablo Bay area, between the Hayward and Rodgers Creek faults, presently seems to lack a strike-slip transfer fault. Analysis of gravity data suggests that only one high-angle normal fault may exist within the step, near the northern tip of the Hayward fault. To investigate a possible evolution of this fault system, we simulate this stepover with the numerical modeling tool Growth by Optimization of Work (GROW). GROW predicts the evolution of a fracture network by analyzing the gain in efficiency, or change in external work, produced by fracture propagation and interaction. We load the San Pablo Bay stepover models with dextral velocity and normal compression that reflects a range of seismogenic depths. The GROW analysis with overlapping starting fault segments separated by 5 km predicts that the Hayward and Rodgers Creek faults propagate toward one another following a gently curved path. The curved path of the fault segment representing the Hayward fault disagrees with the observed planar fault trace, which suggests that this fault may precede the southern propagation of the Rodgers Creek fault. We explore various starting configurations that represent the potential geometry at the onset of interaction between the faults, such as different lengths of the two branches of the southern Rodgers Creek fault. Throughout the development of this stepover, we analyze the evolution of external work and the change in external work (ΔWext) due to fault growth, interaction and linkage. Additionally, we use the distribution of ΔWext at each increment of fault growth to produce probability density functions (PDFs). These PDFs describe fault propagation path forecasts that are defined by 90% confidence envelopes. The propagation forecasts facilitate analysis of the impact of anisotropy and heterogeneity on propagation path.

  6. Reducing radiation dose by application of optimized low-energy x-ray filters to K-edge imaging with a photon counting detector.

    PubMed

    Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung

    2016-01-21

    K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in the K-edge imaging and are likely to be completely absorbed by an object. In this study, we applied x-ray filters to the K-edge imaging with a PCXD based on cadmium zinc telluride for reducing radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented the iodine K-edge imaging with an energy bin of 34-48 keV at the tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased with increasing filter thickness. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in the K-edge imaging with the PCXD.
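    The selection rule described here — pick the filter thickness that maximizes a dose-normalized figure of merit — can be sketched in a few lines. FOM = SDNR²/ESAK is a common definition, and the thickness/SDNR/ESAK values below are illustrative stand-ins, not the paper's measurements:

```python
import numpy as np

# Choose the Al filter thickness maximizing FOM = SDNR^2 / ESAK.
# All numbers here are hypothetical, for illustration only.
thickness_mm = np.array([2, 4, 6, 8, 10, 12])
sdnr = np.array([9.0, 8.6, 8.1, 7.5, 6.8, 6.0])            # falls with filtration
esak_mGy = np.array([1.00, 0.78, 0.62, 0.50, 0.42, 0.34])  # dose falls faster

fom = sdnr ** 2 / esak_mGy          # dose-normalized image quality
best_mm = int(thickness_mm[fom.argmax()])
```

The interior maximum arises because added filtration reduces dose faster than it erodes SDNR, up to a point.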

  7. Reducing radiation dose by application of optimized low-energy x-ray filters to K-edge imaging with a photon counting detector

    NASA Astrophysics Data System (ADS)

    Choi, Yu-Na; Lee, Seungwan; Kim, Hee-Joung

    2016-01-01

    K-edge imaging with photon counting x-ray detectors (PCXDs) can improve image quality compared with conventional energy integrating detectors. However, low-energy x-ray photons below the K-edge absorption energy of a target material do not contribute to image formation in the K-edge imaging and are likely to be completely absorbed by an object. In this study, we applied x-ray filters to the K-edge imaging with a PCXD based on cadmium zinc telluride for reducing radiation dose induced by low-energy x-ray photons. We used aluminum (Al) filters with different thicknesses as the low-energy x-ray filters and implemented the iodine K-edge imaging with an energy bin of 34-48 keV at the tube voltages of 50, 70 and 90 kVp. The effects of the low-energy x-ray filters on the K-edge imaging were investigated with respect to signal-difference-to-noise ratio (SDNR), entrance surface air kerma (ESAK) and figure of merit (FOM). The highest SDNR was observed in the K-edge imaging with a 2 mm Al filter, and the SDNR decreased with increasing filter thickness. Compared to the K-edge imaging with a 2 mm Al filter, the ESAK was reduced by 66%, 48% and 39% in the K-edge imaging with a 12 mm Al filter for 50 kVp, 70 kVp and 90 kVp, respectively. The FOM values, which took into account the ESAK and SDNR, were maximized for 8, 6 to 8 and 4 mm Al filters at 50 kVp, 70 kVp and 90 kVp, respectively. We concluded that the use of an optimal low-energy filter thickness, determined by maximizing the FOM, could significantly reduce radiation dose while maintaining image quality in the K-edge imaging with the PCXD.

  8. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
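    The core idea — FIR filter design posed as a linear program — can be sketched with a Chebyshev (minimax) formulation: minimize the peak deviation of a linear-phase filter's amplitude response from an ideal lowpass on a frequency grid. This is a generic textbook LP sketch, not the authors' exact time-domain formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax design of a length-21 linear-phase lowpass FIR filter as an LP:
# minimize t subject to |A(w_i) - D(w_i)| <= t, with amplitude response
# A(w) = a_0 + sum_k a_k cos(k w). Band edges here are assumed values.
M = 10                                       # half-order: 2M+1 taps
grid = np.linspace(0.0, np.pi, 200)
passband = grid <= 0.30 * np.pi
stopband = grid >= 0.45 * np.pi
keep = passband | stopband                   # exclude the transition band
w = grid[keep]
D = passband[keep].astype(float)             # ideal response: 1 then 0

C = np.cos(np.outer(w, np.arange(M + 1)))    # A(w_i) = C[i] @ a

# Variables x = [a_0 .. a_M, t]: C a - t <= D and -C a - t <= -D
ones = np.ones((len(w), 1))
A_ub = np.vstack([np.hstack([C, -ones]), np.hstack([-C, -ones])])
b_ub = np.concatenate([D, -D])
c = np.zeros(M + 2)
c[-1] = 1.0                                  # objective: the peak error t
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (M + 1) + [(0.0, None)])
a, ripple = res.x[:-1], res.x[-1]            # cosine coefficients, peak error
```

The equalizer variant in the paper changes only the constraints: the convolution of the filter with a known channel impulse response, rather than a frequency response, is driven toward a target.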

  9. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors in distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which to the best of our knowledge has not been done before. The performance of the GPU-accelerated vector median filter is compared to that of CPU- and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
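    A plain CPU reference implementation makes the n² x (n² - 1) distance cost mentioned above concrete; this is the kind of baseline one would port to CUDA, not the authors' GPU code:

```python
import numpy as np

def vector_median_filter(img, k=3):
    """Vector median filter: in each k x k window, output the pixel whose
    summed Euclidean distance to the other window pixels is smallest, so
    color vectors are never mixed across channels."""
    r = k // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = pad[y:y + k, x:x + k].reshape(-1, img.shape[2]).astype(float)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[y, x] = win[d.sum(axis=1).argmin()]   # min total distance wins
    return out

# a single color impulse in a flat patch is replaced by a neighbor
noisy = np.full((5, 5, 3), 100, dtype=np.uint8)
noisy[2, 2] = (255, 0, 255)
clean = vector_median_filter(noisy)
```

The per-window pairwise distance matrix is embarrassingly parallel across pixels, which is exactly what makes the GPU port attractive.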

  10. Thickness optimization of drilling fluid filter cakes for cement slurry filtrate control and long-term zonal isolation

    SciTech Connect

    Griffith, J.E.; Osisanya, S.

    1995-12-31

    In this paper, the long-term isolation characteristics of two typical filter-cake systems in a gas or water environment are investigated. The test models were designed to measure the sealing capability of a premium cement and filter-cake system used to prevent hydraulic communication at a permeable-nonpermeable boundary. The test models represented the area of a sandstone/shale layer in an actual well. In a real well, sandstone is a water- or gas-bearing formation, and sealing the annulus at the shale formation would prevent hydraulic communication to an upper productive zone. To simulate these conditions, the test models remained in a gas or water environment at either 80 or 150 F for periods of 3, 4, 30, and 90 days before the hydraulic isolation measurements were conducted. Models without filter cake, consisting of 100% cement, were tested for zonal isolation with the filter-cake models to provide reference points. These results show how critical filter-cake removal is to the long-term sealing of the cemented annulus. Results indicate that complete removal of the filter cake provides the greatest resistance to fluid communication in most of the cases studied.

  11. Optimizing mini-ridge filter thickness to reduce proton treatment times in a spot-scanning synchrotron system

    SciTech Connect

    Courneyea, Lorraine; Beltran, Chris; Tseung, Hok Seum Wan Chan; Yu, Juan; Herman, Michael G.

    2014-06-15

    Purpose: Study the contributors to treatment time as a function of Mini-Ridge Filter (MRF) thickness to determine the optimal choice for breath-hold treatment of lung tumors in a synchrotron-based spot-scanning proton machine. Methods: Five different spot-scanning nozzles were simulated in TOPAS: four with MRFs of varying maximal thicknesses (6.15–24.6 mm) and one with no MRF. The MRFs were designed with ridges aligned along orthogonal directions transverse to the beam, with the number of ridges (4–16) increasing with MRF thickness. The material thickness given by these ridges approximately followed a Gaussian distribution. Using these simulations, Monte Carlo data were generated for treatment planning commissioning. For each nozzle, standard and stereotactic (SR) lung phantom treatment plans were created and assessed for delivery time and plan quality. Results: Use of a MRF resulted in a reduction of the number of energy layers needed in treatment plans, decreasing the number of synchrotron spills needed and hence the treatment time. For standard plans, the treatment time per field without a MRF was 67.0 ± 0.1 s, whereas three of the four MRF plans had treatment times of less than 20 s per field; considered sufficiently low for a single breath-hold. For SR plans, the shortest treatment time achieved was 57.7 ± 1.9 s per field, compared to 95.5 ± 0.5 s without a MRF. There were diminishing gains in time reduction as the MRF thickness increased. Dose uniformity of the PTV was comparable across all plans; however, when the plans were normalized to have the same coverage, dose conformality decreased with MRF thickness, as measured by the lung V20%. Conclusions: Single breath-hold treatment times for plans with standard fractionation can be achieved through the use of a MRF, making this a viable option for motion mitigation in lung tumors. For stereotactic plans, while a MRF can reduce treatment times, multiple breath-holds would still be necessary due to the

  12. The Effects of Negative Differential Resistance, Bipolar Spin-Filtering, and Spin-Rectifying on Step-Like Zigzag Graphene Nanoribbons Heterojunctions with Single or Double Edge-Saturated Hydrogen

    NASA Astrophysics Data System (ADS)

    Wang, Lihua; Zhao, Jianguo; Ding, Bingjun; Guo, Yong

    2016-09-01

    In this study, we investigated the spin-resolved transport properties of step-like zigzag graphene nanoribbons (ZGNRs) with single or double edge-saturated hydrogen using a method combining density functional theory with the nonequilibrium Green's function method under the local spin density approximation. We found that, when the ZGNR-based heterojunctions were in a parallel or antiparallel layout, negative differential resistance, the maximum bipolar spin-filtering, and spin-rectifying effects occurred synchronously, except for the case of spin-down electrons in the parallel magnetic layouts. Interestingly, these spin-resolved transport properties were almost unaffected by altering the widths of the two component ribbons. Therefore, step-like ZGNR heterojunctions are promising for use in designing high-performance multifunctional spintronic devices.

  13. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet scientific space mission requirements. Their payloads are composed of various instruments collecting an increasing amount of data while respecting growing constraints on volume and mass, so small integrated cameras have taken a favored place among these instruments. To ensure scene-specific color information sensing, pixelated filters appear more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component, based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, the matrix filters are composed of 2x2 macro-pixels (i.e., 4 filters). These 4 filters have a bandwidth of about 40 nm and are centered at 550, 700, 770 and 840 nm, respectively, with a specific rejection rate defined over the visible spectral range [500-900 nm]. After an intensive design step, 4 thin-film structures were elaborated with a maximum thickness of 5 μm. A run of tests allowed us to choose the optimal micro-structuring parameters. The 100x100 matrix filter prototypes were successfully manufactured with lift-off and ion-assisted deposition processes. High spatial and spectral characterization, with a dedicated metrology bench, showed that the initial specifications and simulations were globally met. These excellent performances knock down the technological barriers for high-end integrated multispectral imaging.

  14. Multilayer filter design with high K materials

    NASA Astrophysics Data System (ADS)

    Curtis, Nathaniel, II

    A novel approach to filter design is presented. A high-K multilayer coupled-line filter is designed for optimal performance within a dielectric resonator of rectangular cross section. The multilayer filter is shown to have performance comparable to its planar counterpart as well as the Lange coupler, while maintaining the design advantages that come with the multilayer approach to filter design, such as increased flexibility in managing parameter constraints. The performance of the rectangular-cross-section resonator in terms of modal response and resonant frequency has been evaluated through mathematical derivation and simulation. The reader will find the step-by-step process for designing the resonant structure as well as a MATLAB script that graphically displays the effect that changing various parameters has on resonator size, to assist in the design analysis. The resonator has been designed to provide a finite package in terms of space and performance so that it may house the multilayer filter on a printed circuit board for ease of system implementation. The proposed design and analysis will prove useful for all multilayer coupled-line filter types that may take advantage of the uniform environment provided by the finite packaging of the dielectric resonator. As with any microwave system, considerable effort must be put forth to maintain signal integrity throughout the delivery process from the signal input to reception at the output. As a result, a large amount of effort and research has gone into answering the question of how to efficiently feed both a dielectric resonator filter of rectangular cross section and a coupled-line filter embedded within the resonator's confines. Several feeding methods have been explored and reported on; of these, the most feasible design includes a unique microstrip delivery to the embedded multilayer filter. (Please refer to the dissertation for the diagram.)

  15. Optimal low noise phase-only and binary phase-only optical correlation filters for threshold detectors

    NASA Astrophysics Data System (ADS)

    Kallman, Robert R.

    1986-12-01

    Phase-only (PO) and binary phase-only (BPO) versions of recently developed synthetic discriminant filters (SDFs) (Kallman, 1986) are discussed which are potentially useful for threshold optical correlation detectors. A formulation of the performance, or SNR, of a filter against a training set is first presented which takes into account that the POF or BPOF, unlike the SDF, cannot control the actual size of the recognition spike in the output correlation plane when a valid target is centered in the filter input plane. Numerical tests of the present recipes for POFs and BPOFs have been carried out on four SDFs made from tank imagery, and the SNRs of 12 POFs and 24 BPOFs were computed.

  16. Optimization of alternate-strand triple helix formation at the 5'CpG3' and 5'GpC3' junction steps.

    PubMed

    Marchand, C; Sun, J S; Bailly, C; Waring, M J; Garestier, T; Hélène, C

    1998-09-22

    Oligonucleotide-directed triple helix formation normally requires a long tract of oligopyrimidine·oligopurine sequence. This limitation can be partially overcome by alternate-strand triple helix (or switch triple helix) formation which enables recognition of alternating oligopurine/oligopyrimidine sequences. The present work is devoted to the optimization of switch triple helix formation at the 5'CpG3' and 5'GpC3' junction steps by combination of base triplets in Hoogsteen and in reverse Hoogsteen configurations. Rational design by molecular mechanics was first carried out to study the geometrical constraints at different junction steps and to propose a "switch code" which would optimize the interactions at junctions. These predictions were further checked and validated experimentally by gel retardation and DNase I footprinting assays. It was shown that the choice of an appropriate linker nucleotide in the switching third strand plays an important role in the interaction between oligonucleotides and alternating oligopurine/oligopyrimidine target sequences at different junctions: (i) the addition of a cytosine at the junction level in the oligonucleotide optimizes the crossover at the 5'CpG3' junction, whereas (ii) the best crossover at the 5'GpC3' junction step is achieved without any additional nucleotide. These results provide a useful guideline to extend double-stranded DNA sequence recognition by switch triple helix formation.

  17. Development and implementation of optimal filtering in a Virtex FPGA for the upgrade of the ATLAS LAr calorimeter readout

    NASA Astrophysics Data System (ADS)

    Stärz, S.

    2012-12-01

    In the context of upgraded read-out systems for the Liquid-Argon Calorimeters of the ATLAS detector, modified front-end, back-end and trigger electronics are foreseen for operation in the high-luminosity phase of the LHC. Accuracy and efficiency of the energy measurement and reliability of pile-up suppression are essential when processing the detector raw data in real time. Several digital filter algorithms are investigated for their performance in extracting energies from incoming detector signals and for the needs of the future trigger system. The implementation of fast, resource-economizing, parameter-driven filter algorithms in a modern Virtex FPGA is presented.
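    Optimal filtering for calorimeter pulses is commonly a weighted sum of ADC samples, with weights built from the known pulse shape and the noise autocorrelation so that the amplitude estimate is unbiased and minimum-variance. A generic sketch with assumed shape and noise values (not ATLAS's actual constants):

```python
import numpy as np

# Amplitude estimate A ~ w @ samples with w proportional to R^{-1} g,
# normalized so that w @ g = 1 (unbiased). Pulse shape g and noise
# autocorrelation R below are illustrative assumptions.
g = np.array([0.0, 0.35, 1.0, 0.75, 0.30])                 # normalized pulse shape
R = np.eye(5) + 0.3 * (np.eye(5, k=1) + np.eye(5, k=-1))   # noise autocorrelation
Rinv_g = np.linalg.solve(R, g)
w = Rinv_g / (g @ Rinv_g)                                  # optimal filter weights

amp = w @ (5.0 * g)                                        # noiseless pulse, A = 5
```

On an FPGA this reduces to a handful of multiply-accumulate operations per channel per bunch crossing, which is why the coefficient form is attractive for real-time use.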

  18. Development of an optimal automatic control law and filter algorithm for steep glideslope capture and glideslope tracking

    NASA Technical Reports Server (NTRS)

    Halyo, N.

    1976-01-01

    A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant-gain Kalman filter to process guidance information from the microwave landing system and acceleration data from body-mounted accelerometers. The filter outputs navigation data and wind velocity estimates which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.
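    The constant-gain Kalman filter structure described here can be sketched by solving the discrete algebraic Riccati equation once and running the filter with the resulting fixed gain; the 1-D altitude/sink-rate model and noise levels below are assumptions for illustration, not the paper's aircraft model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Constant-gain (steady-state) Kalman filter: compute the Riccati solution
# once offline, then run predict/update with a fixed gain K.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])         # state: [altitude, sink rate]
H = np.array([[1.0, 0.0]])                    # MLS-like altitude measurement
Q = np.diag([1e-4, 1e-3])                     # process noise (assumed)
R = np.array([[0.25]])                        # measurement noise (assumed)

P = solve_discrete_are(F.T, H.T, Q, R)        # steady-state predicted covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # constant Kalman gain (2x1)

x = np.array([10.0, 0.0])                     # initial estimate
for z in (9.9, 9.7, 9.6, 9.4, 9.2):           # fake descending altitude readings
    x = F @ x                                 # predict
    x = x + K[:, 0] * (z - (H @ x)[0])        # update with the fixed gain
```

Freezing the gain trades a little transient optimality for a filter that costs the same every cycle, which suited the flight-computer constraints of the era.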

  19. Filter construction and design.

    PubMed

    Jornitz, Maik W

    2006-01-01

    Sterilizing filters and pre-filters are manufactured in different formats and designs. The criteria for the specific designs are set by the application and the specifications of the filter user. The optimal filter unit, or even system, requires evaluation of parameters such as flow rate, throughput, unspecific adsorption, steam sterilizability and chemical compatibility. These parameters are commonly tested within a qualification phase, which ensures that an optimal filter design and combination finds its use. If such design investigations are neglected, the consequences can be costly at process scale. PMID:16570863

  20. Study on Optimization Method of Quantization Step and the Image Quality Evaluation for Medical Ultrasonic Echo Image Compression by Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Khieovongphachanh, Vimontha; Hamamoto, Kazuhiko; Kondo, Shozo

    In this paper, we investigate an optimized quantization method in the JPEG2000 application for medical ultrasonic echo images. JPEG2000 has been issued as the new standard image compression technique, which is based on the wavelet transform (WT), and JPEG2000 is incorporated into DICOM (Digital Imaging and Communications in Medicine). There are two quantization methods. One is scalar derived quantization (SDQ), which is usually used in standard JPEG2000. The other is scalar expounded quantization (SEQ), which can be optimized by the user. This paper therefore optimizes the quantization step, which is determined by a genetic algorithm (GA). The results are then compared with SDQ and with SEQ determined by an arithmetic-average method. The purpose of this paper is to improve image quality and compression ratio for medical ultrasonic echo images. Image quality is evaluated objectively by PSNR (peak signal-to-noise ratio) and subjectively by ultrasonographers from Tokai University Hospital and Tokai University Hachioji Hospital. The results show that SEQ determined by GA provides better image quality than SDQ and than SEQ determined by the arithmetic-average method. Additionally, the three optimization methods of the quantization step are applied to a thin-wire target image for analysis of the point spread function.
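    The knob being optimized — the scalar quantization step applied to wavelet coefficients — trades PSNR against compression. A toy sketch with a hand-rolled one-level Haar transform and a JPEG2000-style dead-zone quantizer (the paper tunes the steps with a GA; here we just sweep them on a synthetic image):

```python
import numpy as np

def haar2d(a):
    """One-level 2-D Haar transform (pairwise averages/differences)."""
    h, g = (a[:, ::2] + a[:, 1::2]) / 2, (a[:, ::2] - a[:, 1::2]) / 2
    a = np.hstack([h, g])
    h, g = (a[::2] + a[1::2]) / 2, (a[::2] - a[1::2]) / 2
    return np.vstack([h, g])

def ihaar2d(c):
    """Exact inverse of haar2d (square input assumed)."""
    n = c.shape[0] // 2
    a = np.empty_like(c)
    a[::2], a[1::2] = c[:n] + c[n:], c[:n] - c[n:]
    out = np.empty_like(a)
    out[:, ::2], out[:, 1::2] = a[:, :n] + a[:, n:], a[:, :n] - a[:, n:]
    return out

rng = np.random.default_rng(0)
img = np.clip(64 + 32 * np.cumsum(rng.standard_normal((64, 64)), axis=1), 0, 255)

c = haar2d(img)
psnrs = []
for step in (1.0, 4.0, 16.0):                 # candidate quantization steps
    q = np.sign(c) * np.floor(np.abs(c) / step) * step   # dead-zone quantizer
    mse = np.mean((img - ihaar2d(q)) ** 2)
    psnrs.append(10 * np.log10(255.0 ** 2 / max(mse, 1e-12)))
```

A GA over per-subband steps, as in the paper, would use such a PSNR (plus a rate term) as its fitness.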

  1. Optimal cut-off points of fasting plasma glucose for two-step strategy in estimating prevalence and screening undiagnosed diabetes and pre-diabetes in Harbin, China.

    PubMed

    Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition.

  3. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585

  4. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    SciTech Connect

    Kao, Jim . E-mail: kao@lanl.gov; Flicker, Dawn; Ide, Kayo; Ghil, Michael

    2006-05-20

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723.]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state-variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model, and lies within a small range, of less than 2%, of the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
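    The augmented-state trick described here is easy to show on a toy system: append an unknown parameter to the state vector, give it noise-only dynamics, and let the EKF update state and parameter jointly from a single measurement stream. The scalar model below is an illustration, not the shock-wave code:

```python
import numpy as np

# Truth: x_{k+1} = a*x_k + 1 with unknown a; we observe x with noise and
# estimate the augmented state s = [x, a] with an EKF.
rng = np.random.default_rng(1)
a_true = 0.95
x, xs = 0.0, []
for _ in range(200):
    x = a_true * x + 1.0                      # known unit forcing
    xs.append(x)
zs = np.array(xs) + 0.1 * rng.standard_normal(200)

s = np.array([0.0, 0.8])                      # initial guess; a starts off wrong
P = np.diag([1.0, 0.1])
Q = np.diag([1e-6, 1e-8])                     # parameter evolves only via noise
R = 0.1 ** 2
for z in zs:
    F = np.array([[s[1], s[0]], [0.0, 1.0]])  # Jacobian of f([x,a]) = [a*x+1, a]
    s = np.array([s[1] * s[0] + 1.0, s[1]])   # predict
    P = F @ P @ F.T + Q
    S = P[0, 0] + R                           # innovation variance (H = [1, 0])
    K = P[:, 0] / S                           # Kalman gain, shape (2,)
    s = s + K * (z - s[0])                    # joint state/parameter update
    P = P - np.outer(K, P[0, :])              # (I - K H) P
```

The cross-covariance built up by the Jacobian term F[0,1] = x is what lets a single measurement of x correct the parameter estimate.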

  5. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

    An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which let the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string-coding step, each string (chromosome) is composed of a structuring-element coding chain concatenated with a filter-sequence coding chain. In the decoding step, each string is divided into 3 chains, which are then decoded respectively into one structuring element with a size no larger than 5 by 5 and two concatenated morphological filter operators. The fitness function in the GA is based on the mean-square-error (MSE) criterion. In the string-selection step, a stochastic tournament procedure is used in place of the simple roulette-wheel program in order to accelerate convergence. The final convergence of our algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences, the obtained MSE values are smaller than those using the corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements take approximately the same shapes and orientations as those of the image textons.
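    The adaptive-rate mechanism — crossover and mutation probabilities that fall toward zero for above-average individuals and stay high otherwise (Srinivas-Patnaik style) — can be sketched on a stand-in fitness function; one-max replaces the paper's MSE-over-filtered-textures objective:

```python
import numpy as np

# Adaptive GA on bit strings: rates scale with (f_max - f') / (f_max - f_avg)
# for above-average pairs, and stay maximal below average. Tournament
# selection and elitism follow the abstract; the fitness is a stand-in.
rng = np.random.default_rng(42)
POP, L, GENS = 40, 32, 80
pop = rng.integers(0, 2, size=(POP, L))

def fitness(p):
    return p.sum(axis=1)                      # one-max: count of 1 bits

for _ in range(GENS):
    f = fitness(pop)
    fmax, favg = f.max(), f.mean()
    spread = max(fmax - favg, 1e-9)
    elite = pop[f.argmax()].copy()            # elitism: best survives unchanged
    a, b = rng.integers(0, POP, (2, POP))     # stochastic tournament selection
    sel = np.where(f[a] >= f[b], a, b)
    new = pop[sel].copy()
    for k in range(0, POP, 2):
        fp = max(f[sel[k]], f[sel[k + 1]])    # fitter parent of the pair
        pc = (fmax - fp) / spread if fp > favg else 1.0
        pm = 0.05 * (fmax - fp) / spread if fp > favg else 0.05
        if rng.random() < pc:                 # one-point crossover
            cut = int(rng.integers(1, L))
            tmp = new[k, cut:].copy()
            new[k, cut:] = new[k + 1, cut:]
            new[k + 1, cut:] = tmp
        new[k:k + 2] ^= rng.random((2, L)) < pm   # adaptive bit-flip mutation
    new[0] = elite
    pop = new

best = fitness(pop).max()
```

For the paper's problem, each chromosome would instead decode into a structuring element plus two filter operators, and the fitness would be the negative MSE of the filtered texture image.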

  6. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    SciTech Connect

    Omelyan, Igor E-mail: omelyan@icmp.lviv.ua; Kovalenko, Andriy

    2013-12-28

We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE is to develop a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least-squares fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics

  7. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    NASA Astrophysics Data System (ADS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-12-01

We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE is to develop a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least-squares fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
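The least-squares extrapolation at the heart of ASFE can be sketched as follows, omitting the non-Eckart rotational transformation: the current configuration is expressed as a linear combination of stored outer-step configurations, and the same coefficients are applied to the stored forces. The linear toy force field and all sizes are assumptions:

```python
import numpy as np

def extrapolate_force(coords_now, coord_hist, force_hist, reg=1e-8):
    """Least-squares force extrapolation sketch (no rotational transform).

    Fit coefficients c minimizing ||sum_i c_i X_i - x_now|| over the stored
    configurations X_i, then predict F ~ sum_i c_i F_i with the same c.
    """
    # Stack each stored configuration as a column vector.
    A = np.stack([c.ravel() for c in coord_hist], axis=1)   # (3N, m)
    b = coords_now.ravel()
    m = A.shape[1]
    # Regularized normal equations for the combination coefficients.
    coef = np.linalg.solve(A.T @ A + reg * np.eye(m), A.T @ b)
    F = np.stack([f.ravel() for f in force_hist], axis=1)
    return (F @ coef).reshape(coords_now.shape)

# Toy check: forces depend linearly on coordinates (F = -k x), so the
# extrapolation is essentially exact for a configuration in the span
# of the stored ones.
rng = np.random.default_rng(2)
k = 0.7
hist = [rng.normal(size=(5, 3)) for _ in range(6)]     # 5 atoms, 6 snapshots
forces = [-k * x for x in hist]
x_new = 0.5 * hist[0] + 0.3 * hist[1]
F_pred = extrapolate_force(x_new, hist, forces)
```

In the actual method the candidate force-coordinate pairs are first rotated into a common frame and the best-matching subset is selected before the fit; the sketch keeps only the core linear-algebra step.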

  8. An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.

    PubMed

    Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir

    2013-01-01

DNA sequence alignment is a cardinal process in computational biology, but it is also computationally expensive when performed on traditional computational platforms such as CPUs. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands as the best candidate due to its performance per dollar spent and performance per watt. These two advantages make the FPGA the most appropriate choice for realizing the aim of personal genomics. Previous implementations of DNA sequence alignment did not take into consideration the price of the device on which the optimization was performed. This paper presents optimizations over a previous FPGA implementation that improve both the overall speed-up achieved and the price incurred by the platform being optimized. The optimizations are: (1) the array of processing elements is made to run on a change in input value and not on the clock, eliminating the need for tight clock synchronization; (2) the implementation is unconstrained by the size of the sequences to be aligned; (3) the waiting time required for the sequences to load to the FPGA is reduced to the minimum possible; and (4) an efficient method is devised to store the output matrix that makes it possible to save the diagonal elements to be used in the next pass, in parallel with the computation of the output matrix. Implemented on a Spartan3 FPGA, this implementation achieved a 20 times performance improvement in terms of CUPS over a GPP implementation.

  9. An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.

    PubMed

    Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir

    2013-01-01

DNA sequence alignment is a cardinal process in computational biology, but it is also computationally expensive when performed on traditional computational platforms such as CPUs. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands as the best candidate due to its performance per dollar spent and performance per watt. These two advantages make the FPGA the most appropriate choice for realizing the aim of personal genomics. Previous implementations of DNA sequence alignment did not take into consideration the price of the device on which the optimization was performed. This paper presents optimizations over a previous FPGA implementation that improve both the overall speed-up achieved and the price incurred by the platform being optimized. The optimizations are: (1) the array of processing elements is made to run on a change in input value and not on the clock, eliminating the need for tight clock synchronization; (2) the implementation is unconstrained by the size of the sequences to be aligned; (3) the waiting time required for the sequences to load to the FPGA is reduced to the minimum possible; and (4) an efficient method is devised to store the output matrix that makes it possible to save the diagonal elements to be used in the next pass, in parallel with the computation of the output matrix. Implemented on a Spartan3 FPGA, this implementation achieved a 20 times performance improvement in terms of CUPS over a GPP implementation. PMID:24110283
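A processing-element array computes the Smith-Waterman dynamic-programming matrix one anti-diagonal at a time, because every cell on an anti-diagonal depends only on the two previous anti-diagonals. A software sketch of that wavefront order (scoring only, with assumed match/mismatch/gap values; the clocking, loading, and diagonal-storage optimizations above are hardware-level):

```python
import numpy as np

def sw_wavefront(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score, filled anti-diagonal by
    anti-diagonal, the order in which a systolic FPGA PE array produces
    the DP matrix."""
    n, m = len(seq_a), len(seq_b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    # Anti-diagonal d holds cells (i, j) with i + j = d; all of them depend
    # only on diagonals d-1 and d-2, so one PE per cell could update them
    # in parallel each cycle.
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # the PE's stored diagonal
                          H[i - 1, j] + gap,
                          H[i, j - 1] + gap)
    return int(H.max())

score = sw_wavefront("ACACACTA", "AGCACACA")
```

Saving the previous diagonals, as optimization (4) describes, is exactly what lets a new pass resume without recomputing the matrix.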

  10. Orthogonal array design for the optimization of ionic liquid-based dispersive liquid-liquid microextraction of benzophenone-type UV filters.

    PubMed

    Ye, Lei; Liu, Juanjuan; Yang, Xin; Peng, Yan; Xu, Li

    2011-03-01

In the present study, dispersive liquid-liquid microextraction (DLLME) using an ionic liquid (IL) as the extractant was successfully developed to extract four benzophenone-type UV filters from different water matrices. An orthogonal array experimental design (OAD) based on five factors and four levels (L16(4^5)) was employed to optimize the IL-dispersive liquid-liquid microextraction procedure. The five factors were the pH of the sample solution, the volumes of IL and methanol added, the extraction time, and the amount of salt added. The optimal extraction condition was as follows: the sample solution was at a pH of 2.63 in the presence of 60 mg/mL sodium chloride; 30 μL IL and 15 μL methanol were used as extractant and disperser solvent, respectively; extraction was achieved by vortexing for 4 min. Using high-performance liquid chromatography-UV analysis, the limits of detection of the target analytes ranged between 1.9 and 6.4 ng/mL. The linear ranges were between 10 or 20 ng/mL and 1000 ng/mL. This procedure afforded a convenient, fast and cost-saving operation with high extraction efficiency for the model analytes. Spiked waters from two rivers and one lake were examined by the developed method. For the swimming pool water, the standard addition method was employed to determine the actual concentrations of the UV filters. PMID:21290603
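The main-effects analysis behind an orthogonal-array optimization can be sketched with the smaller standard L9(3^4) array (the L16(4^5) analysis is identical in form). The additive "recovery" response and the factor-effect numbers below are made up for the illustration:

```python
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array, levels coded 0..2: every pair
# of columns contains each level combination exactly once.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

def best_levels(design, response):
    """Main-effects analysis: for each factor, average the response over
    the runs at each level and keep the level with the highest mean."""
    picks = []
    for col in design.T:
        level_means = [response[col == lv].mean() for lv in range(3)]
        picks.append(int(np.argmax(level_means)))
    return picks

# Synthetic additive response with known optimal levels [2, 0, 1, 2].
true_best = [2, 0, 1, 2]
effect = np.array([[0.0, 0.1, 0.4], [0.3, 0.1, 0.0],
                   [0.0, 0.5, 0.2], [0.1, 0.2, 0.6]])
response = np.array([sum(effect[f, lv] for f, lv in enumerate(run))
                     for run in L9])
picks = best_levels(L9, response)
```

Because the array is balanced, each factor's level means differ only by that factor's own effects, so nine runs suffice to rank all 3^4 = 81 level combinations of an additive response.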

  11. SU-E-T-23: A Novel Two-Step Optimization Scheme for Tandem and Ovoid (T and O) HDR Brachytherapy Treatment for Locally Advanced Cervical Cancer

    SciTech Connect

    Sharma, M; Todor, D; Fields, E

    2014-06-01

Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering the high-risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, then approved and delivered. For the second step, each case was re-planned by adding a new structure, created from the 100% prescription isodose line of the manually optimized plan, to the existing physician-delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for the OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc values for all three OARs while preserving good D90 values for the HR-CTV. On plans with "excellent" CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with "good" coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning-time increase, but with the potential of dramatic and systematic reductions of D2cc for the OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.
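The dose metrics quoted above can be computed from a structure's voxel doses: D90 is the minimum dose received by the hottest 90% of the volume, D2cc the minimum dose to the hottest 2 cc. A minimal sketch, assuming a uniform voxel volume and a toy dose array:

```python
import numpy as np

def dvh_metric(dose, voxel_volume_cc, volume_cc=None, fraction=None):
    """Dose-volume metric from a flat array of voxel doses.

    fraction=0.9  -> D90 (min dose to the hottest 90% of the structure)
    volume_cc=2.0 -> D2cc (min dose to the hottest 2 cc)
    """
    d = np.sort(dose)[::-1]                       # hottest voxels first
    if fraction is not None:
        k = int(np.ceil(fraction * len(d)))
    else:
        k = int(np.ceil(volume_cc / voxel_volume_cc))
    return d[k - 1]

doses = np.linspace(1.0, 10.0, 100)              # toy 100-voxel structure
d90 = dvh_metric(doses, 0.1, fraction=0.9)       # hottest 90% of voxels
d2cc = dvh_metric(doses, 0.1, volume_cc=2.0)     # 2 cc = 20 voxels here
```

Lowering an OAR's D2cc while holding the target's D90 is exactly the trade the two-step optimization above quantifies.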

  12. Unconditionally energy stable time stepping scheme for Cahn-Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    NASA Astrophysics Data System (ADS)

    Tavakoli, Rouhollah

    2016-01-01

An unconditionally energy stable time stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
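For the scalar Cahn-Hilliard limit of this problem class, an Eyre-type linearly stabilized convex-splitting step can be sketched in Fourier space on a 1-D periodic domain. The stabilization constant s, the interface width, and all numbers are assumptions, and the paper's Schur-complement treatment of the multi-component system is omitted:

```python
import numpy as np

def ch_eyre_steps(u0, n_steps, dt, eps=0.05, s=2.0, L=2 * np.pi):
    """Linearly stabilized convex-splitting (Eyre-type) steps for the 1-D
    Cahn-Hilliard equation u_t = (u^3 - u - eps^2 u_xx)_xx, periodic BCs.

    Expansive part u^3 - (1+s)u is explicit; the stabilizing s*Lap(u) and
    the fourth-order term are implicit, so each step is one FFT solve.
    """
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    k2, k4 = k**2, k**4
    dx = L / n
    u = u0.copy()
    energies = []
    for _ in range(n_steps):
        # free energy  E = sum( (u^2-1)^2/4 + eps^2/2 |u_x|^2 ) dx
        ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
        energies.append(np.sum((u**2 - 1)**2 / 4 + eps**2 / 2 * ux**2) * dx)
        g = np.fft.fft(u**3 - (1 + s) * u)          # explicit part
        u_hat = (np.fft.fft(u) - dt * k2 * g) / (1 + dt * s * k2 + dt * eps**2 * k4)
        u = np.fft.ifft(u_hat).real
    return u, np.array(energies)

rng = np.random.default_rng(3)
u0 = 0.05 * rng.standard_normal(128)                # small random initial data
u, E = ch_eyre_steps(u0, n_steps=200, dt=0.1)
```

The k = 0 mode is untouched by the update, so the mean composition (mass) is conserved exactly, and the free energy decays as the noise coarsens into phase-separated domains.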

  13. Design of an optimal wave-vector filter for enhancing the resolution of reconstructed source field by near-field acoustical holography (NAH)

    PubMed

    Kim; Ih

    2000-06-01

    In near-field acoustical holography using the boundary element method, the reconstructed field often diverges due to the presence of small measurement errors. In order to handle this instability in the inverse problem, the reconstruction process should include some form of regularization for enhancing the resolution of source images. The usual method of regularization has been the truncation of wave vectors associated with small singular values, although the determination of an optimal truncation order is difficult. In this article, an iterative inverse solution technique is suggested in which the mean-square error prediction is used. A statistical estimation of the minimum mean-square error between measured pressures and the model solution is required for yielding the optimal number of iterations. The continuous curve of an optimal wave-vector filter is designed, for suppressing the high-order modes that can produce large reconstruction errors. Experimental results from a baffled radiator reveal that the reconstruction errors can be reduced by this form of regularization, by at least 48% compared to those without any regularization. In comparison to results using the optimal truncation method of regularization, the new scheme is shown to give further reductions of truncation error of between 7% and 39%, for the example in this article. PMID:10875374
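The link between iteration count and a continuous wave-vector filter can be made explicit through filter factors: after ν Landweber iterations, each singular component of the inverse is damped by 1 − (1 − ωσ²)^ν, a smooth roll-off rather than the hard 0/1 cut of truncated SVD. A sketch with an assumed ill-conditioned kernel standing in for the BEM transfer matrix (Landweber is used here as a generic stand-in for the article's iterative scheme):

```python
import numpy as np

def landweber_filter_factors(sigma, n_iter, omega=None):
    """Filter factors of Landweber iteration: component i of the inverse
    solution is multiplied by f_i = 1 - (1 - omega*s_i^2)^n_iter."""
    if omega is None:
        omega = 1.0 / sigma.max()**2          # step size ensuring convergence
    return 1.0 - (1.0 - omega * sigma**2)**n_iter

# Hypothetical smoothing kernel (stands in for the transfer matrix between
# source field and measured pressures); it is symmetric positive definite.
rng = np.random.default_rng(4)
A = np.array([[np.exp(-abs(i - j) / 3.0) for j in range(40)] for i in range(40)])
U, s, Vt = np.linalg.svd(A)
x_true = np.sin(np.linspace(0, 3 * np.pi, 40))
p = A @ x_true + rng.normal(0, 1e-3, 40)      # noisy "measured" data

def reconstruct(n_iter):
    f = landweber_filter_factors(s, n_iter)
    return Vt.T @ (f * (U.T @ p) / s)

# Sweep the iteration count; in the article the best count is chosen by a
# statistical mean-square-error prediction rather than the true error.
errs = {n: np.linalg.norm(reconstruct(n) - x_true)
        for n in (1, 10, 100, 1000, 10**5)}
best_n = min(errs, key=errs.get)
```

Too few iterations over-smooth, too many amplify the measurement noise; the optimal count sits at the minimum of that trade-off, which is what the mean-square-error estimate locates without access to the true source.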

  14. Edge-Aware BMA Filters.

    PubMed

    Guang Deng

    2016-01-01

There has been continuous research in edge-aware filters, which have found many applications in computer vision and image processing. In this paper, we propose a principled approach for the development of edge-aware filters. The proposed approach is based on two well-established principles: 1) optimal parameter estimation and 2) Bayesian model averaging (BMA). Using this approach, we formulate the problem of filtering a pixel in a local pixel patch as an optimal estimation problem. Since a pixel belongs to multiple local patches, there are multiple estimates of the same pixel. We combine these estimates into a final estimate using BMA. We demonstrate the versatility of this approach by developing a family of BMA filters based on different settings of cost functions and log-likelihood and log-prior functions. We also present a new interpretation of the guided filter and develop a BMA guided filter which includes the guided filter as a special case. We show that BMA filters can produce smoothing results similar to those of the state-of-the-art edge-aware filters. Two BMA filters are computationally as efficient as the guided filter, which is one of the fastest edge-aware filters. We also demonstrate that the BMA guided filter is better than the guided filter in preserving sharp edges. A new feature of the BMA guided filter is that the filtered image is similar to that produced by a clustering process.
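The patch-averaging idea can be sketched in 1-D: each sample belongs to several patches, each patch produces an estimate (its mean, the optimal estimate under a locally constant model), and the estimates are combined with weights reflecting each patch's model evidence, so patches straddling an edge get little say. The Gaussian likelihood and window size are assumptions, not the paper's exact filters:

```python
import numpy as np

def bma_filter_1d(x, w=5, sigma=0.1):
    """Edge-aware smoothing in the spirit of BMA filtering (1-D sketch).

    Each length-w patch contributes its mean to every sample it covers,
    weighted by a crude evidence term for the locally-constant model:
    patches that fit poorly (e.g. across an edge) are down-weighted.
    """
    n = x.size
    num = np.zeros(n)
    den = np.zeros(n)
    for start in range(n - w + 1):
        patch = x[start:start + w]
        mu = patch.mean()
        weight = np.exp(-np.sum((patch - mu)**2) / (2 * sigma**2 * w))
        num[start:start + w] += weight * mu
        den[start:start + w] += weight
    return num / den

# Step edge plus noise: the filter should smooth the flats, keep the jump.
rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(30), np.ones(30)])
noisy = clean + 0.05 * rng.standard_normal(60)
out = bma_filter_1d(noisy, w=5, sigma=0.05)
```

Around the edge, only the patches lying entirely on one side carry weight, which is why the averaged result does not blur across the discontinuity.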

  15. An IIR median hybrid filter

    NASA Technical Reports Server (NTRS)

    Bauer, Peter H.; Sartori, Michael A.; Bryden, Timothy M.

    1992-01-01

A new class of nonlinear filters, the class of multidirectional infinite impulse response median hybrid (MIMH) filters, is presented and analyzed. The input signal is processed twice using a linear shift-invariant infinite impulse response filtering module: once with normal causality and a second time with inverted causality. The final output of the MIMH filter is the median of the two directional outputs and the original input signal. Thus, the MIMH filter is a concatenation of linear filtering and nonlinear filtering (a median filtering module). Because of this unique scheme, the MIMH filter possesses many desirable properties which are both proven and analyzed (including impulse removal, step preservation, and noise suppression). A comparison to other existing median type filters is also provided.
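A minimal 1-D sketch of the scheme: one first-order IIR module applied with normal and inverted causality (written here as one-step predictors, so an isolated impulse never reaches both directional outputs at its own position), followed by the sample-wise median with the input. The filter coefficient is an assumption:

```python
import numpy as np

def iir_predictor(x, a=0.7):
    """One-step first-order IIR predictor: the estimate at n uses only
    samples before n (initialized with the first sample)."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (1 - a) * x[n - 1]
    return y

def mimh_filter(x, a=0.7):
    """Median hybrid of the forward output, backward output, and input."""
    x = np.asarray(x, dtype=float)
    fwd = iir_predictor(x, a)                 # normal causality
    bwd = iir_predictor(x[::-1], a)[::-1]     # inverted causality
    return np.median(np.stack([fwd, bwd, x]), axis=0)

# The two properties analyzed in the paper, on toy signals:
step = np.concatenate([np.zeros(20), np.ones(20)])
impulse = np.zeros(41)
impulse[20] = 5.0
out_step = mimh_filter(step)        # step preserved exactly
out_imp = mimh_filter(impulse)      # isolated impulse removed
```

On the step, one directional output is always on the correct level, so the median reproduces the edge without smearing; at the impulse, both predictors see only the flat neighborhood, so the median rejects the spike.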

  16. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method in which a restart strategy is also used. Moreover, the method we developed achieves n-step quadratic convergence and exploits both function value and gradient information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
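For reference, here is the standard scheme the paper builds on: PRP+ nonlinear conjugate gradient with restarts and an Armijo backtracking line search. The paper's specific formula incorporating function values is not reproduced; the test problem is an assumed convex quadratic:

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with the PRP beta (clipped at 0, i.e. PRP+),
    a periodic n-step restart, and Armijo backtracking."""
    x = x0.copy()
    g = grad(x)
    d = -g
    n = x.size
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                  # not a descent direction: restart
            d = -g
        t, c = 1.0, 1e-4                # Armijo backtracking line search
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+, restart-friendly
        if (k + 1) % n == 0:
            beta = 0.0                  # periodic restart every n steps
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# convex quadratic test problem (assumed, for illustration)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = prp_cg(f, grad, np.zeros(2))
```

Because beta collapses to 0 whenever the new gradient is nearly orthogonal to the change in gradient, PRP has a built-in self-restarting behavior, which is why restart strategies pair naturally with it.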

  17. Security: Step by Step

    ERIC Educational Resources Information Center

    Svetcov, Eric

    2005-01-01

    This article provides a list of the essential steps to keeping a school's or district's network safe and sound. It describes how to establish a security architecture and approach that will continually evolve as the threat environment changes over time. The article discusses the methodology for implementing this approach and then discusses the…

  18. Systematic Biological Filter Design with a Desired I/O Filtering Response Based on Promoter-RBS Libraries.

    PubMed

    Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen

    2015-01-01

In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, a biological filter system serves as a powerful detector or sensor that senses different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method for robust biological filters is summarized in three steps. First, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via a nonlinear parameter estimation method. Then, the topology of the synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of the biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response.
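A three-module cascade of this kind can be sketched at steady state with Hill functions: an activating sensor module followed by two repression stages, which restore the sign and sharpen the transition into a threshold (high-pass) response. The parameter values are illustrative placeholders, not fitted promoter-RBS characteristics:

```python
def hill_act(u, K, n):
    """Activating Hill function: normalized promoter activity vs input."""
    return u**n / (K**n + u**n)

def hill_rep(u, K, n):
    """Repressing Hill function."""
    return K**n / (K**n + u**n)

def biofilter(u, params):
    """Steady-state I/O of a three-module cascade:
    activation -> repression -> repression."""
    (K1, n1), (K2, n2), (K3, n3) = params
    x1 = hill_act(u, K1, n1)        # module 1: sensor
    x2 = hill_rep(x1, K2, n2)       # module 2
    return hill_rep(x2, K3, n3)     # module 3: reporter

params = [(1.0, 2.0), (0.5, 2.0), (0.5, 2.0)]
lo = biofilter(0.01, params)    # input well below threshold -> low output
hi = biofilter(100.0, params)   # input well above threshold -> high output
```

Swapping a module's promoter-RBS pair changes its (K, n) values, which is how searching the component libraries tunes the threshold and steepness of the overall response.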

  19. Development and optimization of an analytical method for the determination of UV filters in suntan lotions based on microemulsion electrokinetic chromatography.

    PubMed

    Klampfl, Christian W; Leitner, Tanja; Hilder, Emily F

    2002-08-01

Microemulsion electrokinetic chromatography (MEEKC) has been applied to the separation of some UV filters (Eusolex 4360, Eusolex 6300, Eusolex OCR, Eusolex 2292, Eusolex 6007, Eusolex 9020, Eusolex HMS, Eusolex OS, and Eusolex 232) commonly found in suntan lotions. The composition of the microemulsion employed was optimized with respect to the best possible separation of the selected analytes using artificial neural networks (ANNs). Two parameters namely the composition of the mixed surfactant system comprising the anionic sodium dodecyl sulfate (SDS) and neutral Brij 35 and the amount of organic modifier (2-propanol) present in the aqueous phase of the microemulsion were modeled. Using an optimized MEEKC buffer consisting of 2.25 g SDS, 0.75 g Brij 35, 6.6 g 1-butanol, 0.8 g n-octane, 17.5 g 2-propanol, and 72.1 g of 10 mM borate buffer (pH 9.2), eight target analytes could be separated in under 25 min employing a diode-array detector to segregate the overlapping signals obtained for Eusolex 9020 and Eusolex HMS. Detection limits from 0.8 to 6.0 μg/mL were obtained and the calibration plots were linear over at least one order of magnitude. The optimized method could be applied to the determination of Eusolex 6300 and Eusolex 9020 in a commercial suntan lotion.

  20. Computation of maximum gust loads in nonlinear aircraft using a new method based on the matched filter approach and numerical optimization

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Heeg, Jennifer; Perry, Boyd, III

    1990-01-01

Time-correlated gust loads are time histories of two or more load quantities due to the same disturbance time history. Time correlation provides knowledge of the value (magnitude and sign) of one load when another is maximum. At least two analysis methods have been identified that are capable of computing maximized time-correlated gust loads for linear aircraft. Both methods solve for the unit-energy gust profile (gust velocity as a function of time) that produces the maximum load at a given location on a linear airplane. Time-correlated gust loads are obtained by re-applying this gust profile to the airplane and computing multiple simultaneous load responses. Such time histories are physically realizable and may be applied to aircraft structures. Within the past several years there has been much interest in obtaining a practical analysis method which is capable of solving the analogous problem for nonlinear aircraft. Such an analysis method has been the focus of an international committee of gust loads specialists formed by the U.S. Federal Aviation Administration and was the topic of a panel discussion at the Gust and Buffet Loads session at the 1989 SDM Conference in Mobile, Alabama. The kinds of nonlinearities common on modern transport aircraft are indicated. The Statistical Discrete Gust method is capable of being, but so far has not been, applied to nonlinear aircraft. To make the method practical for nonlinear applications, a search procedure is essential. Another method is based on Matched Filter Theory and, in its current form, is applicable to linear systems only. The purpose here is to present the status of an attempt to extend the matched filter approach to nonlinear systems. The extension uses Matched Filter Theory as a starting point and then employs a constrained optimization algorithm to attack the nonlinear problem.
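The linear starting point, Matched Filter Theory, says that among unit-energy excitations, the one maximizing a linear system's output at time T is the time-reversed impulse response (a direct consequence of the Cauchy-Schwarz inequality). A discrete-time sketch with an assumed impulse response:

```python
import numpy as np

def matched_excitation(h, T):
    """Unit-energy input of length T+1 maximizing the output at time T:
    the time-reversed, normalized impulse response h(T-n)."""
    seg = h[:T + 1][::-1]                  # seg[n] = h[T-n]
    return seg / np.linalg.norm(seg)

def output_at_T(u, h, T):
    """Convolution output y[T] = sum_n u[n] * h[T-n]."""
    return sum(u[n] * h[T - n] for n in range(T + 1))

# assumed impulse response: a damped oscillation
n = np.arange(64)
h = np.exp(-0.05 * n) * np.sin(0.3 * n)
T = 40
u_star = matched_excitation(h, T)
y_star = output_at_T(u_star, h, T)         # equals ||h[:T+1]||, the optimum

# any other unit-energy input yields a smaller output at T
u_rand = np.random.default_rng(8).standard_normal(T + 1)
u_rand /= np.linalg.norm(u_rand)
```

For a nonlinear aircraft no such closed-form worst-case gust exists, which is why the extension described above replaces this analytic answer with a constrained optimization over gust profiles, seeded by the linear matched-filter solution.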

  1. Off-line determination of the optimal number of iterations of the robust anisotropic diffusion filter applied to denoising of brain MR images.

    PubMed

    Ferrari, Ricardo J

    2013-02-01

    Although anisotropic diffusion filters have been used extensively and with great success in medical image denoising, one limitation of this iterative approach, when used on fully automatic medical image processing schemes, is that the quality of the resulting denoised image is highly dependent on the number of iterations of the algorithm. Using many iterations may excessively blur the edges of the anatomical structures, while a few may not be enough to remove the undesirable noise. In this work, a mathematical model is proposed to automatically determine the number of iterations of the robust anisotropic diffusion filter applied to the problem of denoising three common human brain magnetic resonance (MR) images (T1-weighted, T2-weighted and proton density). The model is determined off-line by means of the maximization of the mean structural similarity index, which is used in this work as metric for quantitative assessment of the resulting processed images obtained after each iteration of the algorithm. After determining the model parameters, the optimal number of iterations of the algorithm is easily determined without requiring any extra computation time. The proposed method was tested on 3D synthetic and clinical human brain MR images and the results of qualitative and quantitative evaluation have shown its effectiveness. PMID:23124813
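The off-line selection can be sketched as: run the diffusion, score each iterate against a reference image with SSIM, and keep the maximizing iteration count. Below, Perona-Malik diffusion with an exponential conductance stands in for the robust anisotropic filter, and a simplified single-window SSIM for the local mean SSIM; the synthetic edge image and all constants are assumptions:

```python
import numpy as np

def ssim_global(a, b, L=1.0):
    """Simplified single-window SSIM (the paper uses the local mean SSIM)."""
    c1, c2 = (0.01 * L)**2, (0.03 * L)**2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def perona_malik_step(I, kappa=0.3, dt=0.2):
    """One explicit Perona-Malik iteration with exponential conductance."""
    dN = np.roll(I, 1, 0) - I; dN[0] = 0          # replicated borders
    dS = np.roll(I, -1, 0) - I; dS[-1] = 0
    dW = np.roll(I, 1, 1) - I; dW[:, 0] = 0
    dE = np.roll(I, -1, 1) - I; dE[:, -1] = 0
    g = lambda d: np.exp(-(d / kappa)**2)         # small at strong edges
    return I + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)

# off-line: pick the iteration count that maximizes SSIM vs the clean image
rng = np.random.default_rng(6)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                               # synthetic vertical edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
I, scores = noisy.copy(), []
for _ in range(50):
    I = perona_malik_step(I)
    scores.append(ssim_global(I, clean))
best_iters = int(np.argmax(scores)) + 1
```

The paper fits a model to such optimal counts over training data so that, at run time, the stopping point is obtained without any extra computation.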

  2. Characterization and optimization of 2-step MOVPE growth for single-mode DFB or DBR laser diodes

    NASA Astrophysics Data System (ADS)

    Bugge, F.; Mogilatenko, A.; Zeimer, U.; Brox, O.; Neumann, W.; Erbert, G.; Weyers, M.

    2011-01-01

    We have studied the MOVPE regrowth of AlGaAs over a grating for GaAs-based laser diodes with an internal wavelength stabilisation. Growth temperature and aluminium concentration in the regrown layers considerably affect the oxygen incorporation. Structural characterisation by transmission electron microscopy of the grating after regrowth shows the formation of quaternary InGaAsP regions due to the diffusion of indium atoms from the top InGaP layer and As-P exchange processes during the heating-up procedure. Additionally, the growth over such gratings with different facets leads to self-organisation of the aluminium content in the regrown AlGaAs layer, resulting in an additional AlGaAs grating, which has to be taken into account for the estimation of the coupling coefficient. With optimized growth conditions complete distributed feedback laser structures have been grown for different emission wavelengths. At 1062 nm a very high single-frequency output power of nearly 400 mW with a slope efficiency of 0.95 W/A for a 4 μm ridge-waveguide was obtained.

  3. Steps towards verification and validation of the Fetch code for Level 2 analysis, design, and optimization of aqueous homogeneous reactors

    SciTech Connect

    Nygaard, E. T.; Pain, C. C.; Eaton, M. D.; Gomes, J. L. M. A.; Goddard, A. J. H.; Gorman, G.; Tollit, B.; Buchan, A. G.; Cooling, C. M.; Angelo, P. L.

    2012-07-01

Babcock and Wilcox Technical Services Group (B and W) has identified aqueous homogeneous reactors (AHRs) as a technology well suited to produce the medical isotope molybdenum 99 (Mo-99). AHRs have never been specifically designed or built for this specialized purpose. However, AHRs have a proven history of being safe research reactors. In fact, in 1958, AHRs had 'a longer history of operation than any other type of research reactor using enriched fuel' and had 'experimentally demonstrated to be among the safest of all various type of research reactor now in use [1].' While AHRs have been modeled effectively using simplified 'Level 1' tools, the complex interactions between fluids, neutronics, and solid structures are important (but not necessarily safety significant). These interactions require a 'Level 2' modeling tool. Imperial College London (ICL) has developed such a tool: Finite Element Transient Criticality (FETCH). FETCH couples the radiation transport code EVENT with the computational fluid dynamics code (Fluidity); the result is a code capable of modeling sub-critical, critical, and super-critical solutions in both two and three dimensions. Using FETCH, ICL researchers and B and W engineers have studied many fissioning solution systems, including the Tokaimura criticality accident, the Y12 accident, SILENE, TRACY, and SUPO. These modeling efforts will ultimately be incorporated into FETCH's extensive automated verification and validation (V and V) test suite, expanding FETCH's area of applicability to include all relevant physics associated with AHRs. These efforts parallel B and W's engineering effort to design and optimize an AHR to produce Mo-99. (authors)

  4. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
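The "generalized time step" described above, one forecast of the state and its error covariance followed by a gain computation and an update, can be sketched for a generic linear model. The 2-variable system below is illustrative, not the shallow-water model:

```python
import numpy as np

def kalman_step(x, P, M, Q, H, R, y):
    """One generalized Kalman filter time step: forecast the state and its
    error covariance with the (linear) model M, form the gain, then update
    both with the new observations y."""
    # forecast
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # gain
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    # analysis / update
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# toy 2-variable system with only the first component observed
M = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, M, Q, H, R, y=np.array([1.0]))
```

The exact covariance propagation `M @ P @ M.T + Q` is what optimal interpolation approximates, and what the banded-matrix implementation mentioned above makes affordable on a large grid.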

  5. DC-pass filter design with notch filters superposition for CPW rectenna at low power level

    NASA Astrophysics Data System (ADS)

    Rivière, J.; Douyère, A.; Alicalapa, F.; Luk, J.-D. Lan Sun

    2016-03-01

    In this paper the challenging coplanar waveguide direct current (DC) pass filter is designed, analysed, fabricated and measured. As the ground plane and the conductive line are etched on the same plane, this technology allows the connection of series and shunt elements to the active devices without via holes through the substrate. Indeed, this study presents the first step in the optimization of a complete rectenna in coplanar waveguide (CPW) technology: key element of a radio frequency (RF) energy harvesting system. The measurement of the proposed filter shows good performance in the rejection of F0=2.45 GHz and F1=4.9 GHz. Additionally, a harmonic balance (HB) simulation of the complete rectenna is performed and shows a maximum RF-to-DC conversion efficiency of 37% with the studied DC-pass filter for an input power of 10 µW at 2.45 GHz.

  6. Optimization of an analytical methodology for the simultaneous determination of different classes of ultraviolet filters in cosmetics by pressurized liquid extraction-gas chromatography tandem mass spectrometry.

    PubMed

    Vila, Marlene; Lamas, J Pablo; Garcia-Jares, Carmen; Dagnac, Thierry; Llompart, Maria

    2015-07-31

A methodology based on pressurized liquid extraction (PLE) followed by gas chromatography-tandem mass spectrometry (GC-MS/MS) has been developed for the simultaneous analysis of different classes of UV filters including methoxycinnamates, benzophenones, salicylates, p-aminobenzoic acid derivatives, and others in cosmetic products. The extractions were carried out in 1 mL extraction cells and the amount of sample extracted was only 100 mg. The experimental conditions, including the acetylation of the PLE extracts to improve GC performance, were optimized by means of experimental design tools. The two main factors affecting the PLE procedure, solvent type and extraction temperature, were assessed. The use of a matrix-matched approach consisting of the addition of 10 μL of diluted commercial cosmetic oil avoided matrix effects. Good linearity (R(2)>0.9970), quantitative recoveries (>80% for most compounds, excluding three banned benzophenones) and satisfactory precision (RSD<10% in most cases) were achieved under the optimal conditions. The validated methodology was successfully applied to the analysis of different types of cosmetic formulations including sunscreens, hair products, nail polish, and lipsticks, amongst others. PMID:26091782

  8. Real-time Coupled Ensemble Kalman Filter Forecasting & Nonlinear Model Predictive Control Approach for Optimal Power Take-off of a Wave Energy Converter

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Previsic, Mirko

    2014-11-01

    In recent years there has been growing interest in renewable energy. Among the available options, wave energy conversion is today one of the most promising, owing to the enormous amount of energy the ocean could provide. However, the efficiency of wave energy converters for ocean wave energy harvesting is still far from making them competitive with more mature fields of renewable energy, such as solar and wind. One of the main problems is the difficulty of increasing the power take-off through an active controller without precise knowledge of the oncoming wavefield. This work represents a first attempt at defining a realistic control framework for optimal power take-off of a wave energy converter in which the ocean wavefield is predicted by a nonlinear Ensemble Kalman filter that assimilates data from a wave measurement device, such as a Doppler radar or a measurement buoy. Knowledge of the future wave profile is then leveraged in a nonlinear direct multiple-shooting model predictive control framework, allowing online optimization of the energy absorption under motion and machinery constraints of the device.
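The forecasting half of such a framework can be sketched compactly. Below is a toy ensemble Kalman filter analysis step for a scalar state with perturbed observations, a generic textbook form, not the authors' implementation:

```python
import random

def enkf_update(ensemble, obs, obs_noise_std):
    """One EnKF analysis step for a scalar state observed directly:
    each member is nudged toward a perturbed observation by the Kalman
    gain estimated from the ensemble variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_noise_std ** 2)
    return [x + gain * (obs + random.gauss(0.0, obs_noise_std) - x)
            for x in ensemble]

# A confident observation pulls a spread-out prior ensemble onto the measurement.
random.seed(1)
posterior = enkf_update([0.0, 1.0, 2.0, 3.0], 10.0, 1e-6)
```

In the full wave-energy setting the state is the wavefield, the observation operator maps it to radar or buoy measurements, and the analysed ensemble feeds the model predictive controller.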

  9. Optimization and kinetic modeling of esterification of the oil obtained from waste plum stones as a pretreatment step in biodiesel production.

    PubMed

    Kostić, Milan D; Veličković, Ana V; Joković, Nataša M; Stamenković, Olivera S; Veljković, Vlada B

    2016-02-01

    This study reports on the use of oil obtained from waste plum stones as a low-cost feedstock for biodiesel production. Because of its high free fatty acid (FFA) level (15.8%), the oil was processed through a two-step process comprising esterification of the FFA and methanolysis of the esterified oil, catalyzed by H2SO4 and CaO, respectively. Esterification was optimized by response surface methodology combined with a central composite design. The second-order polynomial equation predicted the lowest acid value of 0.53 mg KOH/g under the following optimal reaction conditions: methanol:oil molar ratio of 8.5:1, catalyst amount of 2% and reaction temperature of 45°C. The predicted acid value agreed with the experimental acid value (0.47 mg KOH/g). The kinetics of FFA esterification was described by the irreversible pseudo-first-order reaction rate law. The apparent kinetic constant was correlated with the initial methanol and catalyst concentrations and the reaction temperature. The activation energy of the esterification reaction decreased slightly from 13.23 to 11.55 kJ/mol as the catalyst concentration increased from 0.049 to 0.172 mol/dm(3). In the second step, the esterified oil reacted with methanol (methanol:oil molar ratio of 9:1) in the presence of CaO (5% of the oil mass) at 60°C. The properties of the obtained biodiesel were within the EN 14214 standard limits. Hence, waste plum stones might be a valuable raw material for obtaining fatty oil for use as an alternative feedstock in biodiesel production.
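The two kinetic relationships used above, the irreversible pseudo-first-order rate law and the Arrhenius temperature dependence of the rate constant, can be sketched as follows; the numerical rate constants in the check are hypothetical:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def ffa_conversion(k, t):
    """Irreversible pseudo-first-order conversion of FFA after reaction time t."""
    return 1.0 - math.exp(-k * t)

def activation_energy(k1, t1, k2, t2):
    """Arrhenius activation energy (J/mol) recovered from rate constants
    measured at two absolute temperatures: Ea = R ln(k2/k1) / (1/t1 - 1/t2)."""
    return R * math.log(k2 / k1) / (1.0 / t1 - 1.0 / t2)
```

Given apparent rate constants at two reaction temperatures, `activation_energy` reproduces the kind of Ea values (about 11.5 to 13.2 kJ/mol) reported in the abstract.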

  10. Disk filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

  11. Disk filter

    DOEpatents

    Bergman, Werner

    1986-01-01

    An electric disk filter provides a high efficiency at high temperature. A hollow outer filter of fibrous stainless steel forms the ground electrode. A refractory filter material is placed between the outer electrode and the inner electrically isolated high voltage electrode. Air flows through the outer filter surfaces through the electrified refractory filter media and between the high voltage electrodes and is removed from a space in the high voltage electrode.

  12. Modeling and optimization of ultrasound-assisted extraction of polyphenolic compounds from Aronia melanocarpa by-products from filter-tea factory.

    PubMed

    Ramić, Milica; Vidović, Senka; Zeković, Zoran; Vladić, Jelena; Cvejin, Aleksandra; Pavlić, Branimir

    2015-03-01

    Aronia melanocarpa by-products from a filter-tea factory were used for the preparation of extracts with a high content of bioactive compounds. The extraction process was accelerated using sonication. A three-level, three-variable face-centered cubic experimental design (FCD) with response surface methodology (RSM) was used to optimize the extraction in terms of maximized yields of total phenolics (TP), flavonoids (TF), anthocyanins (MA) and proanthocyanidins (TPA). Ultrasonic power (X₁: 72-216 W), temperature (X₂: 30-70 °C) and extraction time (X₃: 30-90 min) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions for the investigated responses. Three-dimensional surface plots were generated from the mathematical models. The optimal conditions for ultrasound-assisted extraction of TP, TF, MA and TPA were: X₁=206.64 W, X₂=70 °C, X₃=80.1 min; X₁=210.24 W, X₂=70 °C, X₃=75 min; X₁=216 W, X₂=70 °C, X₃=45.6 min; and X₁=199.44 W, X₂=70 °C, X₃=89.7 min, respectively. The model predicted values of TP, TF, MA and TPA of 15.41 mg GAE/ml, 9.86 mg CE/ml, 2.26 mg C3G/ml and 20.67 mg CE/ml, respectively. Experimental validation was performed and close agreement between experimental and predicted values was found (within a 95% confidence interval).
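The RSM workflow above reduces to fitting a second-order polynomial in coded variables and then searching it for a maximum. A sketch of the search half, with arbitrary placeholder coefficients rather than the fitted Aronia model:

```python
def response(x1, x2, x3, b):
    """Second-order polynomial model in coded variables:
    y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)."""
    return (b[0] + b[1] * x1 + b[2] * x2 + b[3] * x3
            + b[4] * x1 * x1 + b[5] * x2 * x2 + b[6] * x3 * x3
            + b[7] * x1 * x2 + b[8] * x1 * x3 + b[9] * x2 * x3)

def grid_optimum(b, steps=21):
    """Grid search over the coded design space [-1, 1]^3 for the maximum
    predicted response; returns (best_y, (x1, x2, x3))."""
    best = None
    for i in range(steps):
        for j in range(steps):
            for k in range(steps):
                x = tuple(-1.0 + 2.0 * n / (steps - 1) for n in (i, j, k))
                y = response(*x, b)
                if best is None or y > best[0]:
                    best = (y, x)
    return best
```

`grid_optimum` works in coded units; mapping the optimum back to W, °C and min follows the design's variable coding (e.g. X₁ = 144 + 72·x1 for the 72-216 W range).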

  13. Next Step for STEP

    SciTech Connect

    Wood, Claire; Bremner, Brenda

    2013-08-09

    The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and to homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. The energy options analysis will result in a thorough understanding of tribal energy resources and consumption, an assessment of whether the energy efficiency and conservation measures being implemented are having the desired effect, an analysis of tribal energy loads (current and future energy consumption), and an evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro systems.

  14. Rapid one-step purification of single-cells encapsulated in alginate microcapsules from oil to aqueous phase using a hydrophobic filter paper: implications for single-cell experiments.

    PubMed

    Lee, Do-Hyun; Jang, Miran; Park, Je-Kyun

    2014-10-01

    By virtue of the biocompatibility and physical properties of hydrogels, picoliter-sized hydrogel microcapsules have been considered a biometric signature containing several features of the encapsulated single cells, including phenotype, viability, and intracellular content. To maximize the experimental potential of encapsulating cells in hydrogel microcapsules, a method that enables efficient purification of hydrogel microcapsules from oil is necessary. Current methods, based on centrifugation for conventional stepwise rinsing of the oil, are slow and laborious and decrease the monodispersity and yield of the recovered hydrogel microcapsules. To remedy these shortcomings, we have developed a simple one-step method to purify alginate microcapsules, each containing a single live cell, from the oil to the aqueous phase. This method employs oil impregnation using a commercially available hydrophobic filter paper, without multistep centrifugal purification or complicated microchannel networks. The oil-suspended alginate microcapsules encapsulating single cells from mammalian cancer cell lines (MCF-7, HepG2, and U937) and microorganisms (Chlorella vulgaris) were successfully exchanged into cell culture media by quick (~10 min) depletion of the surrounding oil phase without coalescence of neighboring microcapsules. Cell proliferation and the high integrity of the microcapsules were also demonstrated by long-term incubation of microcapsules containing a single live cell. We expect that this method for the simple and rapid purification of encapsulated single-cell microcapsules will attain widespread adoption, assisting cell biologists and clinicians in the development of single-cell experiments.

  15. Water Filters

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Aquaspace H2OME Guardian Water Filter, available through Western Water International, Inc., reduces lead in water supplies. The filter is mounted on the faucet and the filter cartridge is placed in the "dead space" between sink and wall. This filter is one of several new filtration devices using the Aquaspace compound filter media, which combines company developed and NASA technology. Aquaspace filters are used in industrial, commercial, residential, and recreational environments as well as by developing nations where water is highly contaminated.

  16. Biological Filters.

    ERIC Educational Resources Information Center

    Klemetson, S. L.

    1978-01-01

    Presents the 1978 literature review of wastewater treatment. The review is concerned with biological filters, and it covers: (1) trickling filters; (2) rotating biological contactors; and (3) miscellaneous reactors. A list of 14 references is also presented. (HM)

  17. High accuracy motor controller for positioning optical filters in the CLAES Spectrometer

    NASA Technical Reports Server (NTRS)

    Thatcher, John B.

    1989-01-01

    The Etalon Drive Motor (EDM), a precision etalon control system designed for accurate positioning of etalon filters in the IR spectrometer of the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment is described. The EDM includes a brushless dc torque motor, which has an infinite resolution for setting an etalon filter to any desired angle, a four-filter etalon wheel, and an electromechanical resolver for angle information. An 18-bit control loop provides high accuracy, resolution, and stability. Dynamic computer interaction allows the user to optimize the step response. A block diagram of the motor controller is presented along with a schematic of the digital/analog converter circuit.
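To give a feel for the quoted 18-bit control loop: one least-significant bit of an 18-bit angle word covering a full revolution corresponds to roughly 4.9 arc-seconds. This is our back-of-envelope figure, assuming the word spans 360°:

```python
def lsb_degrees(bits):
    """Angular size, in degrees, of one least-significant bit when a full
    360-degree circle is encoded in an n-bit position word."""
    return 360.0 / (1 << bits)

# An 18-bit loop resolves about 4.94 arc-seconds per step.
print(round(lsb_degrees(18) * 3600, 2))  # 4.94
```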

  18. Utilization of optimized BCR three-step sequential and dilute HCl single extraction procedures for soil-plant metal transfer predictions in contaminated lands.

    PubMed

    Kubová, Jana; Matús, Peter; Bujdos, Marek; Hagarová, Ingrid; Medved', Ján

    2008-05-30

    The prediction of soil metal phytoavailability using chemical extractions is a conventional approach routinely used in soil testing. The adequacy of such soil tests for this purpose is commonly assessed through a comparison of extraction results with metal contents in the relevant plants. In this work, the fractions of selected risk metals (Al, As, Cd, Cu, Fe, Mn, Ni, Pb, Zn) that can be taken up by various plants were obtained by an optimized BCR (Community Bureau of Reference) three-step sequential extraction procedure (SEP) and by a single 0.5 mol L(-1) HCl extraction. These procedures were validated using five soil and sediment reference materials (SRM 2710, SRM 2711, CRM 483, CRM 701, SRM RTH 912) and applied to significantly differently acidified soils for the fractionation of the studied metals. New indicative values of the Al, Cd, Cu, Fe, Mn, P, Pb and Zn fractional concentrations for these reference materials were obtained by the dilute HCl single extraction. The influence of soil genesis, the content of essential elements (Ca, Mg, K, P) and different anthropogenic sources of acidification on the extraction yields of the individual risk metal fractions was investigated. The concentrations of the studied elements were determined by atomic spectrometry methods (flame, graphite furnace and hydride generation atomic absorption spectrometry, and inductively coupled plasma optical emission spectrometry). It can be concluded that the extraction yields from the first (acid-extractable) BCR SEP step, together with soil-plant transfer coefficients, can be applied to the prediction of the qualitative mobility of the selected risk metals in different soil systems. PMID:18585191
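The soil-plant transfer coefficient invoked here is, in its simplest form, the ratio of the metal content found in the plant to the extractable fraction in the soil. A minimal sketch with hypothetical concentrations, not data from the study:

```python
def transfer_coefficient(c_plant, c_soil_extractable):
    """Soil-to-plant transfer coefficient: metal content in the plant divided
    by the mobile (extractable) fraction in the soil, both in mg/kg dry weight."""
    return c_plant / c_soil_extractable

# Hypothetical example: 3.0 mg/kg in plant tissue vs 2.0 mg/kg extractable.
print(transfer_coefficient(3.0, 2.0))  # 1.5
```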

  19. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose: To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it when operating on soft cataracts. Patients and methods: A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated on with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was performed. A surgical technique was developed using empirically derived machine parameters and a customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants and thereby prevent capsule aspiration. Results: Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect, with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived that had a slower pace and seemed more controlled, with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for aspiration flow, 20 mL/min, and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting-selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion: A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal-setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  20. FILTER TREATMENT

    DOEpatents

    Sutton, J.B.; Torrey, J.V.P.

    1958-08-26

    A process is described for reconditioning fused alumina filters which have become clogged by the accretion of bismuth phosphate in the filter pores. The method consists in contacting such filters with fuming sulfuric acid, and maintaining such contact for a substantial period of time.

  1. Water Filters

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A compact, lightweight electrolytic water filter generates silver ions in concentrations of 50 to 100 parts per billion in the water flow system. Silver ions serve as effective bactericide/deodorizers. Ray Ward requested and received from NASA a technical information package on the Shuttle filter, and used it as basis for his own initial development, a home use filter.

  2. A new balancing three level three dimensional space vector modulation strategy for three level neutral point clamped four leg inverter based shunt active power filter controlling by nonlinear back stepping controllers.

    PubMed

    Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F

    2016-07-01

    In this paper, a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy is proposed that uses redundant voltage vectors to realize precise, high-performance control of a three-phase three-level four-leg neutral point clamped (NPC) inverter based shunt active power filter (SAPF), in order to eliminate source current harmonics, reduce the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensate reactive power in three-phase four-wire electrical networks. The strategy serves to generate the gate switching pulses and to balance the dc bus capacitor voltages (keeping the voltages of the two dc bus capacitors equal), while at the same time reducing and fixing the switching frequency of the inverter switches. Nonlinear back-stepping controllers (NBSC) are used to regulate the dc bus capacitor voltages and the SAPF injected currents, to stabilize the system, improve its response, and eliminate the overshoot and undershoot of traditional PI (proportional-integral) control. Conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are calculated and compared in terms of the error between the two dc bus capacitor voltages, the SAPF output voltages, the THDv and THDi of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using SimPowerSystems and S-Functions of MATLAB/SIMULINK. PMID:27018144
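One of the comparison metrics above, total harmonic distortion, has a simple definition worth making explicit (a generic textbook formula, not the authors' code): the RMS of the harmonic components relative to the RMS of the fundamental.

```python
import math

def thd(fundamental_rms, harmonic_rms):
    """Total harmonic distortion (%) of a voltage or current waveform:
    RMS of the harmonic components divided by the RMS of the fundamental."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Example: 10 A fundamental with 3 A and 4 A harmonics gives 50% THD.
print(thd(10.0, [3.0, 4.0]))  # 50.0
```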

  4. Optimizing the flattening filter free beam selection in RapidArc®-based stereotactic body radiotherapy for Stage I lung cancer

    PubMed Central

    Lu, J-Y; Lin, Z; Lin, P-X

    2015-01-01

    Objective: To optimize the flattening filter-free (FFF) beam selection in stereotactic body radiotherapy (SBRT) treatment for Stage I lung cancer in different fraction schemes. Methods: Treatment plans from 12 patients suffering from Stage I lung cancer were designed using the 6XFFF and 10XFFF beams in different fraction schemes of 4 × 12, 3 × 18 and 1 × 34 Gy. Plans were evaluated mainly in terms of organs at risk (OARs) sparing, normal tissue complication probability (NTCP) estimation and treatment efficiency. Results: Compared with the 10XFFF beam, the 6XFFF beam showed a statistically significantly lower dose to all the OARs investigated. The percentage of NTCP reduction for both lung and chest wall was about 10% in the fraction schemes of 4 × 12 and 3 × 18 Gy, whereas only 7.4% and 2.6% were obtained in the 1 × 34 Gy scheme. For oesophagus, heart and spinal cord, the reduction was greater with the 6XFFF beam, but their absolute estimates were <10(-6)%. The mean beam-on times for the 6XFFF and 10XFFF beams at the 4 × 12, 3 × 18 and 1 × 34 Gy schemes were 2.2 ± 0.2 vs 1.5 ± 0.1, 3.3 ± 0.9 vs 2.0 ± 0.5 and 6.3 ± 0.9 vs 3.5 ± 0.4 min, respectively. Conclusion: The 6XFFF beam obtains better OARs sparing and lower incidence of NTCP in SBRT treatment of Stage I lung cancer, whereas the 10XFFF beam improves the treatment efficiency. To balance the OARs sparing and intrafractional variation owing to the prolonged treatment time, the authors recommend using the 6XFFF beam in the 4 × 12 and 3 × 18 Gy schemes but the 10XFFF beam in the 1 × 34 Gy scheme. Advances in knowledge: This study optimizes the FFF beam selection in different fraction schemes in SBRT treatment of Stage I lung cancer. PMID:26133073
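The treatment-efficiency comparison reduces to beam-on time ≈ monitor units / machine dose rate. A sketch with illustrative numbers (the MU/Gy factor and any dose rates are our assumptions, not machine data from the study; only the quoted mean times are from the abstract):

```python
def beam_on_time_min(dose_gy, dose_rate_mu_per_min, mu_per_gy):
    """Approximate beam-on time in minutes: delivered MU over machine dose rate."""
    return dose_gy * mu_per_gy / dose_rate_mu_per_min

def percent_reduction(t_slow, t_fast):
    """Relative time saving of the faster beam, in percent."""
    return 100.0 * (t_slow - t_fast) / t_slow

# The abstract's measured means for the 1 x 34 Gy scheme imply a ~44% saving:
print(round(percent_reduction(6.3, 3.5), 1))  # 44.4
```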

  5. Filtering apparatus

    DOEpatents

    Haldipur, G.B.; Dilmore, W.J.

    1992-09-01

    A vertical vessel is described having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas. 18 figs.

  6. Filtering apparatus

    DOEpatents

    Haldipur, Gaurang B.; Dilmore, William J.

    1992-01-01

    A vertical vessel having a lower inlet and an upper outlet enclosure separated by a main horizontal tube sheet. The inlet enclosure receives the flue gas from a boiler of a power system and the outlet enclosure supplies cleaned gas to the turbines. The inlet enclosure contains a plurality of particulate-removing clusters, each having a plurality of filter units. Each filter unit includes a filter clean-gas chamber defined by a plate and a perforated auxiliary tube sheet with filter tubes suspended from each tube sheet and a tube connected to each chamber for passing cleaned gas to the outlet enclosure. The clusters are suspended from the main tube sheet with their filter units extending vertically and the filter tubes passing through the tube sheet and opening in the outlet enclosure. The flue gas is circulated about the outside surfaces of the filter tubes and the particulate is absorbed in the pores of the filter tubes. Pulses to clean the filter tubes are passed through their inner holes through tubes free of bends which are aligned with the tubes that pass the clean gas.

  7. Kaon Filtering For CLAS Data

    SciTech Connect

    McNabb, J.

    2001-01-30

    The analysis of data from CLAS is a multi-step process. After the detectors for a given running period have been calibrated, the data are processed in the so-called pass-1 cooking. During the pass-1 cooking, each event is reconstructed by the program a1c, which finds particle tracks and computes momenta from the raw data. The results are then passed on to several data monitoring and filtering utilities. In CLAS software, a filter is a parameterless function which returns an integer indicating whether an event should be kept by that filter or not. There is a main filter program called g1-filter which controls several specific filters and outputs several files, one for each filter. These files may then be analyzed separately, allowing individuals interested in one reaction channel to work from smaller files than the whole data set would require. There are several constraints on what the filter functions should do. Obviously, the filtered files should be as small as possible; however, a filter should also not reject any events that might be used in the later analysis for which it was intended.
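The filter convention described here, a parameterless function returning an integer keep/reject flag, can be mimicked in a few lines. The event layout below (dicts with a "tracks" list) is purely illustrative; real CLAS events are BOS banks, not Python dicts:

```python
# A filter reads a module-level "current event", mirroring the CLAS convention
# of parameterless filter functions.
current_event = None

def kaon_filter():
    """Keep events containing at least one kaon candidate track."""
    return int(any(t["pid"] in ("K+", "K-") for t in current_event["tracks"]))

def run_filters(events, filters):
    """Route each event into one output stream per filter, as g1-filter does
    when it writes one file per registered filter."""
    global current_event
    out = {f.__name__: [] for f in filters}
    for ev in events:
        current_event = ev
        for f in filters:
            if f():
                out[f.__name__].append(ev)
    return out
```

Each output stream then plays the role of one of the per-filter files produced during pass-1 cooking.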

  8. Hot-gas filter manufacturing assessments: Volume 5. Final report, April 15, 1997

    SciTech Connect

    Boss, D.E.

    1997-12-31

    The development of advanced filtration media for advanced fossil-fueled power generating systems is a critical step in meeting the performance and emissions requirements for these systems. While porous metal and ceramic candle filters have been available for some time, the next generation of filters will include ceramic-matrix composites (CMCs), intermetallic alloys, and alternate filter geometries. The goal of this effort was to perform a cursory review of the manufacturing processes used by five companies developing advanced filters, from the perspective of process repeatability and the ability of their processes to be scaled up to production volumes. It was found that all of the filter manufacturers had a solid understanding of the product development path. Given that these filters are largely developmental, significant additional work is necessary to understand the process-performance relationships and to project manufacturing costs. While each organization had specific needs, common among all of the filter manufacturers were access to performance testing of the filters to aid process/product development, a better understanding of the stresses the filters will see in service for use in structural design of the components, and a thorough process sensitivity study to allow optimization of processing.

  9. Neutral density filters with Risley prisms: analysis and design.

    PubMed

    Duma, Virgil-Florin; Nicolov, Mirela

    2009-05-10

    We present the analysis and design of optical attenuators with double-prism neutral density filters. A comparative study is performed on three possible device configurations; only two have been presented in the literature, and without their design calculus. The characteristic parameters of this optical attenuator with translating Risley prisms are defined for each of the three setups and their analytical expressions are derived: adjustment scale (attenuation range) and interval, minimum transmission coefficient, and sensitivity. The setups are compared to select the optimal device, and from this study the best solution for double-prism neutral density filters, from both a mechanical and an optical point of view, is found to use two identical, symmetrically movable prisms with no mechanical contact. The design calculus of this optimal device is developed in its essential steps. The parameters of the prisms, particularly their angles, are studied to improve the design, and we demonstrate the maximum attenuation range that this type of attenuator can provide.
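A simplified model of such a translating-wedge attenuator: the absorbing glass path seen by the beam grows linearly with the lateral insertion of each prism, so the optical density does too. The parameterization below is ours, a sketch of the principle rather than the paper's design calculus:

```python
import math

def transmission(x_mm, wedge_angle_deg, od_per_mm, base_mm=0.0):
    """Transmission of a pair of symmetric absorbing wedge prisms: each prism
    adds a glass path that grows linearly with its lateral insertion x."""
    path = 2.0 * (base_mm + x_mm * math.tan(math.radians(wedge_angle_deg)))
    return 10.0 ** (-od_per_mm * path)
```

With zero insertion and zero base thickness the pair transmits fully; pushing the prisms in attenuates the beam continuously, which is what gives the device its fine adjustment interval.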

  10. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and to Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification, and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers, and it shows that the approach is interesting from both a theoretical and a practical perspective.
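The weighted median at the root of this model family generalizes the sample median exactly as the weighted average generalizes the mean; a minimal reference implementation (ours, for illustration):

```python
def weighted_median(values, weights):
    """Weighted median: the smallest value at which the cumulative weight
    (over values taken in sorted order) reaches half of the total weight.
    With all weights equal it reduces to the ordinary sample median."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= total / 2.0:
            return v
```

Stack Filters generalize this further by applying monotone Boolean functions on the threshold decomposition of the signal, which is what makes learning their parameters hard.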

  11. SU-E-T-591: Optimizing the Flattening Filter Free Beam Selection in RapidArc-Based Stereotactic Body Radiotherapy for Stage I Lung Cancer

    SciTech Connect

    Huang, B-T; Lu, J-Y

    2015-06-15

    Purpose: To optimize the flattening filter free (FFF) beam energy selection in stereotactic body radiotherapy (SBRT) treatment for stage I lung cancer with different fraction schemes. Methods: Twelve patients suffering from stage I lung cancer were enrolled in this study. Plans were designed using 6XFFF and 10XFFF beams with the most widely used fraction schemes of 4*12 Gy, 3*18 Gy and 1*34 Gy, respectively. The plan quality was appraised in terms of planning target volume (PTV) coverage, conformity of the prescribed dose (CI100%), intermediate dose spillage (R50% and D2cm), organs at risk (OARs) sparing and beam-on time. Results: The 10XFFF beam predicted 1% higher maximum, mean dose to the PTV and 4–5% higher R50% compared with the 6XFFF beam in the three fraction schemes, whereas the CI100% and D2cm was similar. Most importantly, the 6XFFF beam exhibited 3–10% lower dose to all the OARs. However, the 10XFFF beam reduced the beam-on time by 31.9±7.2%, 38.7±2.8% and 43.6±4.0% compared with the 6XFFF beam in the 4*12 Gy, 3*18 Gy and 1*34 Gy schemes, respectively. Beam-on time was 2.2±0.2 vs 1.5±0.1, 3.3±0.9 vs 2.0±0.5 and 6.3±0.9 vs 3.5±0.4 minutes for the 6XFFF and 10XFFF one in the three fraction schemes. Conclusion: The 6XFFF beam obtains better OARs sparing in SBRT treatment for stage I lung cancer, but the 10XFFF one provides improved treatment efficiency. To balance the OARs sparing and intrafractional variation as a function of prolonged treatment time, the authors recommend to use the 6XFFF beam in the 4*12 Gy and 3*18 Gy schemes for better OARs sparing. However, for the 1*34 Gy scheme, the 10XFFF beam is recommended to achieve improved treatment efficiency.

  12. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: a multiresolution bilateral filter, in which bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
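
    The two parameters the paper studies empirically are the spatial (domain) sigma and the range (intensity) sigma. A minimal 1-D sketch of the filter shows their roles; the parameter names and default values here are illustrative assumptions, not the paper's optimized settings:

```python
import math

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=4):
    """1-D bilateral filter: each output sample is a normalized average of
    its neighbours, weighted by a spatial Gaussian (sigma_s, on index
    distance) times a range Gaussian (sigma_r, on intensity difference)."""
    out = []
    for i, vi in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w_spatial = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2))
            w_range = math.exp(-((vi - signal[j]) ** 2) / (2.0 * sigma_r ** 2))
            w = w_spatial * w_range
            num += w * signal[j]
            den += w
        out.append(num / den)          # normalize by the total weight
    return out
```

    On a step signal the range weight of neighbours across the edge is nearly zero, so the edge survives while flat regions are averaged; that is the edge-preserving behavior the parameter study tunes.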

  13. PHOEBE - step by step manual

    NASA Astrophysics Data System (ADS)

    Zasche, P.

    2016-03-01

    An easy step-by-step manual for PHOEBE is presented. It is intended as a starting point for first-time users of PHOEBE analyzing an eclipsing binary light curve. The procedure is demonstrated on one particular detached system, with downloadable data, and described step by step until a final trustworthy fit is reached.

  14. High-resolution wave-theory-based ultrasound reflection imaging using the split-step Fourier and globally optimized Fourier finite-difference methods

    DOEpatents

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.
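
    The two-step extrapolation described above (a phase shift in the wavenumber domain for the reference medium, then a phase screen in the space domain to compensate for heterogeneities) can be sketched on a toy 1-D grid. This is a generic paraxial split-step Fourier step, not the patented reconstruction; the function name and the perturbation array dn are assumptions of the sketch, and a naive DFT is used only to keep it dependency-free:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def split_step_extrapolate(field, dz, k0, dx, dn):
    """One extrapolation interval. Step 1: paraxial phase shift in the
    wavenumber domain (propagation through the reference medium).
    Step 2: space-domain phase screen for the perturbation dn (e.g. a
    refractive-index-like deviation of the heterogeneous medium)."""
    N = len(field)
    F = dft(field)
    for i in range(N):
        m = i if i <= N // 2 else i - N          # wrapped frequency index
        kx = 2 * math.pi * m / (N * dx)
        F[i] *= cmath.exp(-1j * kx * kx / (2 * k0) * dz)   # step 1
    field = idft(F)
    return [f * cmath.exp(1j * k0 * dn[n] * dz)            # step 2
            for n, f in enumerate(field)]
```

    Both steps multiply by unit-modulus phase factors, so for a lossless step the wavefield energy is preserved, which is a quick sanity check on an implementation.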

  15. Hierarchical Bayes Ensemble Kalman Filter for geophysical data assimilation

    NASA Astrophysics Data System (ADS)

    Tsyrulnikov, Michael; Rakitko, Alexander

    2016-04-01

    In the Ensemble Kalman Filter (EnKF), the forecast error covariance matrix B is estimated from a sample (ensemble), which inevitably implies a degree of uncertainty. This uncertainty is especially large in high dimensions, where the affordable ensemble size is orders of magnitude smaller than the dimensionality of the system. Common remedies include ad-hoc devices like variance inflation and covariance localization. The goal of this study is to optimally account for the inherent uncertainty of the B matrix in the EnKF. Following the idea of Myrseth and Omre (2010), we explicitly admit that the B matrix is unknown and random and estimate it along with the state (x) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components P and Q of the B matrix in the extended control vector (x,P,Q). Similarly, we split the traditional forecast ensemble into a predictability-error ensemble and a model-error ensemble. The reason for separating model errors from predictability errors is the fundamental difference between the two sources of error: model errors are external (they do not depend on the filter's performance), whereas predictability errors are internal to the filter (they are determined by the filter's behavior). At the analysis step, we specify Inverse-Wishart priors for the random matrices P and Q and a conditionally Gaussian prior for the state x. Then, we update the prior distribution of (x,P,Q) using both observation and ensemble data, so that ensemble members are used as generalized observations and ordinary observations are allowed to influence the covariances. We show that for linear dynamics and linear observation operators, conditional Gaussianity of the state is preserved in the course of filtering.
At the forecast

  16. Optical results with Rayleigh quotient discrimination filters

    NASA Astrophysics Data System (ADS)

    Juday, Richard D.; Rollins, John M.; Monroe, Stanley E., Jr.; Morelli, Michael V.

    1999-03-01

    We report experimental laboratory results using filters that optimize the Rayleigh quotient [Richard D. Juday, 'Generalized Rayleigh quotient approach to filter optimization,' JOSA-A 15(4), 777-790 (April 1998)] for discriminating between two similar objects. That quotient is the ratio of the correlation responses to two differing objects. In distinction from previous optical processing methods it includes the phase of both objects -- not the phase of only the 'accept' object -- in the computation of the filter. In distinction from digital methods it is explicitly constrained to optically realizable filter values throughout the optimization process.

  17. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  18. Aquatic Plants Aid Sewage Filter

    NASA Technical Reports Server (NTRS)

    Wolverton, B. C.

    1985-01-01

    Method of wastewater treatment combines micro-organisms and aquatic plant roots in a filter bed. Treatment occurs as liquid flows up through the system. Micro-organisms, attached to the rocky base material of the filter, act in several steps to decompose organic matter in the wastewater. Vascular aquatic plants (typically reeds, rushes, cattails, or water hyacinths) absorb nitrogen, phosphorus, other nutrients, and heavy metals from the water through finely divided roots.

  19. A novel band-pass filter based on a periodically drilled SIW structure

    NASA Astrophysics Data System (ADS)

    Coves, A.; Torregrosa-Penalva, G.; San-Blas, A. A.; Sánchez-Soriano, M. A.; Martellosio, A.; Bronchalo, E.; Bozzi, M.

    2016-04-01

    The design and fabrication of a band-pass step impedance filter based on high and low dielectric constant sections has been realized on substrate integrated waveguide (SIW) technology. The overall process includes the design of the ideal band-pass prototype filter, where the implementation of the impedance inverters has been carried out by means of waveguide sections of lower permittivity. This can be practically achieved by implementing arrays of air holes along the waveguide. Several SIW structures with and without arrays of air holes have been simulated and fabricated in order to experimentally evaluate their relative permittivity. Additionally, the equivalent filter in SIW technology has been designed and optimized. Finally, a prototype of the designed filter has been fabricated and measured, showing a good agreement between measurements and simulations, which demonstrates the validity of the proposed design approach.

  20. Modelling of diffraction grating based optical filters for fluorescence detection of biomolecules

    PubMed Central

    Kovačič, M.; Krč, J.; Lipovšek, B.; Topič, M.

    2014-01-01

    The detection of biomolecules based on fluorescence measurements is a powerful diagnostic tool for the acquisition of genetic, proteomic and cellular information. One key performance-limiting factor remains the integrated optical filter, which is designed to reject strong excitation light while transmitting weak emission (fluorescent) light to the photodetector. Conventional filters have several disadvantages. For instance, absorbing filters, like those made from amorphous silicon carbide, exhibit low rejection ratios, especially in the case of small Stokes’ shift fluorophores (e.g. green fluorescent protein GFP with λexc = 480 nm and λem = 510 nm), whereas interference filters comprising many layers require complex fabrication. This paper describes an alternative solution based on dielectric diffraction gratings. These filters are not only highly efficient but also require fewer manufacturing steps. Using FEM-based optical modelling as a design optimization tool, three filtering concepts are explored: (i) a diffraction grating fabricated on the surface of an absorbing filter, (ii) a diffraction grating embedded in a host material with a low refractive index, and (iii) a combination of an embedded grating and an absorbing filter. Both concepts involving an embedded grating show high rejection ratios (over 100,000) for the case of GFP, but also high sensitivity to manufacturing errors and variations in the incident angle of the excitation light. Despite this, simulations show that a 60-fold improvement in the rejection ratio relative to a conventional flat absorbing filter can be obtained using an optimized embedded diffraction grating fabricated on top of an absorbing filter. PMID:25071964

  1. Bioaerosol DNA Extraction Technique from Air Filters Collected from Marine and Freshwater Locations

    NASA Astrophysics Data System (ADS)

    Beckwith, M.; Crandall, S. G.; Barnes, A.; Paytan, A.

    2015-12-01

    Bioaerosols are composed of microorganisms suspended in air. These organisms include bacteria, fungi, viruses, and protists. Microbes introduced into the atmosphere can drift, primarily by wind, into natural environments different from their point of origin. Although bioaerosols can impact atmospheric dynamics as well as the ecology and biogeochemistry of terrestrial systems, very little is known about the composition of bioaerosols collected from marine and freshwater environments. The first step in determining the composition of airborne microbes is to successfully extract environmental DNA from air filters. We asked 1) can DNA be extracted from quartz (SiO2) air filters? and 2) how can we optimize the DNA yield for downstream metagenomic sequencing? Aerosol filters were collected and archived on a weekly basis from aquatic sites (USA, Bermuda, Israel) over the course of 10 years. We successfully extracted DNA from a subsample of ~20 filters. We modified a DNA extraction protocol (Qiagen) by adding a bead-beating step to mechanically shear cell walls in order to optimize the DNA product. We quantified the DNA yield using a spectrophotometer (NanoDrop 1000). Results indicate that DNA can indeed be extracted from quartz filters. The additional bead-beating step helped increase the yield: up to twice as much DNA product was obtained compared to when this step was omitted. Moreover, bioaerosol DNA content varies across time. For instance, the DNA extracted from filters collected near the end of June at Lake Tahoe, USA decreased from 9.9 ng/μL in 2007 to 3.8 ng/μL in 2008. Further next-generation sequencing analysis of the extracted DNA will be performed to determine the composition of these microbes. We will also model the meteorological and chemical factors that are good predictors of microbial composition for our samples over time and space.

  2. Filter apparatus

    DOEpatents

    Kuban, D.P.; Singletary, B.H.; Evans, J.H.

    A plurality of holding tubes are respectively mounted in apertures in a partition plate fixed in a housing receiving gas contaminated with particulate material. A filter cartridge is removably held in each holding tube, and the cartridges and holding tubes are arranged so that gas passes through apertures therein and across the partition plate while particulate material is collected in the cartridges. Replacement filter cartridges are respectively held in holding canisters mounted on a support plate which can be secured to the aforesaid housing, and screws mounted on said canisters are arranged to push replacement cartridges into the cartridge holding tubes and thereby eject used cartridges therefrom.

  3. Water Filters

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Seeking a more effective method of filtering potable water that was highly contaminated, Mike Pedersen, founder of Western Water International, learned that NASA had conducted extensive research into methods of purifying water on board manned spacecraft. The key is Aquaspace Compound, a proprietary WWI formula that scientifically blends various types of glandular activated charcoal with other active and inert ingredients. Aquaspace systems remove some substances, such as chlorine, by atomic adsorption; other types of organic chemicals by mechanical filtration; and still others by catalytic reaction. Aquaspace filters are finding wide acceptance in industrial, commercial, residential and recreational applications in the U.S. and abroad.

  4. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays.

    PubMed

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2016-01-01

    This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring the knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators' performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays on the filters' accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices.
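
    The distributed fusion step described above combines local estimators with weights derived from their error covariances. A scalar sketch under the simplifying assumption of uncorrelated local errors (the paper's matrix-weighted combination also accounts for cross-correlations between sensors); the function name is an assumption of this sketch:

```python
def fuse_estimates(estimates, variances):
    """Least-squares fusion of local scalar estimates by inverse-variance
    weighting, valid when the local estimation errors are uncorrelated.
    Returns the fused estimate and its error variance."""
    inv = [1.0 / v for v in variances]
    p_fused = 1.0 / sum(inv)                       # fused error variance
    x_fused = p_fused * sum(x * w for x, w in zip(estimates, inv))
    return x_fused, p_fused
```

    A more precise local estimate (smaller variance) pulls the fused value toward itself, and the fused variance is always smaller than any individual one, which is the payoff of fusion.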

  6. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays

    PubMed Central

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2016-01-01

    This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring the knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators' performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays on the filters' accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices. PMID:27338387

  7. Filter selection based on light source for multispectral imaging

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Xu, Haisong

    2016-07-01

    In multispectral imaging, it is necessary to select a reduced number of filters to balance imaging efficiency and spectral reflectance recovery accuracy. Due to the combined effect of filters and light source on reflectance recovery, the optimal filters are influenced by the light source employed in the multispectral imaging system. By casting filter selection as an optimization problem, the selection of optimal filters corresponding to the employed light source proceeds with respect to a set of target samples using a genetic algorithm, regardless of the detailed spectral characteristics of the light source, filters, and sensor. Under three light sources with distinct spectral power distributions, the proposed filter selection method was evaluated on a filter-wheel based multispectral device with a set of interference filters. It was verified that the filters derived by the proposed method achieve better spectral and colorimetric accuracy of reflectance recovery than the conventional method under different light sources.

  8. Carbon nanotube filters

    NASA Astrophysics Data System (ADS)

    Srivastava, A.; Srivastava, O. N.; Talapatra, S.; Vajtai, R.; Ajayan, P. M.

    2004-09-01

    Over the past decade of nanotube research, a variety of organized nanotube architectures have been fabricated using chemical vapour deposition. The idea of using nanotube structures in separation technology has been proposed, but building macroscopic structures that have controlled geometric shapes, density and dimensions for specific applications still remains a challenge. Here we report the fabrication of freestanding monolithic uniform macroscopic hollow cylinders having radially aligned carbon nanotube walls, with diameters and lengths up to several centimetres. These cylindrical membranes are used as filters to demonstrate their utility in two important settings: the elimination of multiple components of heavy hydrocarbons from petroleum (a crucial step in the post-distillation of crude oil) with a single-step filtering process, and the filtration of bacterial contaminants such as Escherichia coli or the nanometre-sized poliovirus (~25 nm) from water. These macro filters can be cleaned for repeated filtration through ultrasonication and autoclaving. The exceptional thermal and mechanical stability of nanotubes, together with the high surface area and the ease and cost-effectiveness of fabricating the nanotube membranes, may allow them to compete with ceramic- and polymer-based separation membranes used commercially.

  9. Notch filter

    NASA Technical Reports Server (NTRS)

    Shelton, G. B. (Inventor)

    1977-01-01

    A notch filter for the selective attenuation of a narrow band of frequencies out of a larger band was developed. A helical resonator is connected to an input circuit and an output circuit through discrete and equal capacitors, and a resistor is connected between the input and the output circuits.
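
    The patent describes an analog helical-resonator circuit; as a purely illustrative digital analogue, a biquad notch achieves the same selective attenuation by placing transfer-function zeros on the unit circle at the unwanted frequency and poles at radius r just inside it, so that away from the notch the poles cancel the zeros and the band passes with roughly unit gain. All names and defaults below are assumptions of this sketch:

```python
import cmath
import math

def notch_coefficients(f0, fs, r=0.95):
    """Biquad notch: zeros exactly on the unit circle at f0, poles at
    radius r just inside it (r closer to 1 gives a narrower notch)."""
    w0 = 2 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]          # zeros at e^{±j w0}
    a = [1.0, -2.0 * r * math.cos(w0), r * r]    # poles at r·e^{±j w0}
    return b, a

def gain_at(f, fs, b, a):
    """Magnitude response |H(e^{jw})| of the biquad at frequency f."""
    zinv = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * zinv + b[2] * zinv ** 2
    den = a[0] + a[1] * zinv + a[2] * zinv ** 2
    return abs(num / den)
```

    At the notch frequency the numerator is exactly zero, so a 60 Hz hum notch on a 1 kHz-sampled signal, for example, passes 5 Hz and 300 Hz components essentially unattenuated while nulling 60 Hz.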

  10. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.

  11. Influence of multi-step heat treatments in creep age forming of 7075 aluminum alloy: Optimization for springback, strength and exfoliation corrosion

    SciTech Connect

    Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.

    2012-11-15

    Multi-step heat treatments comprising high-temperature forming (150 °C/24 h plus 190 °C for several minutes) and subsequent low-temperature forming (120 °C for 24 h) are developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without a reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at the higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain-boundary precipitates. Furthermore, the high strength is related to the uniform distribution of ultrafine η′ precipitates within grains. Highlights: creep age forming is developed for manufacturing aircraft wing panels from aluminum alloy; a good combination of properties with minimal springback is required in this component; this requirement can be met through appropriate heat treatments; multi-step cycles are developed in creep age forming of AA7075 to improve springback and properties; results indicate simultaneous enhancement of properties and shape accuracy (lower springback).

  12. Optimization of medium for one-step fermentation of inulin extract from Jerusalem artichoke tubers using Paenibacillus polymyxa ZJ-9 to produce R,R-2,3-butanediol.

    PubMed

    Gao, Jian; Xu, Hong; Li, Qiu-jie; Feng, Xiao-hai; Li, Sha

    2010-09-01

    The medium for one-step fermentation of raw inulin extract from Jerusalem artichoke tubers by Paenibacillus polymyxa ZJ-9 to produce R,R-2,3-butanediol (R,R-2,3-BD) was developed. Inulin, K2HPO4 and NH4Cl were found to be the key factors in the fermentation according to the results obtained from the Plackett-Burman experimental design. The optimal concentration range of the three factors was examined by the steepest ascent path, and their optimal concentrations were further investigated according to the Box-Behnken design and determined to be 77.14 g/L, 3.09 g/L and 0.93 g/L, respectively. Under the optimal conditions, the concentration of the obtained R,R-2,3-BD was 36.92 g/L, at more than 98% optical purity. Compared with the other investigated carbon sources, fermentation of the raw inulin extract afforded the highest yield of R,R-2,3-BD. This process featured one-step fermentation of inulin without a separate hydrolysis step, which greatly decreased the raw material cost and thus facilitated practical application.

  13. Plasmonic filters.

    SciTech Connect

    Passmore, Brandon Scott; Shaner, Eric Arthur; Barrick, Todd A.

    2009-09-01

    Metal films perforated with subwavelength hole arrays have been shown to demonstrate an effect known as Extraordinary Transmission (EOT). In EOT devices, optical transmission passbands arise that can have up to 90% transmission and a bandwidth that is only a few percent of the designed center wavelength. By placing a tunable dielectric in proximity to the EOT mesh, one can tune the center frequency of the passband. We have demonstrated over 1 micron of passive tuning in structures designed for an 11 micron center wavelength. If a suitable midwave (3-5 micron) tunable dielectric (perhaps BaTiO3) were integrated with an EOT mesh designed for midwave operation, it is possible that a fast, voltage-tunable, low-temperature filter solution could be demonstrated with a several-hundred-nanometer passband. Such an element could, for example, replace certain components in a filter wheel solution.

  14. Water Filter

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A compact, lightweight electrolytic water sterilizer, available through Ambassador Marketing, generates silver ions in concentrations of 50 to 100 parts per billion in the water flow system. The silver ions serve as an effective bactericide/deodorizer. Tap water passes through a filtering element of silver that has been chemically plated onto activated carbon. The silver inhibits bacterial growth and the activated carbon removes objectionable tastes and odors caused by the addition of chlorine and other chemicals in the municipal water supply. The three models available are a kitchen unit, a "Tourister" unit for portable use while traveling and a refrigerator unit that attaches to the ice cube water line. A filter will treat 5,000 to 10,000 gallons of water.

  16. Optimization of pressurized liquid extraction using a multivariate chemometric approach and comparison of solid-phase extraction cleanup steps for the determination of polycyclic aromatic hydrocarbons in mosses.

    PubMed

    Foan, L; Simon, V

    2012-09-21

    A factorial design was used to optimize the extraction of polycyclic aromatic hydrocarbons (PAHs) from mosses, plants used as biomonitors of air pollution. The analytical procedure consists of pressurized liquid extraction (PLE) followed by solid-phase extraction (SPE) cleanup, in association with analysis by high performance liquid chromatography coupled with fluorescence detection (HPLC-FLD). For method development, homogeneous samples were prepared with large quantities of the mosses Isothecium myosuroides Brid. and Hypnum cupressiforme Hedw., collected from a Spanish Nature Reserve. A factorial design was used to identify the optimal PLE operational conditions: 2 static cycles of 5 min at 80 °C. The analytical procedure performed with PLE showed similar recoveries (∼70%) and total PAH concentrations (∼200 ng g(-1)) as Soxtec extraction, with the advantages of reducing solvent consumption by a factor of 3 (30 mL against 100 mL per sample) and taking a fifth of the time (24 samples extracted automatically in 8 h against 2 samples in 3.5 h). The performance of the SPE normal phases (NH2, Florisil, silica and activated alumina) generally used for organic matrix cleanup was also compared. Florisil appeared to be the most selective phase and ensured the highest PAH recoveries. The optimal analytical procedure was validated with a reference material and applied to moss samples from a remote Spanish site in order to determine spatial and inter-species variability. PMID:22885040

  17. Microfabrication of three-dimensional filters for liposome extrusion

    NASA Astrophysics Data System (ADS)

    Baldacchini, Tommaso; Nuñez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben

    2015-03-01

    Liposomes play an important role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidic chip for liposome extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete solution for lab-on-a-chip preparation of liposomes, from vesicle self-assembly to optical characterization.

  18. Eyeglass Filters

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Biomedical Optical Company of America's Suntiger lenses eliminate more than 99% of harmful light wavelengths. The NASA-derived lenses make scenes more vivid in color and also increase the wearer's visual acuity. Distant objects, even on hazy days, appear crisp and clear; mountains seem closer, glare is greatly reduced, and clouds stand out. Daytime use protects the retina from bleaching in bright light, thus improving night vision. Filtering helps prevent a variety of eye disorders, in particular cataracts and age-related macular degeneration.

  19. Multilevel ensemble Kalman filtering

    DOE PAGES

    Hoel, Hakon; Law, Kody J. H.; Tempone, Raul

    2016-06-14

    This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
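The Monte Carlo step the multilevel strategy builds on is the standard ensemble Kalman filter analysis. The sketch below is a generic stochastic EnKF update with perturbed observations, not the paper's multilevel implementation; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, y_obs, H, R):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble : (N, d) array of forecast states
    y_obs    : (m,) observation vector
    H        : (m, d) linear observation operator
    R        : (m, m) observation-noise covariance
    """
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # ensemble anomalies
    P = X.T @ X / (N - 1)                             # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # each member assimilates an independently perturbed observation
    perts = rng.multivariate_normal(np.zeros(len(y_obs)), R, size=N)
    innovations = (y_obs + perts) - ensemble @ H.T
    return ensemble + innovations @ K.T
```

With a near-noiseless observation (tiny R), the analysis ensemble collapses onto the observed value, as expected from a Kalman gain close to one.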

  20. Stepped nozzle

    DOEpatents

    Sutton, George P.

    1998-01-01

    An insert which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment.

  1. Stepped nozzle

    DOEpatents

    Sutton, G.P.

    1998-07-14

    An insert is described which allows a supersonic nozzle of a rocket propulsion system to operate at two or more different nozzle area ratios. This provides an improved vehicle flight performance or increased payload. The insert has significant advantages over existing devices for increasing nozzle area ratios. The insert is temporarily fastened by a simple retaining mechanism to the aft end of the diverging segment of the nozzle and provides for a multi-step variation of nozzle area ratio. When mounted in place, the insert provides the nozzle with a low nozzle area ratio. During flight, the retaining mechanism is released and the insert ejected thereby providing a high nozzle area ratio in the diverging nozzle segment. 5 figs.

  2. The Lockheed alternate partial polarizer universal filter

    NASA Technical Reports Server (NTRS)

    Title, A. M.

    1976-01-01

    A tunable birefringent filter using an alternate partial polarizer design has been built. The filter has a transmission of 38% in polarized light. Its full width at half maximum is 0.09 Å at 5500 Å, and it is tunable from 4500 to 8500 Å by means of stepping-motor-actuated rotating half-wave plates and polarizers. Wavelength commands and thermal compensation commands are generated by a PDP-11/10 minicomputer. The alternate partial polarizer universal filter is compared with the universal birefringent filter, and the design techniques, construction methods, and filter performance are discussed in some detail. Based on experience with this filter, some conclusions regarding the future of birefringent filters are drawn.

  3. Next Step toward Optimization of GRP Receptor Avidities: Determination of the Minimal Distance between BBN(7-14) Units in Peptide Homodimers.

    PubMed

    Fischer, G; Lindner, S; Litau, S; Schirrmacher, R; Wängler, B; Wängler, C

    2015-08-19

    As the gastrin releasing peptide receptor (GRPR) is overexpressed on several tumor types, it represents a promising target for the specific in vivo imaging of these tumors using positron emission tomography (PET). We were able to show that PESIN-based peptide multimers can result in substantially higher GRPR avidities, highly advantageous in vivo pharmacokinetics and tumor imaging properties compared to the respective monomers. However, the minimal distance between the peptidic binders, resulting in the lowest possible system entropy while enabling a concomitant GRPR binding and thus optimized receptor avidities, has not been determined so far. Thus, we aimed here to identify the minimal distance between two GRPR-binding peptides in order to provide the basis for the development of highly avid GRPR-specific PET imaging agents. We therefore synthesized dimers of the GRPR-binding bombesin analogue BBN(7-14) on a dendritic scaffold, exhibiting different distances between both peptide binders. The homodimers were further modified with the chelator NODAGA, radiolabeled with (68)Ga, and evaluated in vitro regarding their GRPR avidity. We found that the most potent of the newly developed radioligands exhibits GRPR avidity twice as high as the most potent reference compound known so far, and that a minimal distance of 62 bond lengths between both peptidic binders within the homodimer can result in concomitant peptide binding and optimal GRPR avidities. These findings answer the question as to what molecular design should be chosen when aiming at the development of highly avid homobivalent peptidic ligands addressing the GRPR.

  4. Ceramic filters

    SciTech Connect

    Holmes, B.L.; Janney, M.A.

    1995-12-31

    Filters were formed from ceramic fibers, organic fibers, and a ceramic bond phase using a papermaking technique. The distribution of particulate ceramic bond phase was determined using a model silicon carbide system. As the ceramic fiber increased in length and diameter the distance between particles decreased. The calculated number of particles per area showed good agreement with the observed value. After firing, the papers were characterized using a biaxial load test. The strength of papers was proportional to the amount of bond phase included in the paper. All samples exhibited strain-tolerant behavior.

  5. Sub-wavelength efficient polarization filter (SWEP filter)

    DOEpatents

    Simpson, Marcus L.; Simpson, John T.

    2003-12-09

    A polarization sensitive filter includes a first sub-wavelength resonant grating structure (SWS) for receiving incident light, and a second SWS. The SWS are disposed relative to one another such that incident light which is transmitted by the first SWS passes through the second SWS. The filter has a polarization sensitive resonance, the polarization sensitive resonance substantially reflecting a first polarization component of incident light while substantially transmitting a second polarization component of the incident light, the polarization components being orthogonal to one another. A method for forming polarization filters includes the steps of forming first and second SWS, the first and second SWS disposed relative to one another such that a portion of incident light applied to the first SWS passes through the second SWS. A method for separating polarizations of light, includes the steps of providing a filter formed from a first and second SWS, shining incident light having orthogonal polarization components on the first SWS, and substantially reflecting one of the orthogonal polarization components while substantially transmitting the other orthogonal polarization component. A high Q narrowband filter includes a first and second SWS, the first and second SWS are spaced apart a distance being at least one half an optical wavelength.

  6. Volterra filters for quantum estimation and detection

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2015-12-01

    The implementation of optimal statistical inference protocols for high-dimensional quantum systems is often computationally expensive. To avoid the difficulties associated with optimal techniques, here I propose an alternative approach to quantum estimation and detection based on Volterra filters. Volterra filters have a clear hierarchy of computational complexities and performances, depend only on finite-order correlation functions, and are applicable to systems with no simple Markovian model. These features make Volterra filters appealing alternatives to optimal nonlinear protocols for the inference and control of complex quantum systems. Applications of the first-order Volterra filter to continuous-time quantum filtering, the derivation of a Heisenberg-picture uncertainty relation, quantum state tomography, and qubit readout are discussed.
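A truncated second-order Volterra filter of the kind referred to here can be evaluated directly from its kernels. The sketch below is a generic discrete-time illustration under zero initial conditions (the quantum-filtering application itself is far more involved):

```python
import numpy as np

def volterra2(x, h0, h1, h2):
    """Evaluate a truncated second-order Volterra filter:

    y[n] = h0 + sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]
    """
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    M = len(h1)                                  # kernel memory length
    x = np.asarray(x, float)
    xp = np.concatenate([np.zeros(M - 1), x])    # zero initial conditions
    y = np.empty(len(x))
    for n in range(len(x)):
        v = xp[n:n + M][::-1]                    # [x[n], x[n-1], ..., x[n-M+1]]
        y[n] = h0 + h1 @ v + v @ h2 @ v          # linear + quadratic terms
    return y
```

Setting the quadratic kernel `h2` to zero recovers an ordinary FIR filter, which illustrates the hierarchy of complexities the abstract mentions: each added order contributes higher-order correlation terms.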

  7. Rocket noise filtering system using digital filters

    NASA Technical Reports Server (NTRS)

    Mauritzen, David

    1990-01-01

    A set of digital filters is designed to filter rocket noise to various bandwidths. The filters are designed to have constant group delay and are implemented in software on a general purpose computer. The Parks-McClellan algorithm is used. Preliminary tests are performed to verify the design and implementation. An analog filter which was previously employed is also simulated.
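A minimal sketch of the same design approach using SciPy's implementation of the Parks-McClellan algorithm (`scipy.signal.remez`); the sample rate and band edges are illustrative, not those of the rocket-noise system:

```python
import numpy as np
from scipy import signal

fs = 10_000.0        # assumed sample rate, Hz
numtaps = 101        # odd length -> type-I linear-phase FIR

# Equiripple low-pass: pass band up to 1 kHz, stop band from 1.5 kHz.
taps = signal.remez(numtaps, [0, 1000, 1500, fs / 2], [1, 0], fs=fs)

# A symmetric impulse response means exactly linear phase, i.e. the
# constant group delay of (numtaps - 1) / 2 samples the text calls for.
assert np.allclose(taps, taps[::-1])
```

The symmetry check makes the "constant group delay" property concrete: any linear-phase FIR filter has a (anti)symmetric coefficient vector, so all frequency components are delayed by the same number of samples.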

  8. First-moment filters for spatial independent cluster processes

    NASA Astrophysics Data System (ADS)

    Swain, Anthony; Clark, Daniel E.

    2010-04-01

    A group target is a collection of individual targets which are, for example, part of a convoy of articulated vehicles or a crowd of football supporters and can be represented mathematically as a spatial cluster process. The process of detecting, tracking and identifying group targets requires the estimation of the evolution of such a dynamic spatial cluster process in time based on a sequence of partial observation sets. A suitable generalisation of the Bayes filter for this system would provide us with an optimal (but computationally intractable) estimate of a multi-group multi-object state based on measurements received up to the current time-step. In this paper, we derive the first-moment approximation of the multi-group multi-target Bayes filter, inspired by the first-moment multi-object Bayes filter derived by Mahler. Such approximations are Bayes optimal and provide estimates for the number of clusters (groups) and their positions in the group state-space, as well as estimates for the number of cluster components (object targets) and their positions in target state-space.

  9. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    SciTech Connect

    Monsefi, Farid; Carlsson, Linus; Silvestrov, Sergei; Rančić, Milica; Otterskog, Magnus

    2014-12-10

    To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell's curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. A specific kind of Absorbing Boundary Condition (ABC) is imposed at the boundary, and the outside of the boundary is in the form of a Perfect Electric Conducting (PEC) surface. In the computer implementation, a semi-norm has been applied to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be free space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. Numerical instability of the algorithm can be avoided by respecting the Courant stability condition, which is frequently used in applying the general FDTD algorithm.
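The Courant stability condition mentioned above fixes the largest usable time step for a given spatial grid. A minimal sketch for a uniform 2-D grid (the grid spacing is illustrative):

```python
import numpy as np

c = 299_792_458.0    # speed of light in vacuum, m/s
dx = dy = 1e-3       # spatial steps of the 2-D grid, m (illustrative)

# 2-D Courant (CFL) stability limit for the standard FDTD update:
#   dt <= 1 / (c * sqrt(1/dx**2 + 1/dy**2))
dt_max = 1.0 / (c * np.sqrt(1.0 / dx**2 + 1.0 / dy**2))

# A common safe choice is a fraction of the limit.
dt = 0.95 * dt_max
```

For a 1 mm grid this gives a time step on the order of a few picoseconds; choosing `dt` above `dt_max` makes the explicit update diverge, which is the instability the abstract refers to.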

  10. The optimization of essential oils supercritical CO2 extraction from Lavandula hybrida through static-dynamic steps procedure and semi-continuous technique using response surface method

    PubMed Central

    Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

    2015-01-01

    Aim: The aim of this study was to examine and evaluate crucial variables in the essential-oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min static (8 × 15 min) and 24 min dynamic (8 × 3 min), in contrast to the 4.620% extraction yield for the SC at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed in the SDS method versus the conventional SC method. PMID:25598636

  11. Nonlinear Filtering with Fractional Brownian Motion

    SciTech Connect

    Amirdjanova, A.

    2002-12-19

    Our objective is to study a nonlinear filtering problem for an observation process perturbed by a fractional Brownian motion (FBM) with Hurst index 1/2 < H < 1; an evolution equation for the optimal filter is derived.

  12. Preparation of Prussian Blue Submicron Particles with a Pore Structure by Two-Step Optimization for Na-Ion Battery Cathodes.

    PubMed

    Chen, Renjie; Huang, Yongxin; Xie, Man; Zhang, Qianyun; Zhang, XiaoXiao; Li, Li; Wu, Feng

    2016-06-29

    Traditional Prussian blue (Fe4[Fe(CN)6]3) synthesized by simple rapid precipitation shows poor electrochemical performance because of the presence of vacancies occupied by coordinated water. When the precipitation rate is reduced and polyvinylpyrrolidone K-30 is added as a surface active agent, the as-prepared Prussian blue has fewer vacancies in the crystal structure than in that of traditional Prussian blue. It has a well-defined face-centered-cubic structure, which can provide large channels for Na(+) insertion/extraction. The material, synthesized by slow precipitation, has an initial discharge capacity of 113 mA h g(-1) and maintains 93 mA h g(-1) under a current density of 50 mA g(-1) after 150 charge-discharge cycles. After further optimization by a chemical etching method, the complex nanoporous structure of Prussian blue has a high Brunauer-Emmett-Teller surface area and a stable structure to achieve high specific capacity and long cycle life. Surprisingly, the electrode shows an initial discharge capacity of 115 mA h g(-1) and a Coulombic efficiency of approximately 100% with capacity retention of 96% after 150 cycles. Experimental results show that Prussian blue can also be used as a cathode for Na-ion batteries. PMID:27267656

  13. Further steps toward direct magnetic resonance (MR) imaging detection of neural action currents: optimization of MR sensitivity to transient and weak currents in a conductor.

    PubMed

    Pell, Gaby S; Abbott, David F; Fleming, Steven W; Prichard, James W; Jackson, Graeme D

    2006-05-01

    The characteristics of an MRI technique that could be used for direct detection of neuronal activity are investigated. It was shown that magnitude imaging using echo planar imaging can detect transient local currents. The sensitivity of this method was thoroughly investigated. A partial k-space EPI acquisition with homodyne reconstruction was found to increase the signal change. A unique sensitivity to the position of the current pulse within the imaging sequence was demonstrated with the greatest signal change occurring when the current pulse coincides with the acquisition of the center lines of k-space. The signal change was shown to be highly sensitive to the spatial position of the current conductor relative to the voxel. Furthermore, with the use of optimization of spatial and temporal placement of the current pulse, the level of signal change obtained at this lower limit of current detectability was considerably magnified. It was possible to detect a current of 1.7 microA applied for 20 ms with an imaging time of 1.8 min. The level of sensitivity observed in our study brings us closer to that theoretically required for the detection of action currents in nerves.

  14. Filter for biomedical imaging and image processing

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on the a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography (PET) imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.
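The abstract does not give the paper's exact update rule, so the sketch below assumes an Oja-stabilised Hebbian update of FIR coefficients on a 1-D signal, purely to illustrate the idea of adapting filter coefficients from neighborhood samples; the function name and all parameters are hypothetical.

```python
import numpy as np

def hebbian_fir_denoise(x, taps=5, eta=0.01, epochs=3):
    """Hedged sketch: adapt FIR coefficients with an Oja-stabilised
    Hebbian rule driven by local neighborhoods, then filter with them."""
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.1, size=taps)          # random initial coefficients
    x = np.asarray(x, float)
    xp = np.concatenate([np.zeros(taps - 1), x])  # zero-padded history
    for _ in range(epochs):
        for n in range(len(x)):
            v = xp[n:n + taps][::-1]              # local neighborhood samples
            y = w @ v                             # current filter output
            w += eta * y * (v - y * w)            # Hebbian term + Oja decay
    return np.convolve(x, w, mode="same"), w
```

Oja's decay term keeps the coefficient vector bounded, so the learned filter tracks the dominant correlation structure of the signal rather than growing without limit, which is the role the interpixel correlation plays in the paper's description.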

  15. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    SciTech Connect

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I; Rozendaal, R; Spreeuw, H; Herk, M van

    2014-06-15

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to raise automatically alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
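The halt trigger described above amounts to thresholding the RMS difference between the planned and EPID-reconstructed dose grids. A minimal sketch, with the threshold value and dose units assumed for illustration (the clinical system's criteria are not specified in the abstract):

```python
import numpy as np

def delivery_ok(planned, reconstructed, rms_threshold=2.0):
    """Hedged sketch: compare cumulative 3-D dose grids (units assumed cGy)
    and return True if delivery may continue, False to trigger a halt."""
    diff = np.asarray(reconstructed, float) - np.asarray(planned, float)
    rms = float(np.sqrt(np.mean(diff ** 2)))      # RMS dose difference
    return rms <= rms_threshold
```

In the reported workflow this comparison runs online after every portal image, so the check must complete well within the ~200 ms per-image budget; a single vectorised RMS over the dose grid easily meets that.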

  16. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single-layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
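A damped Gauss-Newton update of the general form described, sketched on a toy exponential-fit problem; the patent's adaptive learning-rate logic is replaced here by a fixed damping term, which is an assumption made for brevity.

```python
import numpy as np

def gauss_newton(residual, jacobian, w0, steps=30, damping=1e-3):
    """Hedged sketch of a damped (modified) Gauss-Newton iteration:

        w <- w - (J^T J + damping*I)^(-1) J^T r

    a step much closer to the Newton direction than plain gradient descent.
    """
    w = np.asarray(w0, float)
    for _ in range(steps):
        r = residual(w)                  # residual vector at current weights
        J = jacobian(w)                  # Jacobian of residuals w.r.t. weights
        step = np.linalg.solve(J.T @ J + damping * np.eye(len(w)), J.T @ r)
        w = w - step
    return w

# Toy system: recover a = 2, b = -1 in y = a * exp(b * t) from noiseless data.
t = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.0 * t)
residual = lambda w: w[0] * np.exp(w[1] * t) - y
jacobian = lambda w: np.stack([np.exp(w[1] * t),
                               w[0] * t * np.exp(w[1] * t)], axis=1)
w = gauss_newton(residual, jacobian, [1.0, 0.0])
```

On zero-residual problems like this one, the exact solution is a fixed point of the damped iteration, so the damping costs accuracy only transiently; this is the "closer to the Newton direction" behaviour the abstract contrasts with backpropagation.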

  17. A simple methodological approach for counting and identifying culturable viruses adsorbed to cellulose nitrate membrane filters.

    PubMed

    Papageorgiou, G T; Mocé-Llivina, L; Christodoulou, C G; Lucena, F; Akkelidou, D; Ioannou, E; Jofre, J

    2000-01-01

    We identified conditions under which Buffalo green monkey cells grew on the surfaces of cellulose nitrate membrane filters in such a way that they covered the entire surface of each filter and penetrated through the pores. When such conditions were used, poliovirus that had previously been adsorbed on the membranes infected the cells and replicated. A plaque assay method and a quantal method (most probable number of cytopathic units) were used to detect and count the viruses adsorbed on the membrane filters. Polioviruses in aqueous suspensions were then concentrated by adsorption to cellulose membrane filters and were subsequently counted without elution, a step which is necessary when the commonly used methods are employed. The pore size of the membrane filter, the sample contents, and the sample volume were optimized for tap water, seawater, and a 0.25 M glycine buffer solution. The numbers of viruses recovered under the optimized conditions were more than 50% greater than the numbers counted by the standard plaque assay. When ceftazidime was added to the assay medium in addition to the antibiotics which are typically used, the method could be used to study natural samples with low and intermediate levels of microbial pollution without decontamination of the samples. This methodological approach also allowed plaque hybridization either directly on cellulose nitrate membranes or on Hybond N+ membranes after the preparations were transferred.

  18. ADVANCED HOT GAS FILTER DEVELOPMENT

    SciTech Connect

    E.S. Connolly; G.D. Forsythe

    2000-09-30

    DuPont Lanxide Composites, Inc. undertook a sixty-month program, under DOE Contract DEAC21-94MC31214, in order to develop hot gas candle filters from a patented material technology known as PRD-66. The goal of this program was to extend the development of this material as a filter element and fully assess the capability of this technology to meet the needs of Pressurized Fluidized Bed Combustion (PFBC) and Integrated Gasification Combined Cycle (IGCC) power generation systems at commercial scale. The principal objective of Task 3 was to build on the initial PRD-66 filter development, optimize its structure, and evaluate basic material properties relevant to the hot gas filter application. Initially, this consisted of an evaluation of an advanced filament-wound core structure that had been designed to produce an effective bulk filter underneath the barrier filter formed by the outer membrane. The basic material properties to be evaluated (as established by the DOE/METC materials working group) would include mechanical, thermal, and fracture toughness parameters for both new and used material, for the purpose of building a material database consistent with what is being done for the alternative candle filter systems. Task 3 was later expanded to include analysis of PRD-66 candle filters, which had been exposed to actual PFBC conditions, development of an improved membrane, and installation of equipment necessary for the processing of a modified composition. Task 4 would address essential technical issues involving the scale-up of PRD-66 candle filter manufacturing from prototype production to commercial scale manufacturing. The focus would be on capacity (as it affects the ability to deliver commercial order quantities), process specification (as it affects yields, quality, and costs), and manufacturing systems (e.g. QA/QC, materials handling, parts flow, and cost data acquisition). Any filters fabricated during this task would be used for product qualification tests.

  19. Organic solvent-free air-assisted liquid-liquid microextraction for optimized extraction of illegal azo-based dyes and their main metabolite from spices, cosmetics and human bio-fluid samples in one step.

    PubMed

    Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Sabzalian, Sedigheh

    2015-08-15

    Air-assisted liquid-liquid microextraction (AALLME) has unique capabilities to develop as an organic solvent-free and one-step microextraction method, applying ionic liquids as the extraction solvent and avoiding a centrifugation step. Herein, a novel and simple eco-friendly method, termed one-step air-assisted liquid-liquid microextraction (OS-AALLME), was developed to extract some illegal azo-based dyes (including Sudan I to IV, and Orange G) from food and cosmetic products. A series of experiments was performed to achieve the most favorable conditions (including extraction solvent: 77 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate; sample pH 6.3, without salt addition; and extraction cycles: 25 during 100 s of sonication) using a central composite design strategy. Under these conditions, limits of detection, linear dynamic ranges, enrichment factors and consumptive indices were in the range of 3.9-84.8 ng mL(-1), 0.013-3.1 μg mL(-1), 33-39, and 0.13-0.15, respectively. The results showed that, besides its simplicity, speed, and avoidance of hazardous disperser and extraction solvents, OS-AALLME is a sufficiently sensitive and efficient method for the extraction of these dyes from complex matrices. After optimization and validation, OS-AALLME was applied to estimate the concentration of 1-amino-2-naphthol in human bio-fluids as a main reductive metabolite of the selected dyes. Levels of 1-amino-2-naphthol in plasma and urinary excretion suggested that this compound may be used as a new potential biomarker of these dyes in the human body. PMID:26149246

  1. Generating an optimal DTM from airborne laser scanning data for landslide mapping in a tropical forest environment

    NASA Astrophysics Data System (ADS)

    Razak, Khamarrul Azahari; Santangelo, Michele; Van Westen, Cees J.; Straatsma, Menno W.; de Jong, Steven M.

    2013-05-01

    Landslide inventory maps are fundamental for assessing landslide susceptibility, hazard, and risk. In tropical mountainous environments, mapping landslides is difficult as rapid and dense vegetation growth obscures landslides soon after their occurrence. Airborne laser scanning (ALS) data have been used to construct the digital terrain model (DTM) under dense vegetation, but its reliability for landslide recognition in the tropics remains surprisingly unknown. This study evaluates the suitability of ALS for generating an optimal DTM for mapping landslides in the Cameron Highlands, Malaysia. For the bare-earth extraction, we used hierarchical robust filtering algorithm and a parameterization with three sequential filtering steps. After each filtering step, four interpolations techniques were applied, namely: (i) the linear prediction derived from the SCOP++ (SCP), (ii) the inverse distance weighting (IDW), (iii) the natural neighbor (NEN) and (iv) the topo-to-raster (T2R). We assessed the quality of 12 DTMs in two ways: (1) with respect to 448 field-measured terrain heights and (2) based on the interpretability of landslides. The lowest root-mean-square error (RMSE) was 0.89 m across the landscape using three filtering steps and linear prediction as interpolation method. However, we found that a less stringent DTM filtering unveiled more diagnostic micro-morphological features, but also retained some of vegetation. Hence, a combination of filtering steps is required for optimal landslide interpretation, especially in forested mountainous areas. IDW was favored as the interpolation technique because it combined computational times more reasonably without adding artifacts to the DTM than T2R and NEN, which performed relatively well in the first and second filtering steps, respectively. The laser point density and the resulting ground point density after filtering are key parameters for producing a DTM applicable to landslide identification. 
The results showed that the
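
    The interpolation step described above can be illustrated with inverse distance weighting, the technique the study favored. The following is a minimal sketch of classic Shepard-style IDW, not the implementation used in the study; the function name and the power-of-2 weighting are illustrative assumptions.

    ```python
    import numpy as np

    def idw_interpolate(xy, z, grid_xy, power=2.0, eps=1e-12):
        """Inverse distance weighting: estimate heights at grid nodes
        from scattered ground points (classic Shepard form)."""
        # distances from every grid node to every ground point
        d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power          # closer points weigh more
        return (w * z).sum(axis=1) / w.sum(axis=1)

    # toy example: four ground points, one query node in the middle
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    hts = np.array([10.0, 12.0, 11.0, 13.0])
    node = np.array([[0.5, 0.5]])
    print(idw_interpolate(pts, hts, node))  # equidistant points -> mean, [11.5]
    ```

    Because IDW is a local weighted average, it cannot overshoot the input heights, which is one reason it tends to add fewer artifacts than global spline-like interpolators.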

  2. SU-E-I-62: Assessing Radiation Dose Reduction and CT Image Optimization Through the Measurement and Analysis of the Detector Quantum Efficiency (DQE) of CT Images Using Different Beam Hardening Filters

    SciTech Connect

    Collier, J; Aldoohan, S; Gill, K

    2014-06-01

    Purpose: Reducing patient dose while maintaining (or even improving) image quality is one of the foremost goals in CT imaging. To this end, we consider the feasibility of optimizing CT scan protocols in conjunction with the application of different beam-hardening filtrations and assess this augmentation through noise-power spectrum (NPS) and detector quantum efficiency (DQE) analysis. Methods: American College of Radiology (ACR) and Catphan phantoms (The Phantom Laboratory) were scanned with a 64-slice CT scanner after additional filtration of various thicknesses and compositions (e.g., copper, nickel, tantalum, titanium, and tungsten) had been applied. A MATLAB-based code was employed to calculate the NPS of the images. The Catphan Image Owl software suite was then used to compute the modulation transfer function (MTF) responses of the scanner. The DQE for each additional filter, including the inherent filtration, was then computed from these values. Finally, CT dose index (CTDIvol) values were obtained for each applied filtration through the use of a 100 mm pencil ionization chamber and a CT dose phantom. Results: NPS, MTF, and DQE values were computed for each applied filtration and compared to the reference case of inherent beam-hardening filtration only. Results showed that the NPS values were reduced by between 5 and 12% compared to the inherent-filtration case. Additionally, CTDIvol values were reduced by between 15 and 27% depending on the composition of the filtration applied. However, no noticeable changes in image contrast-to-noise ratios were noted. Conclusion: The reduction in the quantum-noise section of the NPS profile found in this phantom-based study is encouraging. The reduction in both noise and dose through the application of beam-hardening filters is reflected in our phantom image quality. However, further investigation is needed to ascertain the applicability of this approach to reducing patient dose while maintaining diagnostically acceptable image qualities in a
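
    For context, DQE is conventionally obtained by combining the MTF and NPS. The sketch below uses the standard projection-imaging form DQE(f) = mean_signal² · MTF(f)² / (fluence · NPS(f)); the abstract does not state the exact normalization used with the Image Owl outputs, and all numbers below are hypothetical.

    ```python
    import numpy as np

    def dqe(mtf, nps, mean_signal, fluence):
        """Detective quantum efficiency from MTF and NPS:
        DQE(f) = mean_signal^2 * MTF(f)^2 / (fluence * NPS(f))."""
        return mean_signal ** 2 * mtf ** 2 / (fluence * nps)

    # sanity check: a Poisson-limited detector with unit gain has DQE = 1
    q = 1.0e4                          # incident photons per unit area (hypothetical)
    nps_ideal = np.full(5, 1.0 / q)    # flat, quantum-limited NPS
    print(dqe(mtf=np.ones(5), nps=nps_ideal, mean_signal=1.0, fluence=q))
    # -> [1. 1. 1. 1. 1.]
    ```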

  3. Miniaturized dielectric waveguide filters

    NASA Astrophysics Data System (ADS)

    Sandhu, Muhammad Y.; Hunter, Ian C.

    2016-10-01

    Design techniques for a new class of integrated monolithic high-permittivity ceramic waveguide filters are presented. These filters enable a size reduction of 50% compared to air-filled transverse electromagnetic filters with the same unloaded Q-factor. Designs for Chebyshev and asymmetric generalised Chebyshev filters and a diplexer are presented with experimental results for an 1800 MHz Chebyshev filter and a 1700 MHz generalised Chebyshev filter showing excellent agreement with theory.

  4. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg² IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  5. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg$^2$ in Five Filters

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-06-25

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg(2) on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg(2) of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  6. The use of filter media to determine filter cleanliness

    NASA Astrophysics Data System (ADS)

    Van Staden, S. J.; Haarhoff, J.

    It is generally believed that a sand filter starts its life with new, perfectly clean media, which becomes gradually clogged with each filtration cycle, eventually getting to a point where either head loss or filtrate quality starts to deteriorate. At this point the backwash cycle is initiated and, through the combined action of air and water, returns the media to its original perfectly clean state. Reality, however, dictates otherwise. Many treatment plants visited a decade or more after commissioning are found to have unacceptably dirty filter sand and backwash systems incapable of returning the filter media to a desired state of cleanliness. Some of these problems are common ones encountered in filtration plants, but many reasons for media deterioration remain elusive, falling outside these common problems. The South African conditions of highly eutrophic surface waters at high temperatures, however, exacerbate the problems with dirty filter media. Such conditions often lead to the formation of biofilm in the filter media, which is shown to inhibit the effective backwashing of sand and carbon filters. A systematic investigation into filter media cleanliness was therefore started in 2002, ending in 2005, at the University of Johannesburg (the then Rand Afrikaans University). This involved media from eight South African Water Treatment Plants, varying between sand and sand-anthracite combinations and raw water types from eutrophic through turbid to low-turbidity waters. Five states of cleanliness and four fractions of specific deposit were identified, relating to in situ washing, column washing, cylinder inversion and acid-immersion techniques. These were measured and the results compared to acceptable limits for specific deposit, as determined in previous studies, though expressed in kg/m³. These values were used to determine the state of the filters. 
In order to gain greater insight into the composition of the specific deposits stripped from the media, a

  7. Design of order statistics filters using feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu. S.; Bochkarev, V. V.

    2016-08-01

    In recent years significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics, the widely used median filter being the best-known example. A generalized form of these filters can be presented based on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used to select the initial weights for the neural network. The adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with asymmetric distribution functions. Different examples demonstrate the properties and performance of the presented approach.
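
    The order-statistic filtering described above can be sketched directly as a plain sliding-window implementation (this is the classical filter, not the paper's neural-network synthesis; names are illustrative):

    ```python
    import numpy as np

    def rank_order_filter(x, window=3, rank=None):
        """Generic order-statistic filter: sort each sliding window and
        take the element at position `rank` (median by default)."""
        if rank is None:
            rank = window // 2                 # middle rank = median
        pad = window // 2
        xp = np.pad(x, pad, mode="edge")       # replicate edges
        out = np.empty_like(x, dtype=float)
        for i in range(len(x)):
            out[i] = np.sort(xp[i:i + window])[rank]
        return out

    # impulsive noise is rejected by the median but not by a mean filter
    sig = np.array([1.0, 1.0, 9.0, 1.0, 1.0])   # single outlier spike
    print(rank_order_filter(sig, window=3))      # -> [1. 1. 1. 1. 1.]
    ```

    Setting `rank=0` or `rank=window-1` turns the same routine into a minimum or maximum filter, which is the generalization the order-statistic family exploits.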

  8. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
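
    The equivalence the patent exploits — a nonlinear median realized as binary threshold slices, linear sums, and point-wise comparisons — can be checked numerically. This is a software sketch of the principle only, not the optical system:

    ```python
    import numpy as np

    def median_by_threshold_decomposition(window):
        """Median of small non-negative integer samples via threshold
        decomposition: slice the window into binary threshold signals,
        take a majority vote (a linear sum plus one comparison) on each
        slice, then stack the binary results back up."""
        window = np.asarray(window)
        levels = np.arange(1, window.max() + 1)
        # binary components: 1 where the sample reaches each threshold
        slices = (window[None, :] >= levels[:, None]).astype(int)
        # binary median = majority vote = (sum >= half the window size)
        votes = slices.sum(axis=1) >= (len(window) + 1) // 2
        return votes.sum()                      # recombine the slices

    w = [3, 7, 2]
    print(median_by_threshold_decomposition(w), int(np.median(w)))  # -> 3 3
    ```

    The payoff in the patent is that every per-slice operation is either linear and space-invariant (a correlation, done optically) or a point-wise threshold (done electronically), so the nonlinear ranking never has to be computed directly.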

  9. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.

  10. An online novel adaptive filter for denoising time series measurements.

    PubMed

    Willis, Andrew J

    2006-04-01

    A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562
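
    As a point of reference for the filter described above, the stationary scalar Wiener rule it generalizes is easy to state: scale the zero-mean part of the signal by σ_s²/(σ_s² + σ_n²). The toy sketch below is that stationary analogue only, not the paper's PCA-based nonstationary filter, and it assumes the noise variance is known:

    ```python
    import numpy as np

    def wiener_shrink(x, noise_var):
        """Scalar Wiener shrinkage: attenuate the zero-mean part of x
        by the MMSE gain signal_var / (signal_var + noise_var)."""
        signal_var = max(np.var(x) - noise_var, 0.0)   # estimated signal power
        gain = signal_var / (signal_var + noise_var)
        return gain * (x - x.mean()) + x.mean()

    rng = np.random.default_rng(2)
    clean = np.sin(np.linspace(0, 4 * np.pi, 500))
    noisy = clean + rng.standard_normal(500) * 0.5
    denoised = wiener_shrink(noisy, noise_var=0.25)
    # shrinkage lowers the mean squared error versus the raw noisy signal
    print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # -> True
    ```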

  11. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is infeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter, showing a new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.
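
    The global-optimization idea can be sketched with a toy genetic algorithm. This is illustrative only: the actual device-design codes and their parallel implementation are far more elaborate, and every name and parameter below is hypothetical.

    ```python
    import random

    def genetic_minimize(fitness, n_genes, pop=30, gens=60,
                         mut_rate=0.1, seed=1):
        """Tiny real-coded genetic algorithm: tournament selection,
        uniform crossover, Gaussian mutation."""
        rng = random.Random(seed)
        P = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop)]
        for _ in range(gens):
            def pick():                          # tournament of two
                a, b = rng.sample(P, 2)
                return a if fitness(a) < fitness(b) else b
            children = []
            for _ in range(pop):
                ma, pa = pick(), pick()
                child = [ma[i] if rng.random() < 0.5 else pa[i]
                         for i in range(n_genes)]
                if rng.random() < mut_rate:      # mutate one gene
                    i = rng.randrange(n_genes)
                    child[i] += rng.gauss(0, 0.1)
                children.append(child)
            P = children
        return min(P, key=fitness)

    # toy "design" objective: sphere function, optimum at the origin
    best = genetic_minimize(lambda g: sum(x * x for x in g), n_genes=3)
    print(sum(x * x for x in best))   # small residual near the optimum at 0
    ```

    The appeal for device design is exactly what the abstract notes: the fitness function can be any black-box simulation, and the population evaluations parallelize trivially.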

  12. HEPA filter dissolution process

    DOEpatents

    Brewer, K.N.; Murphy, J.A.

    1994-02-22

    A process is described for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal. 4 figures.

  13. Hepa filter dissolution process

    DOEpatents

    Brewer, Ken N.; Murphy, James A.

    1994-01-01

    A process for dissolution of spent high efficiency particulate air (HEPA) filters and then combining the complexed filter solution with other radioactive wastes prior to calcining the mixed and blended waste feed. The process is an alternate to a prior method of acid leaching the spent filters which is an inefficient method of treating spent HEPA filters for disposal.

  14. Recirculating electric air filter

    DOEpatents

    Bergman, W.

    1985-01-09

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  15. Recirculating electric air filter

    DOEpatents

    Bergman, Werner

    1986-01-01

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  16. Filter design for directional multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Cunha, Arthur L.; Do, Minh N.

    2005-08-01

    In this paper we discuss recent developments in design tools and methods for multidimensional filter banks in the context of directional multiresolution representations. Due to the inherent non-separability of the filters and the lack of multi-dimensional factorization tools, one generally has to resort to indirect methods. One such method is the mapping technique. In the context of contourlets we review methods for designing filters with directional vanishing moments (DVM). The DVM property is crucial in guaranteeing the non-linear approximation efficacy of contourlets. Our approach allows for easy design of two-channel linear-phase filter banks with DVM of any order. Next we study the design via mapping of nonsubsampled filter banks. Our methodology allows for a fast implementation through ladder steps. The proposed design is then used to construct the nonsubsampled contourlet transform, which is particularly efficient in image denoising, as experiments in this paper show.

  17. Metal-dielectric metameric filters for optically variable devices

    NASA Astrophysics Data System (ADS)

    Xiao, Lixiang; Chen, Nan; Deng, Zihao; Wang, Xiaozhong; Guo, Rong; Bu, Yikun

    2016-01-01

    A pair of metal-dielectric metameric filters that can create a hidden image is presented for the first time. The structure of the filters is simple: only six layers for filter A and five layers for filter B. The prototype filters were designed using the film color target optimization method, and the design results show that, at normal observation angle, the reflected colors of the pair of filters are both green and the color difference index between them is only 0.9017. At an observation angle of 60°, filter A is violet and filter B is blue. The filters were fabricated by a remote plasma sputtering process and the experimental results were in accordance with the designs.

  18. Recent progress in plasmonic colour filters for image sensor and multispectral applications

    NASA Astrophysics Data System (ADS)

    Pinton, Nadia; Grant, James; Choubey, Bhaskar; Cumming, David; Collins, Steve

    2016-04-01

    Using nanostructured thin metal films as colour filters offers several important advantages, in particular high tunability across the entire visible spectrum and some of the infrared region, and also compatibility with conventional CMOS processes. Since 2003, the field of plasmonic colour filters has evolved rapidly and several different designs and materials, or combinations of materials, have been proposed and studied. In this paper we present a simulation study for a single-step lithographically patterned multilayer structure able to provide competitive transmission efficiencies above 40% and simultaneously a FWHM of the order of 30 nm across the visible spectrum. The total thickness of the proposed filters is less than 200 nm and is constant for every wavelength, unlike e.g. resonant cavity-based filters such as Fabry-Perot that require a variable stack of several layers according to the working frequency, and their passband characteristics are entirely controlled by changing the lithographic pattern. It will also be shown that a key to obtaining a narrow-band optical response lies in the dielectric environment of a nanostructure and that it is not necessary to have a symmetric structure to ensure good coupling between the SPPs at the top and bottom interfaces. Moreover, an analytical method to evaluate the periodicity, given a specific structure and a desired working wavelength, will be proposed and its accuracy demonstrated. This method conveniently eliminates the need to optimize the design of a filter numerically, i.e. by running several time-consuming simulations with different periodicities.

  19. A Filtering Method For Gravitationally Stratified Flows

    SciTech Connect

    Gatti-Bono, Caroline; Colella, Phillip

    2005-04-25

    Gravity waves arise in gravitationally stratified compressible flows at low Mach and Froude numbers. These waves can have a negligible influence on the overall dynamics of the fluid but, for numerical methods where the acoustic waves are treated implicitly, they impose a significant restriction on the time step. A way to alleviate this restriction is to filter out the modes corresponding to the fastest gravity waves so that a larger time step can be used. This paper presents a filtering strategy of the fully compressible equations based on normal mode analysis that is used throughout the simulation to compute the fast dynamics and that is able to damp only fast gravity modes.

  20. ARRANGEMENT FOR REPLACING FILTERS

    DOEpatents

    Blomgren, R.A.; Bohlin, N.J.C.

    1957-08-27

    An improved filtered air exhaust system which may be continually operated during the replacement of the filters without the escape of unfiltered air is described. This is accomplished by hermetically sealing the box-like filter containers in a rectangular tunnel with neoprene-covered sponge rubber sealing rings coated with a silicone-impregnated pneumatic grease. The tunnel through which the filters are pushed is normal to the exhaust air duct. A number of unused filters are in line behind the filters in use, and are moved by a hydraulic ram so that a fresh filter is positioned in the air duct. The used filter is pushed into a waiting receptacle and is suitably disposed of. This device permits rapid and safe replacement of a radiation-contaminated filter without interruption to the normal flow of exhaust air.

  1. Effects of electron beam irradiation of cellulose acetate cigarette filters

    NASA Astrophysics Data System (ADS)

    Czayka, M.; Fisch, M.

    2012-07-01

    A method to reduce the molecular weight of cellulose acetate used in cigarette filters by using electron beam irradiation is demonstrated. Radiation levels easily obtained with commercially available electron accelerators result in a decrease in average molecular weight of about six times, with no embrittlement or significant change in the elastic behavior of the filter. Since a first step in the biodegradation of cigarette filters is a reduction in the filter material's molecular weight, this invention has the potential to allow the production of significantly faster-degrading filters.

  2. Method of securing filter elements

    DOEpatents

    Brown, Erik P.; Haslam, Jeffery L.; Mitchell, Mark A.

    2016-10-04

    A filter securing system including a filter unit body housing; at least one tubular filter element positioned in the filter unit body housing, the tubular filter element having a closed top and an open bottom; a dimple in either the filter unit body housing or the top of the tubular filter element; and a socket in either the filter unit body housing or the top of the tubular filter element that receives the dimple in either the filter unit body housing or the top of the tubular filter element to secure the tubular filter element to the filter unit body housing.

  3. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  4. A Novel Design Approach for Contourlet Filter Banks

    NASA Astrophysics Data System (ADS)

    Yang, Guoan; van de Wetering, Huub; Hou, Ming; Ikuta, Chihiro; Liu, Yuehu

    This letter proposes a novel design approach for optimal contourlet filter banks based on the parametric 9/7 filter family. The Laplacian pyramid decomposition is replaced by optimal 9/7 filter banks with rational coefficients, and directional filter banks are activated using a pkva 12 filter in the contourlets. Moreover, based on this optimal 9/7 filter, we present an image denoising approach using a contourlet domain hidden Markov tree model. Finally, experimental results show that our approach in denoising images with texture detail is only 0.20 dB below the method of Po and Do, and the visual quality is as good as for their method. Compared with the method of Po and Do, our approach has lower computational complexity and is more suitable for VLSI hardware implementation.

  5. Laboratory comparison of continuous vs. binary phase-mostly filters

    NASA Technical Reports Server (NTRS)

    Monroe, Stanley E., Jr.; Knopp, Jerome; Juday, Richard D.

    1989-01-01

    Recent developments in spatial light modulators have led to devices which are capable of continuous phase modulation, even if only over a limited range. One of these devices, the deformable mirror device (DMD), is used to compare the relative merits of binary and partially continuous phase filters in a specific problem of pattern recognition by optical correlation. Each filter was physically limited to only about a radian of modulation. Researchers have predicted that for low input noise levels, continuous phase-only filters should have a higher absolute correlator peak output than the corresponding binary filters, as well as a larger SNR. When continuous and binary filters were first implemented on the DMD, they exhibited the same performance; an ad hoc filter optimization procedure was therefore developed for use in the laboratory. The optimized continuous filter gave higher correlation peaks than did an independently optimized binary filter. Background behavior in the correlation plane was similar for the two filters, and thus the SNR showed the same improvement for the continuous filter. A phasor diagram analysis and computer simulation have explained part of the optimization procedure's success.

  6. Rigid porous filter

    DOEpatents

    Chiang, Ta-Kuan; Straub, Douglas L.; Dennis, Richard A.

    2000-01-01

    The present invention involves a porous rigid filter including a plurality of concentric filtration elements having internal flow passages and forming external flow passages therebetween. The present invention also involves a pressure vessel containing the filter for the removal of particulates from high pressure particulate-containing gases, and further involves a method for using the filter to remove such particulates. The present filter has the advantage of requiring fewer filter elements due to the high surface area-to-volume ratio provided by the filter, requires a reduced pressure vessel size, and exhibits enhanced mechanical design properties, improved cleaning properties, configuration options, modularity and ease of fabrication.

  7. Nearest matched filter classification of spatiotemporal patterns.

    PubMed

    Hecht-Nielsen, R

    1987-05-15

    Recent advances in massively parallel optical and electronic neural network processing technology have made it plausible to consider the use of matched filter banks containing large numbers of individual filters as pattern classifiers for complex spatiotemporal pattern environments such as speech, sonar, radar, and advanced communications. This paper begins with an overview of how neural networks can be used to approximately implement such multidimensional matched filter banks. The nearest matched filter classifier is then formally defined. This definition is then reformulated to show that the classifier is equivalent to a nearest neighbor classifier in a separable infinite-dimensional metric space that specifies the local-in-time behavior of spatiotemporal patterns. The result of Cover and Hart is then applied to show that, given a statistically comprehensive set of filter templates, the nearest matched filter classifier will have near-Bayesian performance for spatiotemporal patterns. The combination of near-Bayesian classifier performance with the excellent performance of matched filtering in noise yields a powerful new classification technique. This result adds additional interest to Grossberg's hypothesis that the mammalian cerebral cortex carries out local-in-time nearest matched filter classification of both auditory and visual sensory inputs as an initial step in sensory pattern recognition, which may help explain the almost instantaneous pattern recognition capabilities of animals.
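
    A nearest matched filter decision of the kind formalized above reduces, for a fixed alignment, to picking the template with the largest normalized correlation. This is a minimal sketch; the paper's classifier operates over spatiotemporal patterns and time shifts, and the signals below are made up for illustration.

    ```python
    import numpy as np

    def nearest_matched_filter(x, templates):
        """Classify x as the index of the template with the largest
        normalized correlation (matched filter) response."""
        x = x / np.linalg.norm(x)
        scores = [float(np.dot(x, t / np.linalg.norm(t))) for t in templates]
        return int(np.argmax(scores))

    # two template "patterns" and a noisy copy of the second one
    rng = np.random.default_rng(0)
    bank = [np.sin(np.linspace(0, 2 * np.pi, 64)),
            np.sign(np.sin(np.linspace(0, 6 * np.pi, 64)))]
    probe = bank[1] + 0.3 * rng.standard_normal(64)
    print(nearest_matched_filter(probe, bank))  # -> 1
    ```

    Because each score is an inner product, a large template bank maps naturally onto the parallel correlator hardware the paper has in mind.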

  8. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, Harry S.; Thompson, Robert C.; Hubbard, Charles W.; Perkins, Richard W.

    1997-01-01

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant.

  9. Filter type gas sampler with filter consolidation

    DOEpatents

    Miley, H.S.; Thompson, R.C.; Hubbard, C.W.; Perkins, R.W.

    1997-03-25

    Disclosed is an apparatus for automatically consolidating a filter or, more specifically, an apparatus for drawing a volume of gas through a plurality of sections of a filter, whereafter the sections are subsequently combined for the purpose of simultaneously interrogating the sections to detect the presence of a contaminant. 5 figs.

  10. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
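
    The on-line LMS coefficient update mentioned above — nudge each tap along the instantaneous error gradient — can be sketched as a generic system-identification demo. This is not from the review; the step size, tap count, and the "unknown" system below are arbitrary illustrative choices.

    ```python
    import numpy as np

    def lms_filter(d, x, n_taps=4, mu=0.01):
        """LMS adaptive filter: tap weights are updated on-line so the
        filter output y tracks the desired signal d from reference x."""
        w = np.zeros(n_taps)
        y = np.zeros(len(d))
        for n in range(n_taps - 1, len(d)):
            u = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
            y[n] = w @ u
            e = d[n] - y[n]                    # instantaneous error
            w += 2 * mu * e * u                # steepest-descent update
        return y, w

    # identify an "unknown" FIR system from its input and output
    rng = np.random.default_rng(1)
    x = rng.standard_normal(4000)
    h = np.array([0.5, -0.3, 0.2, 0.1])        # hidden system taps
    d = np.convolve(x, h)[:len(x)]
    _, w = lms_filter(d, x)
    print(np.round(w, 2))                      # converges toward h
    ```

    The same update, with `d` taken as a noisy biological recording and `x` as a noise reference, is the interference-cancellation configuration the review discusses.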

  11. Constrained optimization of image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1973-01-01

    A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.

  12. HEPA Filter Vulnerability Assessment

    SciTech Connect

    GUSTAVSON, R.D.

    2000-05-11

    This assessment of High Efficiency Particulate Air (HEPA) filter vulnerability was requested by the USDOE Office of River Protection (ORP) to satisfy a DOE-HQ directive to evaluate the effect of filter degradation on the facility authorization basis assumptions. Within the scope of this assessment are ventilation system HEPA filters that are classified as Safety-Class (SC) or Safety-Significant (SS) components that perform an accident mitigation function. The objective of the assessment is to verify whether HEPA filters that perform a safety function during an accident are likely to perform as intended to limit release of hazardous or radioactive materials, considering factors that could degrade the filters. Filter degradation factors considered include aging, wetting of filters, exposure to high temperature, exposure to corrosive or reactive chemicals, and exposure to radiation. Screening and evaluation criteria were developed by a site-wide group of HVAC engineers and HEPA filter experts from published empirical data. For River Protection Project (RPP) filters, the only degradation factor that exceeded the screening threshold was filter aging. Subsequent evaluation of the effect of filter aging on the filter strength was conducted, and the results were compared with required performance to meet the conditions assumed in the RPP Authorization Basis (AB). It was found that the reduction in filter strength due to aging does not affect the filter performance requirements as specified in the AB. A portion of the HEPA filter vulnerability assessment is being conducted by the ORP and is not part of the scope of this study. The ORP is conducting an assessment of the existing policies and programs relating to maintenance, testing, and change-out of HEPA filters used for SC/SS service. This document presents the results of a HEPA filter vulnerability assessment conducted for the River Protection Project as requested by the DOE Office of River Protection.

  13. Cordierite silicon nitride filters

    SciTech Connect

    Sawyer, J.; Buchan, B. ); Duiven, R.; Berger, M. ); Cleveland, J.; Ferri, J. )

    1992-02-01

    The objective of this project was to develop a silicon nitride based crossflow filter. This report summarizes the findings and results of the project. The project was phased with Phase I consisting of filter material development and crossflow filter design. Phase II involved filter manufacturing, filter testing under simulated conditions and reporting the results. In Phase I, Cordierite Silicon Nitride (CSN) was developed and tested for permeability and strength. Target values for each of these parameters were established early in the program. The values were met by the material development effort in Phase I. The crossflow filter design effort proceeded by developing a macroscopic design based on required surface area and estimated stresses. Then the thermal and pressure stresses were estimated using finite element analysis. In Phase II of this program, the filter manufacturing technique was developed, and the manufactured filters were tested. The technique developed involved press-bonding extruded tiles to form a filter, producing a monolithic filter after sintering. Filters manufactured using this technique were tested at Acurex and at the Westinghouse Science and Technology Center. The filters did not delaminate during testing and operated with high collection efficiency and good cleanability. Further development in areas of sintering and filter design is recommended.

  14. Analysis and Design of Time-Varying Filter Banks

    NASA Astrophysics Data System (ADS)

    Sodagar, Iraj

    Analysis-synthesis filter banks have been studied extensively and a wide range of theoretical problems have been subsequently addressed. However, almost all the research activity has been concentrated on time-invariant filter banks whose components are fixed and do not change in time. The objective of this thesis is to develop analysis and design techniques for time-varying FIR analysis-synthesis filter banks that achieve perfect reconstruction (PR). In such systems, the analysis and/or synthesis filters, the down-up sampling rates, or even the number of bands can change in time. The underlying idea is that by adapting the basis functions of the filter bank transform to the signal properties, one can represent the relevant information of the signal more efficiently. For analysis purposes, we derive the time-varying impulse response of the filter bank in terms of the analysis and synthesis filter coefficients. We are able to represent this impulse response in terms of the product of the analysis and synthesis matrix transforms. Our approach to the PR time-varying filter bank design is to switch the analysis-synthesis filter bank among a set of time-invariant filter banks. The analysis filter banks are switched instantaneously. To eliminate the distortion during switching, a new time-varying synthesis section is designed for each transition. Three design techniques are developed for the time-varying filter bank design. The first technique uses the least squares synthesis filters. This method improves the reconstruction quality significantly, but does not usually achieve perfect reconstruction. Using the second technique, one can design PR time-varying systems by redesigning the analysis filters. The drawback is that this method requires numerical optimizations. The third technique introduces a new structure for exactly reconstructing time-varying filter banks. This structure consists of the conventional filter bank followed by a time-varying post filter. The post
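
    The fixed (time-invariant) case that the thesis generalizes can be illustrated with a minimal two-channel perfect-reconstruction filter bank. The sketch below uses the orthogonal Haar pair, chosen only for brevity; it is not one of the thesis's time-varying designs.

```python
import numpy as np

def haar_analysis(x):
    """Two-channel Haar analysis: lowpass/highpass filtering plus downsampling by 2."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return lo, hi

def haar_synthesis(lo, hi):
    """Upsampling plus synthesis filtering; inverts the analysis exactly."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

x = np.array([1.0, 4.0, -2.0, 3.0, 0.5, 2.0, -1.0, 5.0])
lo, hi = haar_analysis(x)
assert np.allclose(haar_synthesis(lo, hi), x)   # perfect reconstruction
```

    Switching between two such banks mid-signal is exactly where the distortion discussed above arises, motivating the transition synthesis sections.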

  15. HEPA filter monitoring program

    NASA Astrophysics Data System (ADS)

    Kirchner, K. N.; Johnson, C. M.; Aiken, W. F.; Lucerna, J. J.; Barnett, R. L.; Jensen, R. T.

    1986-07-01

    The testing and replacement of HEPA filters, widely used in the nuclear industry to purify process air, are costly and labor-intensive. Current methods of testing filter performance, such as differential pressure measurement and scanning air monitoring, allow determination of overall filter performance but preclude detection of incipient filter failure such as small holes in the filters. Using current technology, a continual in-situ monitoring system was designed which provides three major improvements over current methods of filter testing and replacement. The improvements include: cost savings by reducing the number of intact filters which are currently being replaced unnecessarily; more accurate and quantitative measurement of filter performance; and reduced personnel exposure to a radioactive environment by automatically performing most testing operations.

  16. Bag filters for TPP

    SciTech Connect

    L.V. Chekalov; Yu.I. Gromov; V.V. Chekalov

    2007-05-15

    Cleaning of TPP flue gases with bag filters capable of pulsed regeneration is examined. A new filtering element with a three-dimensional filtering material formed from a needle-broached cloth, in which the filtration area is more than twice that of a conventional smooth bag, is proposed. The design of a new FRMI type of modular filter is also proposed. A standard series of FRMI filters with a filtration area ranging from 800 to 16,000 m² is designed for an output of more than 1 million m³/h of cleaned gas. The new bag filter permits dry collection of sulfur oxides from waste gases at TPP operating on high-sulfur coals. The design of the filter makes it possible to replace filter elements without taking the entire unit out of service.

  17. Novel Backup Filter Device for Candle Filters

    SciTech Connect

    Bishop, B.; Goldsmith, R.; Dunham, G.; Henderson, A.

    2002-09-18

    The currently preferred means of particulate removal from process or combustion gas generated by advanced coal-based power production processes is filtration with candle filters. However, candle filters have not shown the requisite reliability to be commercially viable for hot gas clean up for either integrated gasifier combined cycle (IGCC) or pressurized fluid bed combustion (PFBC) processes. Even a single candle failure can lead to unacceptable ash breakthrough, which can result in (a) damage to highly sensitive and expensive downstream equipment, (b) unacceptably low system on-stream factor, and (c) unplanned outages. The U.S. Department of Energy (DOE) has recognized the need to have fail-safe devices installed within or downstream from candle filters. In addition to CeraMem, DOE has contracted with Siemens-Westinghouse, the Energy & Environmental Research Center (EERC) at the University of North Dakota, and the Southern Research Institute (SRI) to develop novel fail-safe devices. Siemens-Westinghouse is evaluating honeycomb-based filter devices on the clean-side of the candle filter that can operate up to 870 C. The EERC is developing a highly porous ceramic disk with a sticky yet temperature-stable coating that will trap dust in the event of filter failure. SRI is developing the Full-Flow Mechanical Safeguard Device that provides a positive seal for the candle filter. Operation of the SRI device is triggered by the higher-than-normal gas flow from a broken candle. The CeraMem approach is similar to that of Siemens-Westinghouse and involves the development of honeycomb-based filters that operate on the clean-side of a candle filter. The overall objective of this project is to fabricate and test silicon carbide-based honeycomb failsafe filters for protection of downstream equipment in advanced coal conversion processes. The fail-safe filter, installed directly downstream of a candle filter, should have the capability for stopping essentially all particulate

  18. MST Filterability Tests

    SciTech Connect

    Poirier, M. R.; Burket, P. R.; Duignan, M. R.

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2 – 4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  19. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
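
    The suboptimal-versus-optimal comparison can be illustrated in the simplest stationary setting: a scalar model where the steady-state mean-squared error of a fixed-gain filter is available in closed form. The model parameters below are illustrative, not from the paper.

```python
# Scalar stationary model: x[k+1] = a*x[k] + w (var q),  y[k] = x[k] + v (var r)
a, q, r = 0.9, 1.0, 2.0   # illustrative parameters

def steady_state_mse(K):
    """Steady-state error variance of a fixed-gain filter
    x_hat <- a*x_hat + K*(y - a*x_hat), from the error recursion
    e+ = (1 - K)*(a*e + w) - K*v."""
    return ((1 - K)**2 * q + K**2 * r) / (1 - (1 - K)**2 * a**2)

# Optimal (Kalman) steady-state gain via Riccati fixed-point iteration
P = 1.0
for _ in range(500):
    P_pred = a * a * P + q
    K_opt = P_pred / (P_pred + r)
    P = (1 - K_opt) * P_pred

# No stable fixed gain beats the Kalman gain at steady state
for K_sub in (0.2, 0.5, 0.9):
    assert steady_state_mse(K_sub) >= steady_state_mse(K_opt) - 1e-9
assert abs(steady_state_mse(K_opt) - P) < 1e-6   # formula agrees with Riccati
```

    The bounds in the paper generalize this kind of comparison to matrix models where only ranges, not exact values, of the model elements are known.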

  20. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.
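
    As a minimal example of the filter structures covered in part two, the sketch below implements a direct-form (tapped-delay-line) FIR realization of the difference equation and checks it against convolution; the taps are an arbitrary smoothing example.

```python
import numpy as np

def fir_direct_form(b, x):
    """Direct-form FIR filter: y[n] = sum_k b[k] * x[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, bk in enumerate(b):
            if n - k >= 0:
                y[n] += bk * x[n - k]
    return y

b = np.array([0.25, 0.5, 0.25])                 # simple lowpass (smoothing) taps
x = np.array([1.0, 0.0, 0.0, 2.0, -1.0, 3.0])
y = fir_direct_form(b, x)
assert np.allclose(y, np.convolve(x, b)[:len(x)])  # matches convolution
```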

  1. Acoustic bandpass filters employing shaped resonators

    NASA Astrophysics Data System (ADS)

    Červenka, M.; Bednařík, M.

    2016-11-01

    This work deals with acoustic bandpass filters realized by shaped waveguide-elements inserted between two parts of an acoustic transmission line with generally different characteristic impedance. It is shown that the formation of a wide passband is connected with the eigenfrequency spectrum of the filter element, which acts as an acoustic resonator, and that the required filter shape substantially depends on whether the filter characteristic impedance is higher or lower than the characteristic impedance of the waveguide. It is further shown that this class of filters can be realized even without the need of different characteristic impedance. A heuristic technique is proposed to design filter shapes with required transmission properties; it is employed for optimization of low-frequency bandpass filters as well as for design of bandpass filters with a wide passband surrounded by wide stopbands, as is typical for phononic crystals; in this case, however, the arrangement is much simpler, consisting of only one simple-shaped homogeneous element.
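
    The link between the element's eigenfrequency spectrum and transmission can be sketched with a lossless plane-wave transfer-matrix model of a single uniform insert (the impedances, length, and sound speed below are illustrative, and the uniform element is a simplification of the shaped elements studied here): at the element's half-wave resonance, transmission is total regardless of the impedance mismatch.

```python
import numpy as np

def transmission(f, L, Z_f, Z0, c=343.0):
    """Power transmission through a uniform element of length L and
    characteristic impedance Z_f inserted in a line of impedance Z0,
    using the lossless plane-wave transfer matrix [[A, B], [C, D]]."""
    k = 2 * np.pi * f / c
    A = np.cos(k * L);            B = 1j * Z_f * np.sin(k * L)
    C = 1j * np.sin(k * L) / Z_f; D = np.cos(k * L)
    t = 2.0 / (A + B / Z0 + C * Z0 + D)
    return abs(t)**2

L = 0.1
f_half = 343.0 / (2 * L)    # k*L = pi: half-wave eigenfrequency of the element
assert abs(transmission(f_half, L, Z_f=2000.0, Z0=400.0) - 1.0) < 1e-9
# Off resonance, the impedance mismatch reduces transmission
assert transmission(0.5 * f_half, L, Z_f=2000.0, Z0=400.0) < 1.0
```

    Shaping the element reshapes this eigenfrequency spectrum, which is how the passbands described above are engineered.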

  2. Filter service system

    DOEpatents

    Sellers, Cheryl L.; Nordyke, Daniel S.; Crandell, Richard A.; Tomlins, Gregory; Fei, Dong; Panov, Alexander; Lane, William H.; Habeger, Craig F.

    2008-12-09

    According to an exemplary embodiment of the present disclosure, a system for removing matter from a filtering device includes a gas pressurization assembly. An element of the assembly is removably attachable to a first orifice of the filtering device. The system also includes a vacuum source fluidly connected to a second orifice of the filtering device.

  3. Practical Active Capacitor Filter

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2005-01-01

    A method and apparatus is described that filters an electrical signal. The filtering uses a capacitor multiplier circuit where the capacitor multiplier circuit uses at least one amplifier circuit and at least one capacitor. A filtered electrical signal results from a direct connection from an output of the at least one amplifier circuit.

  4. HEPA filter encapsulation

    DOEpatents

    Gates-Anderson, Dianne D.; Kidd, Scott D.; Bowers, John S.; Attebery, Ronald W.

    2003-01-01

    A low viscosity resin is delivered into a spent HEPA filter or other waste. The resin is introduced into the filter or other waste using a vacuum to assist in the mass transfer of the resin through the filter media or other waste.

  5. Stepping motor controller

    DOEpatents

    Bourret, S.C.; Swansen, J.E.

    1982-07-02

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  6. Stepping motor controller

    DOEpatents

    Bourret, Steven C.; Swansen, James E.

    1984-01-01

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.

  7. Step-Growth Polymerization.

    ERIC Educational Resources Information Center

    Stille, J. K.

    1981-01-01

    Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
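
    The requirement for high molecular weight mentioned above follows from the Carothers equation, X_n = 1/(1 - p), where p is the extent of reaction; a short numeric check:

```python
def carothers(p):
    """Number-average degree of polymerization X_n = 1/(1 - p)
    for extent of reaction p in a stoichiometric step-growth system."""
    return 1.0 / (1.0 - p)

# High conversion is essential: even 99% conversion gives only X_n = 100
assert abs(carothers(0.99) - 100.0) < 1e-6

def dispersity(p):
    """Dispersity of the most-probable (Flory) distribution: 1 + p."""
    return 1.0 + p

assert abs(dispersity(0.99) - 1.99) < 1e-12   # approaches 2 as p -> 1
```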

  8. [Are additional filters made of niobium superior to the copper filter in dental radiology?].

    PubMed

    Cordt, I; Engelke, W

    1990-01-01

    At a dental X-ray unit the effect of an additional filter made of niobium has been tested in the molar region of the mandible with respect to dose reduction. It has been compared against the effect of an additional filter made of copper. With regard to the same dose at the film, radiation dose at the surface of the patient proved to be slightly more reduced after application of the copper filter than after application of the niobium filter. Radiographs have been made by exposure of intraoral dental films (Kodak film Ultra Speed D) together with a hydroxyapatite step-wedge. Measurements of optical density resulted in the same values after application of the copper filter and the niobium filter, respectively. Reduced image contrast due to application of one of these additional filters proved to be helpful. In short, an additional copper filter placed in the X-ray beam shows identical or better results when compared against an additional filter made of niobium. PMID:2237349

  9. Bayesian filtering in electronic surveillance

    NASA Astrophysics Data System (ADS)

    Coraluppi, Stefano; Carthel, Craig

    2012-06-01

    Fusion of passive electronic support measures (ESM) with active radar data enables tracking and identification of platforms in air, ground, and maritime domains. An effective multi-sensor fusion architecture adopts hierarchical real-time multi-stage processing. This paper focuses on the recursive filtering challenges. The first challenge is to achieve effective platform identification based on noisy emitter type measurements; we show that while optimal processing is computationally infeasible, a good suboptimal solution is available via a sequential measurement processing approach. The second challenge is to process waveform feature measurements that enable disambiguation in multi-target scenarios where targets may be using the same emitters. We show that an approach that explicitly considers the Markov jump process outperforms the traditional Kalman filtering solution.
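
    The sequential measurement-processing idea can be sketched as a recursive Bayes update of the emitter-type posterior under a measurement confusion matrix; the matrix values and the three-type alphabet below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical confusion matrix: C[i, j] = P(measured type j | true type i)
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

def sequential_update(prior, measurements):
    """Process noisy emitter-type measurements one at a time (recursive Bayes)."""
    post = prior.copy()
    for z in measurements:
        post = post * C[:, z]   # likelihood of observing measured type z
        post /= post.sum()      # normalize to a probability vector
    return post

prior = np.full(3, 1.0 / 3.0)
post = sequential_update(prior, [0, 0, 1, 0])   # mostly "type 0" reports
assert post.argmax() == 0                       # identification converges to type 0
```

    Processing measurements one at a time in this way keeps the update cost linear in the number of measurements, which is the appeal of the suboptimal sequential approach over jointly processing the full measurement set.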

  10. The intractable cigarette ‘filter problem’

    PubMed Central

    2011-01-01

    Background When lung cancer fears emerged in the 1950s, cigarette companies initiated a shift in cigarette design from unfiltered to filtered cigarettes. Both the ineffectiveness of cigarette filters and the tobacco industry's misleading marketing of the benefits of filtered cigarettes have been well documented. However, during the 1950s and 1960s, American cigarette companies spent millions of dollars to solve what the industry identified as the ‘filter problem’. These extensive filter research and development efforts suggest a phase of genuine optimism among cigarette designers that cigarette filters could be engineered to mitigate the health hazards of smoking. Objective This paper explores the early history of cigarette filter research and development in order to elucidate why and when seemingly sincere filter engineering efforts devolved into manipulations in cigarette design to sustain cigarette marketing and mitigate consumers' concerns about the health consequences of smoking. Methods Relevant word and phrase searches were conducted in the Legacy Tobacco Documents Library online database, Google Patents, and media and medical databases including ProQuest, JSTOR, Medline and PubMed. Results 13 tobacco industry documents were identified that track prominent developments involved in what the industry referred to as the ‘filter problem’. These reveal a period of intense focus on the ‘filter problem’ that persisted from the mid-1950s to the mid-1960s, featuring collaborations between cigarette producers and large American chemical and textile companies to develop effective filters. In addition, the documents reveal how cigarette filter researchers' growing scientific knowledge of smoke chemistry led to increasing recognition that filters were unlikely to offer significant health protection. One of the primary concerns of cigarette producers was to design cigarette filters that could be economically incorporated into the massive scale of cigarette

  11. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
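
    A concrete member of the FIR/FIR linear-phase PR family discussed here is the reversible 5/3 (LeGall) pair, sketched below in lifting form with integer arithmetic; it is a standard example of an integer-approximated LP PR filter, not necessarily one of the article's optimized designs.

```python
def fwd53(x):
    """Forward reversible 5/3 (LeGall) wavelet via lifting, with symmetric
    boundary extension. x: even-length list of ints; returns (s, d) bands."""
    N = len(x)
    ev = lambda i: x[i] if i < N else x[2 * N - 2 - i]   # mirror at right edge
    n = N // 2
    # predict step: d[i] = x[2i+1] - floor((x[2i] + x[2i+2]) / 2)
    d = [x[2*i + 1] - (ev(2*i) + ev(2*i + 2)) // 2 for i in range(n)]
    # update step:  s[i] = x[2i] + floor((d[i-1] + d[i] + 2) / 4)
    s = [x[2*i] + (d[i - 1 if i > 0 else 0] + d[i] + 2) // 4 for i in range(n)]
    return s, d

def inv53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = len(s)
    x = [0] * (2 * n)
    for i in range(n):
        x[2*i] = s[i] - (d[i - 1 if i > 0 else 0] + d[i] + 2) // 4
    for i in range(n):
        r = x[2*i + 2] if 2*i + 2 < 2 * n else x[2*n - 2]
        x[2*i + 1] = d[i] + (x[2*i] + r) // 2
    return x

x = [3, 7, 1, 8, 2, 9, 4, 6]
s, d = fwd53(x)
assert inv53(s, d) == x      # integer-arithmetic perfect reconstruction
```

    Because each lifting step is individually invertible, perfect reconstruction survives the integer rounding, which is exactly the property needed for applications limited to integer arithmetic.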

  12. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology owing to the uniqueness and invariance of fingerprints, making it one of the most convenient and reliable techniques for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and the fingerprint preprocessing algorithm plays an important part in such systems. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The result shows that the Gabor filter is effective in fingerprint image enhancement.
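
    A minimal sketch of the Gabor kernel underlying this kind of enhancement follows; the orientation, frequency, and width values are illustrative, whereas a full pipeline would estimate them per block from the local ridge orientation and ridge frequency.

```python
import numpy as np

def gabor_kernel(ksize, theta, freq, sigma):
    """Real Gabor kernel: Gaussian envelope times a cosine wave oriented
    along direction theta with spatial frequency freq (cycles/pixel)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()       # zero mean: no response to uniform regions

k = gabor_kernel(ksize=15, theta=np.pi / 4, freq=0.1, sigma=3.0)
assert k.shape == (15, 15)
assert abs(k.sum()) < 1e-9    # DC-free
```

    Convolving each block of the fingerprint image with a kernel tuned to that block's ridge orientation and frequency reinforces the ridge structure while suppressing noise.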

  13. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  14. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
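
    The two nested optimization loops can be sketched as follows. Every name and number here is a hypothetical stand-in: the mock cost function replaces ENFAD's Monte Carlo simulations, and the candidate error states are invented for illustration.

```python
import itertools

def monte_carlo_cost(error_states, params):
    """Mock filter-accuracy metric (lower is better); stands in for a
    Monte Carlo simulation of the navigation filter."""
    return abs(len(error_states) - 3) + (params["q"] - 0.5)**2 + (params["r"] - 2.0)**2

candidate_states = ["gyro_bias", "accel_bias", "clock_drift", "scale_factor"]

best = None
# Loop 1: vary the selection of error states
for k in range(1, len(candidate_states) + 1):
    for subset in itertools.combinations(candidate_states, k):
        # Loop 2: for this fixed selection, vary the filter parameters
        for q in (0.1, 0.5, 1.0):
            for r in (1.0, 2.0, 4.0):
                cost = monte_carlo_cost(subset, {"q": q, "r": r})
                if best is None or cost < best[0]:
                    best = (cost, subset, q, r)

assert len(best[1]) == 3 and best[2] == 0.5 and best[3] == 2.0
```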

  15. Regenerative particulate filter development

    NASA Technical Reports Server (NTRS)

    Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

    1972-01-01

    Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

  16. Ceramic fiber filter technology

    SciTech Connect

    Holmes, B.L.; Janney, M.A.

    1996-06-01

    Fibrous filters have been used for centuries to protect individuals from dust, disease, smoke, and other gases or particulates. In the 1970s and 1980s ceramic filters were developed for filtration of hot exhaust gases from diesel engines. Tubular, or candle, filters have been made to remove particles from gases in pressurized fluidized-bed combustion and gasification-combined-cycle power plants. Very efficient filtration is necessary in power plants to protect the turbine blades. The limited lifespan of ceramic candle filters has been a major obstacle in their development. The present work is focused on forming fibrous ceramic filters using a papermaking technique. These filters are highly porous and therefore very lightweight. The papermaking process consists of filtering a slurry of ceramic fibers through a steel screen to form paper. Papermaking and the selection of materials will be discussed, as well as preliminary results describing the geometry of papers and relative strengths.

  17. Use of astronomy filters in fluorescence microscopy.

    PubMed

    Piper, Jörg

    2012-02-01

    Monochrome astronomy filters are well suited for use as excitation or suppression filters in fluorescence microscopy. Because of their particular optical design, such filters can be combined with standard halogen light sources for excitation in many fluorescent probes. In this "low energy excitation," photobleaching (fading) or other irritations of native specimens are avoided. Photomicrographs can be taken from living motile fluorescent specimens also with a flash so that fluorescence images can be created free from indistinctness caused by movement. Special filter cubes or dichroic mirrors are not needed for our method. By use of suitable astronomy filters, fluorescence microscopy can be carried out with standard laboratory microscopes equipped with condensers for bright-field (BF) and dark-field (DF) illumination in transmitted light. In BF excitation, the background brightness can be modulated in tiny steps up to dark or black. Moreover, standard industry microscopes fitted with a vertical illuminator for examinations of opaque probes in DF or BF illumination based on incident light (wafer inspections, for instance) can also be used for excitation in epi-illumination when adequate astronomy filters are inserted as excitatory and suppression filters in the illuminating and imaging light path. In all variants, transmission bands can be modulated by transmission shift.

  18. Truncation correction for oblique filtering lines

    SciTech Connect

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic

    2008-12-15

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.
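
    The Gaussian part of the hybrid extrapolation can be sketched, in simplified form, as a tail whose logarithm is quadratic, matched to the value and log-slope at the truncation edge. The roll-off rate below is a free illustrative parameter, and this one-sided 1-D sketch is a stand-in for, not a reproduction of, the authors' fitting procedure.

```python
import numpy as np

def gaussian_extrapolate(p, n_ext, rolloff=0.05):
    """Extend a right-truncated detector row with a Gaussian-type tail:
    the log of the extension is quadratic, matching the value and
    log-slope at the truncation edge. `rolloff` sets the decay rate."""
    p1, p2 = max(p[-2], 1e-12), max(p[-1], 1e-12)
    s = np.log(p1 / p2)                  # log-slope at the edge
    n = np.arange(1, n_ext + 1)
    tail = p2 * np.exp(-(s * n + rolloff * n**2))
    return np.concatenate([p, tail])

row = np.array([0.0, 2.0, 5.0, 6.0, 5.5, 4.0])   # truncated projection row
ext = gaussian_extrapolate(row, n_ext=20)
assert len(ext) == 26
assert abs(ext[5] - row[-1]) < 1e-12           # original samples preserved
assert np.all(np.diff(ext[5:]) <= 1e-12)       # tail decays monotonically
assert ext[-1] < 0.05 * row[-1]                # tail vanishes smoothly
```

    In the basic approach above, such extrapolated data simply pad the truncated filtering lines before the standard filtering step, leaving the reconstruction algorithm itself unchanged.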

  19. Truncation correction for oblique filtering lines.

    PubMed

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Günter; Dennerlein, Frank; Noo, Frédéric

    2008-12-01

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.

  20. VSP wave separation by adaptive masking filters

    NASA Astrophysics Data System (ADS)

    Rao, Ying; Wang, Yanghua

    2016-06-01

    In vertical seismic profiling (VSP) data processing, the first step might be to separate the down-going wavefield from the up-going wavefield. When using a masking filter for VSP wave separation, there are difficulties associated with the two termination ends of the up-going waves. A critical challenge is how the masking filter can restore the energy tails; this edge effect associated with the terminations is unique to VSP data. An effective strategy is to implement masking filters in both the τ-p and the f-k domains sequentially. Meanwhile, a median filter produces a clean but smooth version of the down-going wavefield, which is used as a reference data set for designing the masking filter. The masking filter is implemented adaptively and iteratively, gradually restoring the energy tails cut out by any surgical mute. While the τ-p and the f-k domain masking filters target different depth ranges of the VSP, this combination strategy can accurately perform wave separation on field VSP data.
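
    The median-filter step for building the down-going reference can be sketched on a toy panel: once the traces are flattened on the down-going arrival, a median across depth keeps the depth-consistent down-going event and rejects the depth-varying up-going energy. The numbers below are illustrative.

```python
import numpy as np

# Toy VSP panel: rows = receiver depths, columns = time samples.
# After flattening on the down-going arrival, the down-going wave is the
# same trace at every depth; up-going energy shifts with depth.
downgoing = np.tile(np.array([0., 1., 3., 1., 0., 0., 0., 0.]), (7, 1))
panel = downgoing.copy()
for i in range(7):
    panel[i, (5 + i) % 8] += 4.0      # depth-varying up-going arrival

# A median across the depth axis rejects the moving up-going arrivals,
# leaving a clean down-going reference trace.
reference = np.median(panel, axis=0)
assert np.allclose(reference, downgoing[0])
```

    This clean-but-smooth reference is then what the adaptive masking filter is designed against in the τ-p and f-k domains.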

  1. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and by interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation is an effective technique for detecting bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is carried out after wavelet filtering. Instead of resorting to a reference signal, as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.
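    The core demodulation step can be sketched numerically: band-pass the signal around a resonance with a Gaussian (Morlet-like) frequency window, take the envelope via the analytic signal, and look for the fault frequency in the envelope spectrum. This is a minimal sketch of the demodulation idea only; the paper's adaptive selection of center frequency and bandwidth from the production functions is not reproduced, and all parameter values below are illustrative.

```python
import numpy as np

def morlet_bandpass_envelope(x, fs, fc, bw):
    """Band-pass x around centre frequency fc (bandwidth bw) with a
    Gaussian (Morlet-like) frequency window, then return the envelope
    of the analytic signal."""
    n = len(x)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    window = np.exp(-0.5 * ((f - fc) / (bw / 2.0)) ** 2)  # Gaussian band-pass
    filtered = np.fft.irfft(np.fft.rfft(x) * window, n)
    # Analytic signal via FFT (double the positive frequencies).
    F = np.fft.fft(filtered)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(F * h))

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# 5 Hz amplitude modulation (the "fault frequency") on a 100 Hz carrier.
x = (1.0 + 0.8 * np.cos(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 100 * t)
env = morlet_bandpass_envelope(x, fs, fc=100.0, bw=40.0)
# The envelope spectrum should peak at the 5 Hz modulation frequency.
spec = np.abs(np.fft.rfft(env - env.mean()))
peak_hz = np.fft.rfftfreq(len(env), 1.0 / fs)[np.argmax(spec)]
```

The envelope spectrum isolates the modulation (fault) frequency from the carrier, which is the signature narrow-band demodulation exploits.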

  2. A superior edge preserving filter with a systematic analysis

    NASA Technical Reports Server (NTRS)

    Holladay, Kenneth W.; Rickman, Doug

    1991-01-01

    A new, adaptive, edge preserving filter for use in image processing is presented. It shows superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels are accumulated. Rather than simply comparing the visual results of processing with this operator to other filters, approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the ability of several filters to discriminate against noise and retain edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
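    The aggregation rule described above translates directly into code: grow a cluster from the seed pixel, at each step adding the 4-connected neighbour whose value is closest to the current cluster mean, and stop at K pixels. The sketch below is a direct reading of the abstract, not the authors' implementation; connectivity and tie-breaking choices are assumptions.

```python
import numpy as np

def contiguous_k_average(img, r, c, k):
    """Contiguous K-average at pixel (r, c): grow a cluster from the
    seed by repeatedly adding the 4-connected neighbour whose value is
    closest to the current cluster mean, until k pixels are in the
    cluster; return the cluster mean."""
    rows, cols = img.shape
    cluster = {(r, c)}
    total = float(img[r, c])
    while len(cluster) < k:
        mean = total / len(cluster)
        # Candidate pixels contiguous to the current cluster.
        frontier = set()
        for (i, j) in cluster:
            for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in cluster:
                    frontier.add((ni, nj))
        best = min(frontier, key=lambda p: abs(img[p] - mean))
        cluster.add(best)
        total += float(img[best])
    return total / k

# A step edge: the filter should average within the flat region only,
# which is why it preserves the edge instead of blurring across it.
img = np.array([[0., 0., 10.], [0., 0., 10.], [0., 0., 10.]])
val = contiguous_k_average(img, 1, 1, k=4)
```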

  3. Stepped frequency ground penetrating radar

    DOEpatents

    Vadnais, Kenneth G.; Bashforth, Michael B.; Lewallen, Tricia S.; Nammath, Sharyn R.

    1994-01-01

    A stepped frequency ground penetrating radar system is described comprising an RF signal generating section capable of producing stepped frequency signals in spaced and equal increments of time and frequency over a preselected bandwidth which serves as a common RF signal source for both a transmit portion and a receive portion of the system. In the transmit portion of the system the signal is processed into in-phase and quadrature signals which are then amplified and then transmitted toward a target. The reflected signals from the target are then received by a receive antenna and mixed with a reference signal from the common RF signal source in a mixer whose output is then fed through a low pass filter. The DC output, after amplification and demodulation, is digitized and converted into a frequency domain signal by a Fast Fourier Transform. A plot of the frequency domain signals from all of the stepped frequencies broadcast toward and received from the target yields information concerning the range (distance) and cross section (size) of the target.
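    The final processing step of the patent, converting the stepped-frequency samples into range information via a Fast Fourier Transform, can be illustrated with a toy simulation. A point target at range R imposes a phase ramp exp(-j·4πfR/c) across the stepped frequencies, and an FFT of those samples yields a range profile peaking at R. All parameter values here are illustrative, not from the patent.

```python
import numpy as np

c = 3e8
n_steps = 100
df = 1.5e6                                 # frequency step (Hz)
freqs = 1e9 + np.arange(n_steps) * df      # stepped carrier frequencies
R = 30.0                                   # true target range (m)

# Mixer I/Q output per frequency step: a phase ramp set by the
# two-way travel time to the target.
echo = np.exp(-1j * 4 * np.pi * freqs * R / c)

# FFT across the frequency steps gives the range profile.
profile = np.abs(np.fft.ifft(echo))
dr = c / (2 * n_steps * df)                # range bin size = c / (2 * bandwidth)
est_range = np.argmax(profile) * dr
```

The peak bin recovers the target range; the range resolution is set by the total swept bandwidth, which is why the patent steps the frequency over a preselected bandwidth.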

  4. Compact planar microwave blocking filters

    NASA Technical Reports Server (NTRS)

    U-Yen, Kongpop (Inventor); Wollack, Edward J. (Inventor)

    2012-01-01

    A compact planar microwave blocking filter includes a dielectric substrate and a plurality of filter unit elements disposed on the substrate. The filter unit elements are interconnected in a symmetrical series cascade with filter unit elements being organized in the series based on physical size. In the filter, a first filter unit element of the plurality of filter unit elements includes a low impedance open-ended line configured to reduce the shunt capacitance of the filter.

  5. Precise dispersion equations of absorbing filter glasses

    NASA Astrophysics Data System (ADS)

    Reichel, S.; Biertümpfel, Ralf

    2014-05-01

    The refractive indices of optically transparent glasses are measured at a few wavelengths only. In order to calculate the refractive index at any wavelength, a so-called Sellmeier series is used as an approximation of the wavelength-dependent refractive index. Such a Sellmeier representation assumes an absorption-free (i.e., lossless) material. In optically transparent glasses this assumption is valid, since the absorption of such glasses is very low. However, optical filter glasses often have a rather high absorbance in certain regions of the spectrum. An exact description of the wavelength-dependent refractive index is essential for an optimized design in sophisticated optical applications. Digital cameras use an IR cut filter to ensure good color rendition and image quality. In order to reduce ghost images caused by reflections and to be nearly angle independent, absorbing filter glass is used, e.g. blue glass BG60 from SCHOTT. As digital cameras improve their performance, the IR cut filter needs to be improved as well, and thus the refractive index (dispersion) of the glasses used must be known accurately. But absorbing filter glass is not lossless, as required for a Sellmeier representation. In addition, the refractive index is very difficult to measure in the absorption region of the filter glass. We have devoted considerable effort to measuring the refractive index of absorbing filter glass at specific wavelengths, even in the absorption region. It is described how to perform such a measurement. In addition, we evaluate the use of a Sellmeier representation for filter glasses. It turns out that in most cases a Sellmeier representation can be used even for absorbing filter glasses. Finally, Sellmeier coefficients for the approximation of the refractive index are given for different filter glasses.
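    For reference, the three-term Sellmeier series has the form n²(λ) = 1 + Σᵢ Bᵢλ²/(λ² − Cᵢ). The sketch below evaluates it with the published coefficients for SCHOTT N-BK7, a standard transparent glass (not one of the absorbing filter glasses of the paper, whose coefficients are given in the paper itself).

```python
import numpy as np

def sellmeier_n(lam_um, B, C):
    """Refractive index from a three-term Sellmeier series:
    n^2 = 1 + sum_i B_i * lam^2 / (lam^2 - C_i), lam in micrometres."""
    lam2 = lam_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)

# Published Sellmeier coefficients for SCHOTT N-BK7.
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)
n_d = sellmeier_n(0.5876, B, C)   # index at the helium d-line (587.6 nm)
```

For N-BK7 this reproduces the catalog value n_d ≈ 1.5168; for absorbing filter glasses the same functional form applies, which is the paper's central finding.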

  6. Step by Step: Avoiding Spiritual Bypass in 12-Step Work

    ERIC Educational Resources Information Center

    Cashwell, Craig S.; Clarke, Philip B.; Graves, Elizabeth G.

    2009-01-01

    With spirituality as a cornerstone, 12-step groups serve a vital role in the recovery community. It is important for counselors to be mindful, however, of the potential for clients to be in spiritual bypass, which likely will undermine the recovery process.

  7. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and in tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of pixel-level classification of tree species from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy for six species classes is about 75%.

  8. Spin-filtering at COSY

    NASA Astrophysics Data System (ADS)

    Weidemann, Christian; PAX Collaboration

    2011-05-01

    The spin-filtering experiments at COSY and at the AD at CERN, within the framework of the Polarized Antiproton EXperiments (PAX), are proposed to determine the spin-dependent cross sections in p̄p scattering by observing the buildup of polarization of an initially unpolarized stored antiproton beam after multiple passages through an internal polarized gas target. In order to commission the experimental setup for the AD and to understand the relevant machine parameters, spin filtering will first be done with protons at COSY. A first major step toward this goal was achieved with the installation of the required mini-β section in summer 2009 and its commissioning in January 2010. The target chamber, together with the atomic beam source and the so-called Breit-Rabi polarimeter, was installed and commissioned in summer 2010. In addition, an openable storage cell has been used, providing a target thickness of 5·10¹³ atoms/cm². We report on the status of spin-filtering experiments at COSY and the outcome of a recent beam time, including studies of beam lifetime limitations such as intra-beam scattering and the electron-cooling performance, as well as machine acceptance studies.

  9. 2-Step IMAT and 2-Step IMRT in three dimensions

    SciTech Connect

    Bratengeier, Klaus

    2005-12-15

    In two dimensions, 2-Step Intensity Modulated Arc Therapy (2-Step IMAT) and 2-Step Intensity Modulated Radiation Therapy (2-Step IMRT) were shown to be powerful methods for the optimization of plans with organs at risk (OAR) (partially) surrounded by a target volume (PTV). In three dimensions, some additional boundary conditions have to be considered to establish 2-Step IMAT as an optimization method. A further aim was to create rules for ad hoc adaptations of an IMRT plan to a daily changing PTV-OAR constellation. As a test model, a cylindrically symmetric PTV-OAR combination was used. The centrally placed OAR can take on arbitrary diameters, with different gap widths toward the PTV. Along the rotation axis the OAR diameter can vary; the OAR can even vanish at some axis positions, leaving a circular PTV. The width and weight of the second segment were the free parameters to optimize. The objective function f to minimize was the root of the integral of the squared difference between the dose in the target volume and a reference dose. For this problem, two local minima exist. Therefore, as a secondary criterion, the magnitudes of hot and cold spots were taken into account. As a result, the solution with the larger segment width was recommended. From plane to plane, for varying radii of the PTV and OAR and for different gaps between them, different sets of weights and widths were optimal. Because only one weight for one segment is to be used for all planes (respectively leaf pairs), a strategy for complex three-dimensional (3-D) cases was established to choose a global weight. In a second step, a suitable segment width was chosen, minimizing f for this global weight. The concept was demonstrated in a planning study for a cylindrically symmetric example with a large range of OAR radii along the patient axis. The method is discussed for some classes of tumor/organ-at-risk combinations. Non-cylindrically symmetric cases were treated exemplarily. 
The product of width and weight of
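    The two-parameter optimization described above can be illustrated with a toy one-dimensional model: a fixed first segment plus a second segment of adjustable width and weight irradiate a target with a central OAR gap, and f is the RMS deviation of the dose from the reference dose, minimized by grid search. The geometry and dose model are purely illustrative, not the paper's.

```python
import numpy as np

# Toy 1-D version of the 2-Step objective function f.
x = np.linspace(0.0, 1.0, 201)
target = (x < 0.4) | (x > 0.6)             # PTV with a central OAR gap
ref_dose = 1.0

def f_objective(width, weight):
    seg1 = 0.6 * np.ones_like(x)           # broad first segment
    seg2 = weight * (((x < width) | (x > 1.0 - width)).astype(float))
    dose = seg1 + seg2
    diff = (dose - ref_dose)[target]
    return np.sqrt(np.mean(diff ** 2))     # RMS deviation on the target

# Grid search over the two free parameters (width, weight).
widths = np.linspace(0.05, 0.4, 36)
weights = np.linspace(0.0, 1.0, 101)
scores = [(f_objective(w, a), w, a) for w in widths for a in weights]
best_f, best_w, best_a = min(scores)
```

In this toy geometry the optimum places the second segment flush with the OAR gap (width 0.4) with the weight that exactly tops the dose up to the reference value.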

  10. Optimization of Pilot Point Locations: an efficient and geostatistical perspective

    NASA Astrophysics Data System (ADS)

    Mehne, J.; Nowak, W.

    2012-04-01

    The pilot point method is a widespread method for calibrating ensembles of heterogeneous aquifer models on available field data such as hydraulic heads. The pilot points are virtual measurements of conductivity, introduced as localized carriers of information in the inverse procedure. For each heterogeneous aquifer realization, the pilot point values are calibrated until all calibration data are honored. Adequate placement and numbers of pilot points are crucial both for an accurate representation of heterogeneity and for keeping the computational costs of calibration at an acceptable level. Current placement methods for pilot points either rely solely on the expertise of the modeler or involve computationally costly sensitivity analyses. None of the existing placement methods directly addresses the geostatistical character of the placement and calibration problem. This study presents a new method for the optimal selection of pilot point locations, combining ideas from Ensemble Kalman Filtering and geostatistical optimal design with straightforward optimization. In a first step, we emulate the pilot point method with a modified Ensemble Kalman Filter for parameter estimation at drastically reduced computational costs. This avoids the costly evaluation of sensitivity coefficients often used for the optimal placement of pilot points. Second, we define task-driven objective functions for the optimal placement of pilot points, based on ideas from geostatistical optimal design of experiments. These objective functions can be evaluated quickly, without carrying out the actual calibration process, requiring nothing but the ensemble covariances available from step one. By formal optimization, we can find pilot point placement schemes that are optimal in representing the data for the task at hand with minimal numbers of pilot points. In small synthetic test applications, we demonstrate the promising computational performance and the geostatistically logical choice of
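    The ensemble-based emulation in step one rests on the standard Ensemble Kalman Filter parameter update, which needs only ensemble covariances and no sensitivity runs. The sketch below updates an ensemble of uncertain parameters from head-like observations, with a linear operator standing in for the groundwater simulator; all dimensions, names, and the forward model are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ens, n_par, n_obs = 200, 5, 2

true_m = np.array([1.0, -0.5, 0.3, 0.8, -0.2])
H = rng.normal(size=(n_obs, n_par))         # stand-in forward operator
obs_err = 0.05
d_obs = H @ true_m + rng.normal(0, obs_err, n_obs)

M = rng.normal(0, 1, size=(n_par, n_ens))   # prior parameter ensemble
D = H @ M                                   # predicted observations

# Ensemble (cross-)covariances, the only quantities the update needs.
Mc = M - M.mean(axis=1, keepdims=True)
Dc = D - D.mean(axis=1, keepdims=True)
C_md = Mc @ Dc.T / (n_ens - 1)
C_dd = Dc @ Dc.T / (n_ens - 1) + obs_err**2 * np.eye(n_obs)

K = C_md @ np.linalg.inv(C_dd)              # Kalman gain
perturbed = d_obs[:, None] + rng.normal(0, obs_err, (n_obs, n_ens))
M_post = M + K @ (perturbed - D)            # updated parameter ensemble
```

The updated ensemble honors the observations far better than the prior, at the cost of a single ensemble of forward runs, which is the efficiency argument made above.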

  11. Optical filtering for star trackers

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.

    1973-01-01

    The optimization of optical filtering was investigated for tracking faint stars, down to the fifth magnitude. The effective wavelength and bandwidth for tracking pre-selected guide stars are discussed along with the results of an all-electronic tracker with a star tracking photomultiplier, which was tested with a simulated second magnitude star. Tables which give the sum of zodiacal light and galactic background light over the entire sky for intervals of five degrees in declination, and twenty minutes in right ascension are included.

  12. Genetic algorithm used in interference filter's design

    NASA Astrophysics Data System (ADS)

    Li, Jinsong; Fang, Ying; Gao, Xiumin

    2009-11-01

    An approach for the design of interference filters using a genetic algorithm (hereafter referred to as GA) is presented. We use the GA to design a band-stop filter and a narrow-band filter. The interference filters designed here can achieve the optimal reflectivity or transmission rate. The evaluation function used in our genetic algorithm differs from those used previously. Using the characteristic matrix to calculate the photonic band gap of a one-dimensional photonic crystal is similar to the electronic structure of doped. If the evaluation is sensitive to deviations of the photonic crystal structure, the genetic algorithm approach is effective. A summary and a discussion of some open issues are given at the end of this paper.
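    A schematic GA of the kind used for interference-filter design evolves a population of candidate layer-thickness vectors by selection, crossover, and mutation. In the sketch below the fitness is simply the (negative) squared distance to a known target stack rather than a characteristic-matrix reflectivity calculation, so the whole setup is an illustrative assumption rather than the paper's evaluation function.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.12, 0.25, 0.12, 0.25, 0.12])   # target thicknesses (um)

def fitness(pop):
    # Placeholder merit; a real filter GA would score the reflectivity
    # spectrum computed from the layer-stack characteristic matrix.
    return -np.sum((pop - target) ** 2, axis=1)

pop = rng.uniform(0.0, 0.5, size=(60, target.size))
for gen in range(200):
    fit = fitness(pop)
    order = np.argsort(fit)[::-1]
    parents = pop[order[:30]]                        # truncation selection
    a = parents[rng.integers(0, 30, 60)]
    b = parents[rng.integers(0, 30, 60)]
    mask = rng.random((60, target.size)) < 0.5       # uniform crossover
    pop = np.where(mask, a, b)
    # Sparse Gaussian mutation.
    pop += rng.normal(0, 0.01, pop.shape) * (rng.random(pop.shape) < 0.2)

best = pop[np.argmax(fitness(pop))]
```

Swapping the placeholder fitness for a characteristic-matrix reflectivity merit gives the design loop the abstract describes.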

  13. Permanent versus Retrievable Inferior Vena Cava Filters: Rethinking the "One-Filter-for-All" Approach to Mechanical Thromboembolic Prophylaxis.

    PubMed

    Ghatan, Christine E; Ryu, Robert K

    2016-06-01

    Inferior vena cava (IVC) filtration for thromboembolic protection is not without risks, and there are important differences among commercially available IVC filters. While retrievable filters are approved for permanent implantation, they may be associated with higher device-related complications in the long term when compared with permanent filters. Prospective patient selection in determining which patients might be better served by permanent or retrievable filter devices is central to resource optimization, in addition to improved clinical follow-up and a concerted effort to retrieve filters when no longer needed. This article highlights the differences between permanent and retrievable devices, describes the interplay between these differences and the clinical indications for IVC filtration, advises against a "one-filter-for-all" approach to mechanical thromboembolic prophylaxis, and discusses strategies for optimizing personalized device selection.

  14. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  15. Generic Kalman Filter Software

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E., II; Crues, Edwin Z.

    2005-01-01

    The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains code for a generic Kalman-filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions, and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data. 
The generic Kalman-filter function uses the aforementioned data structures and five implementation- specific subfunctions, which have been developed by the user on
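    The state- and covariance-propagation and update functions the GKF generically implements follow the standard linear Kalman filter equations, sketched here in Python (the GKF itself is ANSI C). The constant-velocity tracking example is illustrative, not part of the GKF distribution.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state estimate and covariance one step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Measurement update: fold observation z into (x, P)."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity tracking with noisy position measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2) * 10.0

rng = np.random.default_rng(7)
truth = np.array([0.0, 1.0])                 # position 0, velocity 1
for _ in range(50):
    truth = F @ truth
    z = H @ truth + rng.normal(0, 0.5, 1)
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, z, H, R)
```

After a few dozen updates the filter recovers the unmeasured velocity from position data alone; in the GKF architecture the model matrices would come from the user-supplied application-specific subfunctions.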

  16. The J-PAS filter system

    NASA Astrophysics Data System (ADS)

    Marin-Franch, Antonio; Taylor, Keith; Cenarro, Javier; Cristobal-Hornillos, David; Moles, Mariano

    2015-08-01

    J-PAS (Javalambre-PAU Astrophysical Survey) is a Spanish-Brazilian collaboration to conduct a narrow-band photometric survey of 8500 square degrees of the northern sky using an innovative system of 59 filters: 56 relatively narrow-band (FWHM = 14.5 nm) filters continuously populating the spectrum from 350 to 1000 nm in 10 nm steps, plus 3 broad-band filters. This filter system will be able to produce photometric redshifts with a precision of 0.003(1 + z) for Luminous Red Galaxies, allowing J-PAS to measure the radial scale of the Baryonic Acoustic Oscillations. The J-PAS survey will be carried out using JPCam, a 14-CCD mosaic camera using the new e2v 9k-by-9k, 10 μm pixel CCDs, mounted on the JST/T250, a dedicated 2.55 m wide-field telescope at the Observatorio Astrofísico de Javalambre (OAJ) near Teruel, Spain. The filters will operate in a fast (f/3.6) converging beam. The requirements of average transmission greater than 85% in the passband, <10⁻⁵ blocking from 250 to 1050 nm, steep bandpass edges, and high image quality impose significant challenges for the production of the J-PAS filters, which have demanded the development of new design solutions. This talk presents the J-PAS filter system and describes the most challenging requirements and the adopted design strategies. Measurements and tests of the first manufactured filters are also presented.

  17. INEEL HEPA Filter Leach System: A Mixed Waste Solution

    SciTech Connect

    Argyle, Mark Don; Demmer, Ricky Lynn; Archibald, Kip Ernest; Brewer, Ken Neal; Pierson, Kenneth Alan; Shackelford, Kimberlee Rene; Kline, Kelli Suzanne

    1999-03-01

    Calciner operations and the fuel dissolution process at the Idaho National Engineering and Environmental Laboratory have generated many mixed waste high-efficiency particulate air (HEPA) filters. The HEPA Filter Leach System located at the Idaho Nuclear Technology and Engineering Center lowers radiation contamination levels and reduces cadmium, chromium, and mercury concentrations on spent HEPA filter media to below disposal limits set by the Resource Conservation and Recovery Act (RCRA). The treated HEPA filters are disposed as low-level radioactive waste. The technical basis for the existing system was established and optimized in initial studies using simulants in 1992. The treatment concept was validated for EPA approval in 1994 by leaching six New Waste Calcining Facility spent HEPA filters. Post-leach filter media sampling results for all six filters showed that both hazardous and radiological constituent levels were reduced so the filters could be disposed of as low-level radioactive waste. Since the validation tests the HEPA Filter Leach System has processed 78 filters in 1997 and 1998. The Idaho National Engineering and Environmental Laboratory HEPA Filter Leach System is the only mixed waste HEPA treatment system in the DOE complex. This process is of interest to many of the other DOE facilities and commercial companies that have generated mixed waste HEPA filters but currently do not have a treatment option available.

  18. INEEL HEPA Filter Leach System: A Mixed Waste Solution

    SciTech Connect

    K. Archibald; K. Brewer; K. Kline; K. Pierson; K. Shackelford; M. Argyle; R. Demmer

    1999-02-01

    Calciner operations and the fuel dissolution process at the Idaho National Engineering and Environmental Laboratory have generated many mixed waste high-efficiency particulate air (HEPA) filters. The HEPA Filter Leach System located at the Idaho Nuclear Technology and Engineering Center lowers radiation contamination levels and reduces cadmium, chromium, and mercury concentrations on spent HEPA filter media to below disposal limits set by the Resource Conservation and Recovery Act (RCRA). The treated HEPA filters are disposed as low-level radioactive waste. The technical basis for the existing system was established and optimized in initial studies using simulants in 1992. The treatment concept was validated for EPA approval in 1994 by leaching six New Waste Calcining Facility spent HEPA filters. Post-leach filter media sampling results for all six filters showed that both hazardous and radiological constituent levels were reduced so the filters could be disposed of as low-level radioactive waste. Since the validation tests the HEPA Filter Leach System has processed 78 filters in 1997 and 1998. The Idaho National Engineering and Environmental Laboratory HEPA Filter Leach System is the only mixed waste HEPA treatment system in the DOE complex. This process is of interest to many of the other DOE facilities and commercial companies that have generated mixed waste HEPA filters but currently do not have a treatment option available.

  19. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters in SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning, and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization), and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning, and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM, for a cutoff < 0.4, it fails to generate a cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study of the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
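    The cutoff/order trade-off of the Butterworth filter can be demonstrated in one dimension: the filter gain is 1/√(1 + (f/f_c)^(2n)), so lowering the Nyquist-relative cutoff f_c smooths more aggressively. The sketch below applies the gain in the frequency domain to a noisy profile; a 2-D SPECT implementation would apply the same gain to radial spatial frequency. The data are synthetic and illustrative.

```python
import numpy as np

def butterworth_lowpass(row, cutoff, order):
    """Apply a 1-D Butterworth low-pass filter in the frequency domain.
    cutoff is in cycles/sample (0..0.5, i.e. Nyquist-relative)."""
    n = len(row)
    f = np.abs(np.fft.fftfreq(n))                       # cycles/sample
    gain = 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft(np.fft.fft(row) * gain))

# A noisy step profile: lower cutoffs smooth more.
rng = np.random.default_rng(3)
row = np.concatenate([np.zeros(64), np.ones(64)]) + rng.normal(0, 0.2, 128)
smooth_hi = butterworth_lowpass(row, cutoff=0.4, order=4)
smooth_lo = butterworth_lowpass(row, cutoff=0.05, order=4)
```

The cutoff=0.05 result is visibly smoother, mirroring the study's observation that low Butterworth cutoffs can oversmooth to the point of destroying the cardiac orientation estimate.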

  20. Contactor/filter improvements

    DOEpatents

    Stelman, D.

    1988-06-30

    A contactor/filter arrangement for removing particulate contaminants from a gaseous stream is described. The filter includes a housing having a substantially vertically oriented granular material retention member with upstream and downstream faces, a substantially vertically oriented microporous gas filter element, wherein the retention member and the filter element are spaced apart to provide a zone for the passage of granular material therethrough. A gaseous stream containing particulate contaminants passes through the gas inlet means as well as through the upstream face of the granular material retention member, passing through the retention member, the body of granular material, the microporous gas filter element, exiting out of the gas outlet means. A cover screen isolates the filter element from contact with the moving granular bed. In one embodiment, the granular material is comprised of porous alumina impregnated with CuO, with the cover screen cleaned by the action of the moving granular material as well as by backflow pressure pulses. 6 figs.

  1. Concentric Split Flow Filter

    NASA Technical Reports Server (NTRS)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to effectively fill the volume that was previously considered wasted surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through a second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 as compared to the conventional design.

  2. Highly tunable microwave and millimeter wave filtering using photonic technology

    NASA Astrophysics Data System (ADS)

    Seregelyi, Joe; Lu, Ping; Paquet, Stéphane; Celo, Dritan; Mihailov, Stephen J.

    2015-05-01

    The design for a photonic microwave filter tunable in both bandwidth and operating frequency is proposed and experimentally demonstrated. The circuit is based on a single sideband modulator used in conjunction with two or more transmission fiber Bragg gratings (FBGs) cascaded in series. It is demonstrated that the optical filtering characteristics of the FBGs are instrumental in defining the shape of the microwave filter, and numerical modeling was used to optimize these characteristics. A multiphase-shift transmission FBG design is used to increase the dynamic range of the filter, control the filter ripple, and maximize the slope of the filter skirts. Initial measurements confirmed the design theory and demonstrated a working microwave filter with a bandwidth tunable from approximately 2 to 3.5 GHz and an 18 GHz operating frequency tuning range. Further work is required to refine the FBG manufacturing process and reduce the impact of fabrication errors.

  3. Multicomponent seismic noise attenuation with multivariate order statistic filters

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yun; Wang, Xiaokai; Xun, Chao

    2016-10-01

    The vector relationship between multicomponent seismic data is highly important for multicomponent processing and interpretation, but this vector relationship can be damaged when each component is processed individually. To overcome the drawback of standard component-by-component filtering, multivariate order statistic filters are introduced and extended to attenuate the noise of multicomponent seismic data by treating such a dataset as a vector wavefield rather than a set of scalar fields. According to the characteristics of seismic signals, we implement this type of multivariate filtering along local events. First, the optimal local events are recognized according to the similarity between the vector signals which are windowed from neighbouring seismic traces with a sliding time window along each trial trajectory. An efficient strategy is used to reduce the computational cost of similarity measurement for vector signals. Next, one vector sample from each of the neighbouring traces is extracted along the optimal local event as the input data for a multivariate filter. Different multivariate filters are optimal for different noise. The multichannel modified trimmed mean (MTM) filter, as one of the multivariate order statistic filters, is applied to synthetic and field multicomponent seismic data to test its performance for attenuating white Gaussian noise. The results indicate that the multichannel MTM filter can attenuate noise while preserving the relative amplitude information of multicomponent seismic data more effectively than a single-channel filter.
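
    The multichannel MTM idea can be illustrated with a short sketch: for one window position, the vector samples gathered across neighbouring traces are trimmed by their distance from a component-wise median and the survivors are averaged. This is a generic MTM sketch under assumed conventions (Euclidean distance, a user-chosen trim radius `q`), not the authors' exact implementation.

```python
import numpy as np

def multichannel_mtm(window, q):
    """Modified trimmed mean of vector samples.

    window : (n, c) array-like - n vector samples (one per trace),
             each with c components (e.g. the three geophone components).
    q      : trim radius - samples farther than q (Euclidean norm) from
             the component-wise median are excluded from the mean.
    """
    window = np.asarray(window, dtype=float)
    med = np.median(window, axis=0)               # reference vector
    dist = np.linalg.norm(window - med, axis=1)   # distance of each sample
    keep = dist <= q                              # trim the outliers
    return window[keep].mean(axis=0)              # average the survivors
```

With the outlier `[10, 10]` trimmed away, three coherent samples like `[1, 1]`, `[1.1, 0.9]`, `[0.9, 1.1]` average back to approximately `[1, 1]`, preserving the vector relationship between components.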

  4. Filter vapor trap

    DOEpatents

    Guon, Jerold

    1976-04-13

    A sintered filter trap is adapted for insertion in a gas stream of sodium vapor to condense and deposit sodium thereon. The filter is heated and operated above the melting temperature of sodium, resulting in a more efficient means to remove sodium particulates from the effluent inert gas emanating from the surface of a liquid sodium pool. Preferably the filter leaves are precoated with a natrophobic coating such as tetracosane.

  5. Hybrid Filter Membrane

    NASA Technical Reports Server (NTRS)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with a discrete particle size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter consists of a thin design intended to facilitate filter regeneration by localized air pulsing. The two main features of this invention are the micro-engineered straight-pore membrane and the thin nanofiber coating. The micro-engineered straight-pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles.
Additionally, the thin nanofiber coating is designed to promote capture of

  6. Practical alarm filtering

    SciTech Connect

    Bray, M.; Corsberg, D.

    1994-02-01

    An expert system-based alarm filtering method is described which prioritizes and reduces the number of alarms facing an operator. This patented alarm filtering methodology was originally developed and implemented in a pressurized water reactor, and subsequently in a chemical processing facility. Both applications were in LISP and both were successful. In the chemical processing facility, for instance, alarm filtering reduced the quantity of alarm messages by 90%. 6 figs.

  7. Design-Filter Selection for H2 Control of Microgravity Isolation Systems: A Single-Degree-of-Freedom Case Study

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Whorton, Mark S.

    2000-01-01

    Many microgravity space-science experiments require active vibration isolation, to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choices of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations that were noted during the design process are presented, along with explanations based on the existent theory for such problems.

  8. Nanofiber Filters Eliminate Contaminants

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With support from Phase I and II SBIR funding from Johnson Space Center, Argonide Corporation of Sanford, Florida tested and developed its proprietary nanofiber water filter media. Capable of removing more than 99.99 percent of dangerous particles like bacteria, viruses, and parasites, the media was incorporated into the company's commercial NanoCeram water filter, an inductee into the Space Foundation's Space Technology Hall of Fame. In addition to its drinking water filters, Argonide now produces large-scale nanofiber filters used as part of the reverse osmosis process for industrial water purification.

  9. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  10. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.

  11. Frequency weighting filter design for automotive ride comfort evaluation

    NASA Astrophysics Data System (ADS)

    Du, Feng

    2016-07-01

    Few studies give guidance on designing weighting filters from the frequency weighting factors, and the additional evaluation method of automotive ride comfort is underused in some countries. Based on the regularities of the weighting factors, a design method is proposed and vertical and horizontal weighting filters are developed. The whole frequency range is repeatedly divided into two parts, each with its own regularity. For each division, a parallel filter consisting of a low-pass and a high-pass filter with the same cutoff frequency and quality factor is used to realize the section factors, and cascading these parallel filters realizes the entire set of factors. The resulting filters are of high order, but low-order filters are preferred in some applications; the bilinear transformation method and the least P-norm optimal infinite impulse response (IIR) filter design method are therefore employed to develop low-order filters that approximate the weightings in the standard. In addition, a linear-phase finite impulse response (FIR) filter is designed with the window method to keep the signal from distorting and to obtain the staircase weighting. For the same test case, the traditional method produces a weighted root mean square (r.m.s.) acceleration of 0.3307 m·s⁻², while the filtering method gives 0.3119 m·s⁻²; the fourth-order filter approximating the vertical weighting yields 0.3139 m·s⁻². Crest factors of the acceleration signal weighted by the weighting filter and by the fourth-order filter are 3.0027 and 3.0111, respectively. This paper thus proposes several methods for designing frequency weighting filters for automotive ride comfort evaluation, and the developed weighting filters are shown to be effective.
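
    The parallel building block described above (a second-order low-pass and high-pass sharing one cutoff frequency and quality factor, summed with section gains) can be sketched as follows, assuming SciPy is available for the bilinear transformation; the cutoff, quality factor, and section gains below are illustrative placeholders, not the weighting factors from the standard.

```python
import numpy as np
from scipy import signal

def parallel_weight_section(fc, q, g_low, g_high, fs):
    """One parallel section of a weighting filter: a second-order
    low-pass and high-pass with shared cutoff fc (Hz) and quality
    factor q, summed with gains g_low and g_high, then discretized
    with the bilinear transform."""
    w0 = 2.0 * np.pi * fc
    den = [1.0, w0 / q, w0 ** 2]           # shared denominator
    # g_high * HP(s) + g_low * LP(s) over the common denominator:
    num = [g_high, 0.0, g_low * w0 ** 2]
    return signal.bilinear(num, den, fs=fs)  # digital (b, a)
```

By construction the section's gain tends to `g_low` at DC and to `g_high` at high frequency, which is how cascaded sections can piece together the staircase of weighting factors.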

  12. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, since the unconstrained Kalman filter is already theoretically optimal, incorporating inequality constraints poses some risk to estimation accuracy. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
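
    The tuning idea can be sketched for a scalar state as below. The box constraint via clipping and the exponential confidence weighting are illustrative assumptions standing in for the paper's constraint tuning; only the overall behaviour (trust the unconstrained estimate when residuals agree with their predicted statistics, otherwise pull toward the constraints) follows the text.

```python
import numpy as np

def blended_estimate(x_unc, x_min, x_max, residual, s_var):
    """Blend the unconstrained (theoretically optimal) Kalman estimate
    with a heuristically constrained one.

    x_unc          : unconstrained state estimate
    x_min, x_max   : heuristic state bounds
    residual       : measurement residual (innovation)
    s_var          : predicted innovation variance
    """
    x_con = np.clip(x_unc, x_min, x_max)   # enforce the box constraint
    z = residual ** 2 / s_var              # normalized residual size
    w = np.exp(-0.5 * z)                   # confidence in x_unc (illustrative)
    return w * x_unc + (1.0 - w) * x_con
```

With a zero residual the unconstrained estimate is returned unchanged, even outside the bounds; as the residual grows relative to its predicted variance, the estimate slides toward the clipped value.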

  13. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
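
    The RA-to-RAW change really is small enough to live in a single line of a leapfrog loop. A minimal sketch for a scalar ODE dx/dt = f(x) is given below; alpha = 1 recovers the classical RA filter, and alpha ≈ 0.53 is the value suggested by Williams (2011).

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = f(x) with the RAW filter.

    The filter displacement d is added to the middle time level with
    weight alpha and subtracted from the newest level with weight
    (1 - alpha); alpha = 1 reduces to the Robert-Asselin filter."""
    x_prev = x0
    x = x0 + dt * f(x0)                        # start-up: forward Euler
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x)      # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x + x_next)
        x_prev = x + alpha * d                 # filtered middle level
        x = x_next + (alpha - 1.0) * d         # filtered newest level
    return x
```

Because the displacement is shared between two time levels, the filter's mean effect on the physical mode cancels, which is the source of the reduced non-physical damping claimed above.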

  14. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  15. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  16. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  17. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  18. 40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...

  19. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  20. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers.

    PubMed

    Buyel, Johannes F; Gruchow, Hannah M; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m(-2) when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m(-2) with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins.

  1. Filter holder and gasket assembly for candle or tube filters

    DOEpatents

    Lippert, T.E.; Alvin, M.A.; Bruck, G.J.; Smeltzer, E.E.

    1999-03-02

    A filter holder and gasket assembly are disclosed for holding a candle filter element within a hot gas cleanup system pressure vessel. The filter holder and gasket assembly includes a filter housing, an annular spacer ring securely attached within the filter housing, a gasket sock, a top gasket, a middle gasket and a cast nut. 9 figs.

  2. Filter holder and gasket assembly for candle or tube filters

    DOEpatents

    Lippert, Thomas Edwin; Alvin, Mary Anne; Bruck, Gerald Joseph; Smeltzer, Eugene E.

    1999-03-02

    A filter holder and gasket assembly for holding a candle filter element within a hot gas cleanup system pressure vessel. The filter holder and gasket assembly includes a filter housing, an annular spacer ring securely attached within the filter housing, a gasket sock, a top gasket, a middle gasket and a cast nut.

  3. Aircraft Recirculation Filter for Air-Quality and Incident Assessment

    PubMed Central

    Eckels, Steven J.; Jones, Byron; Mann, Garrett; Mohan, Krishnan R.; Weisel, Clifford P.

    2015-01-01

    The current research examines the possibility of using recirculation filters from aircraft to document the nature of air-quality incidents on aircraft. These filters are highly effective at collecting solid and liquid particulates. Identification of engine oil contaminants arriving through the bleed air system on the filter was chosen as the initial focus. A two-step study was undertaken. First, a compressor/bleed air simulator was developed to simulate an engine oil leak, and samples were analyzed with gas chromatograph-mass spectrometry. These samples provided a concrete link between tricresyl phosphates and a homologous series of synthetic pentaerythritol esters from oil and contaminants found on the sample paper. The second step was to test 184 used aircraft filters with the same gas chromatograph-mass spectrometry system; of that total, 107 were standard filters, and 77 were nonstandard. Four of the standard filters had both markers for oil, with the homologous series synthetic pentaerythritol esters being the less common marker. It was also found that 90% of the filters had some detectable level of tricresyl phosphates. Of the 77 nonstandard filters, 30 had both markers for oil, a significantly higher percent than the standard filters. PMID:25641977

  4. Cyclic steps on ice

    NASA Astrophysics Data System (ADS)

    Yokokawa, M.; Izumi, N.; Naito, K.; Parker, G.; Yamada, T.; Greve, R.

    2016-05-01

    Boundary waves often form at the interface between ice and fluid flowing adjacent to it, such as ripples under river ice covers, and steps on the bed of supraglacial meltwater channels. They may also be formed by wind, such as the megadunes on the Antarctic ice sheet. Spiral troughs on the polar ice caps of Mars have been interpreted to be cyclic steps formed by katabatic wind blowing over ice. Cyclic steps are relatives of upstream-migrating antidunes. Cyclic step formation on ice is not only a mechanical but also a thermodynamic process. There have been very few studies on the formation of either cyclic steps or upstream-migrating antidunes on ice. In this study, we performed flume experiments to reproduce cyclic steps on ice by flowing water, and found that trains of steps form when the Froude number is larger than unity. The features of those steps allow them to be identified as ice-bed analogs of cyclic steps in alluvial and bedrock rivers. We performed a linear stability analysis and obtained a physical explanation of the formation of upstream-migrating antidunes, i.e., precursors of cyclic steps. We compared the results of experiments with the predictions of the analysis and found the observed steps fall in the range where the analysis predicts interfacial instability. We also found that short antidune-like undulations formed as a precursor to the appearance of well-defined steps. This fact suggests that such antidune-like undulations correspond to the instability predicted by the analysis and are precursors of cyclic steps.

  5. Filtering reprecipitated slurry

    SciTech Connect

    Morrissey, M.F.

    1992-01-01

    As part of the Late Washing Demonstration at Savannah River Technology Center, Interim Waste Technology has filtered reprecipitated and non-reprecipitated slurry with the Experimental Laboratory Filter (ELF) at TNX. Reprecipitated slurry generates higher permeate fluxes than non-reprecipitated slurry. Washing reprecipitated slurry may require a defoamer because reprecipitation encourages foaming.

  6. Filtering reprecipitated slurry

    SciTech Connect

    Morrissey, M.F.

    1992-12-31

    As part of the Late Washing Demonstration at Savannah River Technology Center, Interim Waste Technology has filtered reprecipitated and non-reprecipitated slurry with the Experimental Laboratory Filter (ELF) at TNX. Reprecipitated slurry generates higher permeate fluxes than non-reprecipitated slurry. Washing reprecipitated slurry may require a defoamer because reprecipitation encourages foaming.

  7. Active rejector filter

    SciTech Connect

    Kuchinskii, A.G.; Pirogov, S.G.; Savchenko, V.M.; Yakushev, A.K.

    1985-01-01

    This paper describes an active rejector filter for suppressing noise signals in the frequency range 50-100 Hz and for extracting a VLF information signal. The filter has the following characteristics: a high input impedance, a resonant frequency of 75 Hz, a Q of 1.25, and an attenuation factor of 53 dB at the resonant frequency.

  8. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffers from a limitation of the corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is however a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems such as model errors, aliasing-effect reduction, and experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up are then discussed.

  9. Weighted guided image filtering.

    PubMed

    Li, Zhengguo; Zheng, Jinghong; Zhu, Zijian; Yao, Wei; Wu, Shiqian

    2015-01-01

    It is known that local filtering-based edge-preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into an existing guided image filter (GIF) to address the problem. The WGIF inherits advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, the same as the GIF, and 2) the WGIF can avoid halo artifacts like the existing global smoothing filters. The WGIF is applied to single image detail enhancement, single image haze removal, and fusion of differently exposed images. Experimental results show that the resultant algorithms produce images with better visual quality while halo artifacts are reduced or avoided in the final images, with negligible increase in running times. PMID:25415986
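
    For context, the underlying GIF that the WGIF extends can be sketched as a plain gray-scale guided filter in the style of He et al.; the WGIF itself additionally scales the regularization parameter `eps` with an edge-aware weight, which is not reproduced in this sketch.

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r (naive O(N*k^2) loop; real
    implementations use O(N) running-sum box filters)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Gray-scale guided image filter: locally fit p ~ a*I + b, then
    average the coefficients over overlapping windows."""
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)        # eps regularizes flat regions;
    b = mean_p - a * mean_I           # the WGIF makes eps edge-aware
    return box(a, r) * I + box(b, r)
```

Each output pixel is a locally linear function of the guide `I`, which is what lets the filter smooth while following the guide's edges.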

  10. Sintered composite filter

    DOEpatents

    Bergman, W.

    1986-05-02

    A particulate filter medium formed of a sintered composite of 0.5 micron diameter quartz fibers and 2 micron diameter stainless steel fibers is described. Preferred composition is about 40 vol.% quartz and about 60 vol.% stainless steel fibers. The media is sintered at about 1100 °C to bond the stainless steel fibers into a cage network which holds the quartz fibers. High filter efficiency and low flow resistance are provided by the smaller quartz fibers. High strength is provided by the stainless steel fibers. The resulting media has a high efficiency and low pressure drop similar to the standard HEPA media, with tensile strength at least four times greater, and a maximum operating temperature of about 550 °C. The invention also includes methods to form the composite media and a HEPA filter utilizing the composite media. The filter media can be used to filter particles in both liquids and gases.

  11. Sub-micron filter

    DOEpatents

    Tepper, Frederick; Kaledin, Leonid

    2009-10-13

    Aluminum hydroxide fibers approximately 2 nanometers in diameter and with surface areas ranging from 200 to 650 m²/g have been found to be highly electropositive. When dispersed in water they are able to attach to and retain electronegative particles. When combined into a composite filter with other fibers or particles they can filter bacteria and nano size particulates such as viruses and colloidal particles at high flux through the filter. Such filters can be used for purification and sterilization of water, biological, medical and pharmaceutical fluids, and as a collector/concentrator for detection and assay of microbes and viruses. The alumina fibers are also capable of filtering sub-micron inorganic and metallic particles to produce ultra pure water. The fibers are suitable as a substrate for growth of cells. Macromolecules such as proteins may be separated from each other based on their electronegative charges.

  12. Method of producing monolithic ceramic cross-flow filter

    DOEpatents

    Larsen, D.A.; Bacchi, D.P.; Connors, T.F.; Collins, E.L. III

    1998-02-10

    Ceramic filters of various configurations have been used to filter particulates from hot gases exhausted from coal-fired systems. Prior ceramic cross-flow filters have been favored over other types, but those previously available have been assemblies of parts fastened together and consequently often subject to distortion or delamination on exposure to hot gas in normal use. The present new monolithic, seamless, cross-flow ceramic filters, being of one-piece construction, are not prone to such failure. Further, these new products are made by a novel casting process which involves the key step of demolding the ceramic filter green body so that none of the fragile inner walls of the filter is cracked or broken. 2 figs.

  13. Method of producing monolithic ceramic cross-flow filter

    DOEpatents

    Larsen, David A.; Bacchi, David P.; Connors, Timothy F.; Collins, III, Edwin L.

    1998-01-01

    Ceramic filters of various configurations have been used to filter particulates from hot gases exhausted from coal-fired systems. Prior ceramic cross-flow filters have been favored over other types, but those previously known have been assemblies of parts fastened together and consequently often subject to distortion or delamination on exposure to hot gas in normal use. The present new monolithic, seamless, cross-flow ceramic filters, being of one-piece construction, are not prone to such failure. Further, these new products are made by a novel casting process which involves the key step of demolding the ceramic filter green body so that none of the fragile inner walls of the filter is cracked or broken.

  14. Modified dislocation filter method: toward growth of GaAs on Si by metal organic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Hu, Haiyang; Wang, Jun; He, Yunrui; Liu, Kai; Liu, Yuanyuan; Wang, Qi; Duan, Xiaofeng; Huang, Yongqing; Ren, Xiaomin

    2016-06-01

    In this paper, metamorphic growth of GaAs on (001)-oriented Si substrate, with a combined method of applying a dislocation filter layer (DFL) and a three-step growth process, was conducted by metal organic chemical vapor deposition. The effectiveness of multiple InAs/GaAs self-organized quantum dot (QD) layers acting as a dislocation filter was investigated in detail, and the growth conditions of the InAs QDs were optimized by theoretical calculations and experiments. A 2-μm-thick buffer layer was grown on the Si substrate with the three-step growth method according to the optimized growth conditions. Then, a 114-nm-thick DFL and a 1-μm-thick GaAs epilayer were grown. The results we obtained demonstrated that the DFL can effectively bend the dislocation direction via the strain field around the QDs. The optimal structure of the DFL is composed of three layers of InAs QDs with a growth time of 55 s. The method could reduce the etch pit density from about 3 × 10^6 cm^-2 to 9 × 10^5 cm^-2 and improve the crystalline quality of the GaAs epilayers on Si.

  15. Fouling of ceramic filters and thin-film composite reverse osmosis membranes by inorganic and bacteriological constituents

    SciTech Connect

    Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.

    1991-01-01

    Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters, ~0.2 μm nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al³⁺) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.

  16. Fouling of ceramic filters and thin-film composite reverse osmosis membranes by inorganic and bacteriological constituents

    SciTech Connect

    Siler, J.L.; Poirier, M.R.; McCabe, D.J.; Hazen, T.C.

    1991-12-31

    Two significant problems have been identified during the first three years of operating the Savannah River Site Effluent Treatment Facility. These problems encompass two of the facility's major processing areas: the microfiltration and reverse osmosis steps. The microfilters (crossflow ceramic filters, ~0.2 μm nominal pore size) have been prone to pluggage problems. The presence of bacteria and bacteria byproducts in the microfilter feed, along with small quantities of colloidal iron, silica, and aluminum, results in a filter foulant that rapidly deteriorates filter performance and is difficult to remove by chemical cleaning. Processing rates through the filters have dropped from the design flow rate of 300 gpm after cleaning to 60 gpm within minutes. The combination of bacteria (from internal sources) and low concentrations of inorganic species resulted in substantial reductions in the reverse osmosis system performance. The salt rejection has been found to decrease from 99+% to 97%, along with a 50% loss in throughput, within a few hours of cleaning. Experimental work has led to implementation of several changes to plant operation and to planned upgrades of existing equipment. It has been shown that biological control in the influent is necessary to achieve design flowrates. Experiments have also shown that the filter performance can be optimized by the use of efficient filter backpulsing and the addition of aluminum nitrate (15 to 30 mg/L Al³⁺) to the filter feed. The aluminum nitrate assists by controlling adsorption of colloidal inorganic precipitates and biological contaminants. In addition, improved cleaning procedures have been identified for the reverse osmosis units. This paper provides a summary of the plant problems and the experimental work that has been completed to understand and correct these problems.

  17. Distributed Fusion Receding Horizon Filtering in Linear Stochastic Systems

    NASA Astrophysics Data System (ADS)

    Song, IlYoung; Kim, DuYong; Kim, YongHoon; Lee, SukJae; Shin, Vladimir

    2009-12-01

    This paper presents a distributed receding horizon filtering algorithm for multisensor continuous-time linear stochastic systems. Distributed fusion with a weighted sum structure is applied to local receding horizon Kalman filters having different horizon lengths. The fusion estimate of the state of a dynamic system represents the optimal linear fusion by weighting matrices under the minimum mean square error criterion. The key contribution of this paper lies in the derivation of the differential equations for determining the error cross-covariances between the local receding horizon Kalman filters. The subsequent application of the proposed distributed filter to a linear dynamic system within a multisensor environment demonstrates its effectiveness.
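
    The minimum mean square error fusion with a weighted sum structure described above can be sketched in a few lines. The snippet below fuses two unbiased local estimates using weights built from their error covariances; for simplicity it assumes the local errors are uncorrelated, whereas the paper's key contribution is precisely the equations for the cross-covariances between the local filters. The function name `fuse` is illustrative.

```python
import numpy as np

# MMSE weighted-sum fusion of two unbiased estimates x1, x2 with error
# covariances P1, P2, assuming zero cross-covariance between the local errors
# (a simplification; the paper derives the cross-covariance terms).
def fuse(x1, P1, x2, P2):
    S = np.linalg.inv(P1 + P2)
    W1 = P2 @ S                           # weight on the first local estimate
    W2 = P1 @ S                           # weight on the second local estimate
    x = W1 @ x1 + W2 @ x2                 # fused state estimate
    P = W1 @ P1 @ W1.T + W2 @ P2 @ W2.T   # fused error covariance
    return x, P

x, P = fuse(np.array([1.0, 0.0]), np.eye(2) * 2.0,
            np.array([1.2, 0.1]), np.eye(2) * 1.0)
```

    The fused covariance is never worse than the better of the two local covariances, which is the point of the weighting.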

  18. Molecular circuits for dynamic noise filtering.

    PubMed

    Zechner, Christoph; Seelig, Georg; Rullan, Marc; Khammash, Mustafa

    2016-04-26

    The invention of the Kalman filter is a crowning achievement of filtering theory, one that has revolutionized technology in countless ways. By dealing effectively with noise, the Kalman filter has enabled various applications in positioning, navigation, control, and telecommunications. In the emerging field of synthetic biology, noise and context dependency are among the key challenges facing the successful implementation of reliable, complex, and scalable synthetic circuits. Although substantial further advancement in the field may very well rely on effectively addressing these issues, a principled protocol to deal with noise, as provided by the Kalman filter, remains completely missing. Here we develop an optimal filtering theory that is suitable for noisy biochemical networks. We show how the resulting filters can be implemented at the molecular level and provide various simulations related to estimation, system identification, and noise cancellation problems. We demonstrate our approach in vitro using DNA strand displacement cascades as well as in vivo using flow cytometry measurements of a light-inducible circuit in Escherichia coli. PMID:27078094
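
    For reference, the discrete-time Kalman filter that the abstract takes as its benchmark reduces, for a scalar random-walk state observed in noise, to a few lines. This is the textbook filter, not the paper's molecular implementation; the noise variances `q` and `r` below are arbitrary illustrative values.

```python
import numpy as np

# Textbook scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k
# observed as z_k = x_k + v_k, with w_k ~ N(0, q) and v_k ~ N(0, r).
def kalman_1d(z, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    x, p = x0, p0
    est = []
    for zk in z:
        p = p + q                    # predict: state uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with the innovation
        p = (1 - k) * p              # posterior uncertainty shrinks
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(4)
z = 1.0 + rng.normal(0.0, 0.2, 200)  # noisy measurements of a constant level
est = kalman_1d(z)
```

    The estimate settles near the true level with far less variance than the raw measurements, which is exactly the noise-rejection behavior the paper seeks to realize biochemically.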

  19. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Kelkar, S. S.; Lee, F. C.

    1983-01-01

    A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.

  20. Design and fabrication of ultra-steep notch filters.

    PubMed

    Zhang, Jinlong; Tikhonravov, Alexander V; Trubetskov, Michael K; Liu, Yongli; Cheng, Xinbin; Wang, Zhanshan

    2013-09-01

    We present a design and production approach for an ultra-steep notch filter. The notch filter, which does not contain thin layers, is optimized using a constrained optimization technique and is therefore well suited for accurate monitoring with the electron beam deposition technique. Single-layer SiO(2) and Ta(2)O(5) films were deposited and carefully characterized in order to accurately determine tooling factors and the wavelength dependencies of the refractive indices. We produced the ultra-steep notch filter with an indirect monochromatic monitoring strategy and demonstrated excellent correspondence to the theoretical spectral performance.

  1. Golgi-Cox Staining Step by Step

    PubMed Central

    Zaqout, Sami; Kaindl, Angela M.

    2016-01-01

    Golgi staining remains a key method to study neuronal morphology in vivo. Since most protocols delineating modifications of the original staining method lack details on critical steps, establishing this method in a laboratory can be time-consuming and frustrating. Here, we describe the Golgi-Cox staining in such detail that should turn the staining into an easily feasible method for all scientists working in the neuroscience field. PMID:27065817

  2. Detection of Steps in Single Molecule Data

    PubMed Central

    Aggarwal, Tanuj; Materassi, Donatello; Davison, Robert; Hays, Thomas; Salapaka, Murti

    2013-01-01

    Over the past few decades, single molecule investigations employing optical tweezers, AFM and TIRF microscopy have revealed that molecular behaviors are typically characterized by discrete steps or events that follow changes in protein conformation. These events, which manifest as steps or jumps, are short-lived transitions between otherwise more stable molecular states. A major limiting factor in determining the size and timing of the steps is the noise introduced by the measurement system. To address this impediment to the analysis of single molecule behaviors, step detection algorithms incorporate large records of data and provide objective analysis. However, existing algorithms are mostly based on heuristics that are not reliable and lack objectivity. Most of these step detection methods require the user to supply parameters that inform the search for steps. They work well only when the signal to noise ratio (SNR) is high and the stepping speed is low. In this report, we have developed a novel step detection method that performs an objective analysis on the data without input parameters, based only on the noise statistics. The noise levels and characteristics can be estimated from the data, providing reliable results for much smaller SNR and higher stepping speeds. An iterative learning process drives the optimization of step-size distributions for data that has a unimodal step-size distribution, and produces extremely low false positive outcomes and high accuracy in finding true steps. Our novel methodology also uniquely incorporates compensation for the smoothing effects of probe dynamics. A mechanical measurement probe typically takes a finite time to respond to step changes, and when steps occur faster than the probe response time, the sharp step transitions are smoothed out and can obscure the step events. To address probe dynamics we accept a model for the dynamic behavior of the probe and invert it to reveal the steps. No other existing method addresses
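
    As a concrete baseline for the class of methods criticized above (a generic threshold sketch, not the authors' parameter-free algorithm), the snippet below flags a step wherever the means of two adjacent windows differ by more than a multiple of a noise level estimated from the data itself:

```python
import numpy as np

# Generic threshold-based step detector (an illustration only): flag a step
# where the means of two adjacent length-w windows differ by more than k
# times the expected noise level of the mean-difference statistic.
def detect_steps(y, w=10, k=4.0):
    # Robust noise estimate from first differences (steps barely move the median).
    sigma = np.median(np.abs(np.diff(y))) / 0.6745
    stat = np.array([np.mean(y[i:i + w]) - np.mean(y[i - w:i])
                     for i in range(w, len(y) - w)])
    thresh = k * sigma * np.sqrt(2.0 / w)   # scaled std of a two-window mean difference
    return [i + w for i in np.flatnonzero(np.abs(stat) > thresh)]

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.full(50, 5.0)]) + rng.normal(0.0, 0.3, 100)
hits = detect_steps(y)                      # indices near the true step at 50
```

    Note that `w` and `k` are exactly the kind of user-supplied parameters the paper's method eliminates; the paper additionally inverts a probe-dynamics model, which this sketch omits.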

  3. PHD filtering with localised target number variance

    NASA Astrophysics Data System (ADS)

    Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel

    2013-05-01

    Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on the target presence becomes a critical component of a larger process (e.g., for sensor management purposes), the local target number is insufficient unless some confidence on the estimation of the number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimation of the localised variance in the target number following each update step; we then illustrate the advantage of the PHD filter + variance on simulated data from a multiple-target scenario.

  4. Development and Behavior of Metallic Filter Element and Numerical Simulation of Transport Phenomena during Filter Regeneration Process

    SciTech Connect

    Kuang, C.; Zhang, J.; Wang, F.; Chen, J.

    2002-09-19

    Ceramic filters have been shown to have good thermal resistance and chemical corrosion resistance, but they are brittle, lack toughness, and are liable to rupture under large temperature swings. Metallic filters, with their high strength, toughness, and good heat conduction, have shown good thermal shock resistance; 310S and FeAl intermetallic filter elements have additionally exhibited good chemical corrosion resistance in oxidizing and sulfidizing atmospheres (Sawada 1999; Sunil et al. 1999). The behavior of metallic filter elements at high temperature was investigated and the filtration efficiency of the filter units for hot gas from a coal gasifier unit was tested. Pulse-jet cleaning of filter elements is a key component in the operation of the filtration unit. The pulse-jet is introduced into the filter element cavities from the clean side, and the dust cakes on the outer surfaces of the filter elements are detached and fall into the filter vessel. Sequential on-line cleaning of filter element groups yields filter operation with no shutdown for filter regeneration. Development of advanced technologies in the design and operation of pulse cleaning is one of the important tasks for increasing system reliability, improving filter life, and increasing filtering performance. The regeneration of the filter element in gas filtration at high temperature plays a very important role in the operation of the process. Based on experimental observation and field operation, a numerical model is set up to numerically simulate the momentum and heat transport phenomena in the regeneration process, which is essential for understanding the process, optimizing process parameters, and improving the design of the venturi nozzle structure and the configuration of the apparatus.

  5. Ceramic fiber reinforced filter

    DOEpatents

    Stinton, David P.; McLaughlin, Jerry C.; Lowden, Richard A.

    1991-01-01

    A filter for removing particulate matter from high temperature flowing fluids, and in particular gases, that is reinforced with ceramic fibers. The filter has a ceramic base fiber material in the form of a fabric, felt, paper, or the like, with the refractory fibers thereof coated with a thin layer of a protective and bonding refractory applied by chemical vapor deposition techniques. This coating causes each fiber to be physically joined to adjoining fibers so as to prevent movement of the fibers during use and to increase the strength and toughness of the composite filter. Further, the coating can be selected to minimize any reactions between the constituents of the fluids and the fibers. A description is given of the formation of a composite filter using a felt preform of commercial silicon carbide fibers together with the coating of these fibers with pure silicon carbide. Filter efficiency approaching 100% has been demonstrated with these filters. The fiber base material is alternately made from aluminosilicate fibers, zirconia fibers and alumina fibers. Coating with Al₂O₃ is also described. Advanced configurations for the composite filter are suggested.

  6. Principal Component Noise Filtering for NAST-I Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L., Sr.

    2011-01-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and, therefore, further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: applying PC filtering to both dependent and independent datasets, applying PC filtering to dependent calibration data only, applying PC filtering to independent data only, and applying no PC filter. The independent blackbody radiances are predicted for each case and comparisons are made. The results show a significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
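
    The PC filtering idea can be sketched as follows: treat the ensemble of spectra as rows of a matrix, keep only the leading principal components, and reconstruct. This is an assumed generic form, not the exact NAST-I pipeline, and the synthetic "spectra" below are illustrative; in the paper the number of retained components is chosen by minimizing the total RMS error over dependent and independent blackbody sets.

```python
import numpy as np

# Sketch of PC noise filtering: project the ensemble of spectra onto its
# leading k principal components and reconstruct, discarding the trailing
# components that are dominated by noise.
def pc_filter(spectra, k):
    mean = spectra.mean(axis=0)
    X = spectra - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k reconstruction

rng = np.random.default_rng(1)
wave = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 3 * wave)            # one underlying spectral shape
obs = clean + rng.normal(0.0, 0.2, (100, 200))  # 100 noisy spectra
den = pc_filter(obs, k=1)
```

    The RMS error of the reconstructed spectra against the underlying shape drops well below that of the raw noisy spectra.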

  7. Adaptive filtering image preprocessing for smart FPA technology

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1995-05-01

    This paper discusses two applications of adaptive filters for image processing on parallel architectures. The first, based on the results of previously accomplished work, summarizes the analyses of various adaptive filters implemented for pixel-level image prediction. FIR filters, fixed and adaptive IIR filters, and various variable step size algorithms were compared with a focus on algorithm complexity against the ability to predict future pixel values. A Gaussian smoothing operation with varying spatial and temporal constants was also applied for comparisons of random noise reductions. The second application is a suggestion to use memory-adaptive IIR filters for detecting and tracking motion within an image. Objects within an image are made of edges, or segments, with varying degrees of motion. An application has been previously published that describes FIR filters connecting pixels and using correlations to determine motion and direction. That implementation seems limited to detecting motion coinciding with the FIR filter operation rate and the associated harmonics. Upgrading the FIR structures with adaptive IIR structures can eliminate these limitations. These and any other pixel-level adaptive filtering applications require data memory for filter parameters and some basic computational capability. Tradeoffs have to be made between chip real estate and these desired features. System tradeoffs will also have to be made as to where it makes the most sense to do which level of processing. Although smart pixels may not be ready to implement adaptive filters, applications such as these should give the smart pixel designer some long range goals.
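
    A minimal LMS adaptive FIR predictor of the kind compared in the paper can be sketched as below; the filter order, step size, and test signal are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal LMS adaptive FIR predictor: predict the next sample from the last
# `order` samples and adapt the weights along the instantaneous error gradient.
def lms_predict(x, order=4, mu=0.05):
    w = np.zeros(order)
    pred = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]       # most recent samples first
        pred[n] = w @ u                # predicted pixel value
        e = x[n] - pred[n]             # prediction error
        w += 2 * mu * e * u            # LMS weight update
    return pred, w

t = np.arange(400)
x = np.sin(0.1 * t)                    # slowly varying "pixel stream"
pred, w = lms_predict(x)
```

    After an initial adaptation transient the predictor tracks the signal closely; the variable-step-size algorithms mentioned in the abstract trade this convergence speed against steady-state error by adjusting `mu` on the fly.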

  8. Toward the Application of the Implicit Particle Filter to Real Data in a Shallow Water Model of the Nearshore Ocean

    NASA Astrophysics Data System (ADS)

    Miller, R.

    2015-12-01

    Following the success of the implicit particle filter in twin experiments with a shallow water model of the nearshore environment, the planned next step is application to the intensive Sandy Duck data set, gathered at Duck, NC. Adaptation of the present system to the Sandy Duck data set will require construction and evaluation of error models for both the model and the data, as well as significant modification of the system to allow for the properties of the data set. Successful implementation of the particle filter promises to shed light on the details of the capabilities and limitations of shallow water models of the nearshore ocean relative to more detailed models. Since the shallow water model admits distinct dynamical regimes, reliable parameter estimation will be important. Previous work by other groups gives cause for optimism. In this talk I will describe my progress toward implementation of the new system, including problems solved, pitfalls remaining, and preliminary results.

  9. Solc filter engineering

    NASA Technical Reports Server (NTRS)

    Rosenberg, W. J.; Title, A. M.

    1982-01-01

    A Solc (1965) filter configuration is presented which is both tunable and spectrally variable, since it possesses an adjustable bandwidth, and which although less efficient than a Lyot filter is attractive because of its spectral versatility. The lossless design, using only an entrance and exit polarizer, improves throughput generally and especially in the IR, where polarizers are less convenient than dichroic sheet polarizers. Attention is given to the transmission profiles of Solc filters with different numbers of elements and split elements, as well as their mechanical design features.

  10. Multilevel filtering elliptic preconditioners

    NASA Technical Reports Server (NTRS)

    Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles

    1989-01-01

    A class of preconditioners is presented for elliptic problems, built on ideas borrowed from digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method, and a recent method proposed by Bramble-Pasciak-Xu.

  11. HEPA filter jointer

    SciTech Connect

    Hill, D.; Martinez, H.E.

    1998-02-01

    A HEPA filter jointer system was created to remove nitrate contaminated wood from the wooden frames of HEPA filters that are stored at the Rocky Flats Plant. A commercial jointer was chosen to remove the nitrated wood. The chips from the wood removal process are in the right form for caustic washing. The jointer was automated for safety and ease of operation. The HEPA filters are prepared for jointing by countersinking the nails with a modified air hammer. The equipment, computer program, and tests are described in this report.

  12. STEP Experiment Requirements

    NASA Technical Reports Server (NTRS)

    Brumfield, M. L. (Compiler)

    1984-01-01

    A plan to develop a space technology experiments platform (STEP) was examined. NASA Langley Research Center held a STEP Experiment Requirements Workshop on June 29 and 30 and July 1, 1983, at which experiment proposers were invited to present more detailed information on their experiment concept and requirements. A feasibility and preliminary definition study was conducted and the preliminary definition of STEP capabilities and experiment concepts and expected requirements for support services are presented. The preliminary definition of STEP capabilities based on detailed review of potential experiment requirements is investigated. Topics discussed include: Shuttle on-orbit dynamics; effects of the space environment on damping materials; erectable beam experiment; technology for development of very large solar array deployers; thermal energy management process experiment; photovoltaic concentrator pointing dynamics and plasma interactions; vibration isolation technology; flight tests of a synthetic aperture radar antenna with use of STEP.

  13. Tunable Microwave Filter Design Using Thin-Film Ferroelectric Varactors

    NASA Astrophysics Data System (ADS)

    Haridasan, Vrinda

    Military, space, and consumer-based communication markets alike are moving towards multi-functional, multi-mode, and portable transceiver units. Ferroelectric-based tunable filter designs in RF front-ends are a relatively new area of research that provides a potential solution to support wideband and compact transceiver units. This work presents design methodologies developed to optimize a tunable filter design for system-level integration, and to improve the performance of a ferroelectric-based tunable bandpass filter. An investigative approach to find the origins of high insertion loss exhibited by these filters is also undertaken. A system-aware design guideline and figure of merit for ferroelectric-based tunable bandpass filters is developed. The guideline does not constrain the filter bandwidth as long as it falls within the range of the analog bandwidth of a system's analog to digital converter. A figure of merit (FOM) that optimizes filter design for a specific application is presented. It considers the worst-case filter performance parameters and a tuning sensitivity term that captures the relation between frequency tunability and the underlying material tunability. A non-tunable parasitic fringe capacitance associated with ferroelectric-based planar capacitors is confirmed by simulated and measured results. The fringe capacitance is an appreciable proportion of the tunable capacitance at frequencies of X-band and higher. As ferroelectric-based tunable capacitors form tunable resonators in the filter design, a proportionally higher fringe capacitance reduces the capacitance tunability which in turn reduces the frequency tunability of the filter. Methods to reduce the fringe capacitance can thus increase frequency tunability or indirectly reduce the filter insertion-loss by trading off the increased tunability achieved to lower loss. A new two-pole tunable filter topology with high frequency tunability (> 30%), steep filter skirts, wide stopband

  14. Particle flow for nonlinear filters with log-homotopy

    NASA Astrophysics Data System (ADS)

    Daum, Fred; Huang, Jim

    2008-04-01

    We describe a new nonlinear filter that is vastly superior to the classic particle filter. In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems with dimension greater than 2 or 3. We consider nonlinear estimation problems with dimensions varying from 1 to 20 that are smooth and fully coupled (i.e. dense not sparse). The new filter implements Bayes' rule using particle flow rather than with a pointwise multiplication of two functions; this avoids one of the fundamental and well known problems in particle filters, namely "particle collapse" as a result of Bayes' rule. We use a log-homotopy to derive the ODE that describes particle flow. This paper was written for normal engineers, who do not have homotopy for breakfast.

  15. High-Resolution Cortical Dipole Imaging Using Spatial Inverse Filter Based on Filtering Property

    PubMed Central

    2016-01-01

    Cortical dipole imaging has been developed to visualize brain electrical activity with high spatial resolution. It is necessary to solve an inverse problem to estimate the cortical dipole distribution from the scalp potentials. In the present study, the accuracy of cortical dipole imaging was improved by focusing on the filtering property of the spatial inverse filter. We proposed an inverse filter that optimizes the filtering property using a sigmoid function. The ability of the proposed method was compared with traditional inverse techniques, such as Tikhonov regularization, truncated singular value decomposition (TSVD), and truncated total least squares (TTLS), in a computer simulation. The proposed method was applied to human experimental data of visual evoked potentials. As a result, the estimation accuracy was improved and the localized dipole distribution was obtained with less noise. PMID:27688747

  16. High-Resolution Cortical Dipole Imaging Using Spatial Inverse Filter Based on Filtering Property

    PubMed Central

    2016-01-01

    Cortical dipole imaging has been developed to visualize brain electrical activity with high spatial resolution. It is necessary to solve an inverse problem to estimate the cortical dipole distribution from the scalp potentials. In the present study, the accuracy of cortical dipole imaging was improved by focusing on the filtering property of the spatial inverse filter. We proposed an inverse filter that optimizes the filtering property using a sigmoid function. The ability of the proposed method was compared with traditional inverse techniques, such as Tikhonov regularization, truncated singular value decomposition (TSVD), and truncated total least squares (TTLS), in a computer simulation. The proposed method was applied to human experimental data of visual evoked potentials. As a result, the estimation accuracy was improved and the localized dipole distribution was obtained with less noise.
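
    The inverse techniques compared in this record can all be viewed as choices of filter factors applied to the singular components of the forward matrix: Tikhonov damps component i by s_i²/(s_i² + λ²), TSVD applies a hard 0/1 cutoff, and the proposed method shapes the factors with a sigmoid. The sketch below shows the Tikhonov case on a made-up well-posed problem; the matrix, noise level, and λ are illustrative, not from the paper.

```python
import numpy as np

# Regularized inversion via SVD filter factors. Tikhonov damps the i-th
# singular component by f_i = s_i^2 / (s_i^2 + lam^2); TSVD would use a hard
# 0/1 cutoff, and a sigmoid-shaped f_i gives a smooth transition in between.
def tikhonov_inverse(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)           # Tikhonov filter factors
    return Vt.T @ (f / s * (U.T @ b))    # filtered pseudo-inverse solution

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 10))            # illustrative forward matrix
x_true = rng.normal(size=10)
b = A @ x_true + rng.normal(0.0, 0.01, 20)
x_hat = tikhonov_inverse(A, b, lam=0.1)
```

    With small λ and a well-conditioned matrix the filter factors are near one and the solution is close to the ordinary least-squares estimate; for ill-conditioned scalp-potential problems the shape of the factors controls the noise/resolution trade-off.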

  17. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE)-an unbiased estimate of the mean-squared error (MSE). Unlike MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed SURE-optimal bilateral filter (SOBF). We selected the optimal parameters of SOBF using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
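
    For concreteness, a minimal 1-D bilateral filter is sketched below with fixed, hand-picked parameters; the point of the paper is that the spatial and range widths (here `sigma_s` and `sigma_r`) are instead selected by minimizing SURE.

```python
import numpy as np

# Minimal 1-D bilateral filter: each output sample is a weighted average of
# its neighbors, with weights combining spatial closeness and range (value)
# similarity, so noise is smoothed while sharp edges are preserved.
def bilateral_1d(y, sigma_s=2.0, sigma_r=0.5, radius=5):
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - radius), min(len(y), i + radius + 1)
        d = np.arange(lo, hi) - i
        w = np.exp(-d**2 / (2 * sigma_s**2)                       # spatial term
                   - (y[lo:hi] - y[i])**2 / (2 * sigma_r**2))     # range term
        out[i] = w @ y[lo:hi] / w.sum()
    return out

rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(50), np.full(50, 4.0)])   # edge to preserve
noisy = clean + rng.normal(0.0, 0.2, 100)
den = bilateral_1d(noisy)
```

    The range term keeps samples on opposite sides of the edge from averaging into each other, which is what distinguishes the bilateral filter from a plain Gaussian smoother.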

  18. Active-R filter

    DOEpatents

    Soderstrand, Michael A.

    1976-01-01

    An operational amplifier-type active filter in which the only capacitor in the circuit is the compensating capacitance of the operational amplifiers, the various feedback and coupling elements being essentially solely resistive.

  19. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
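
    The DFT-IDFT overlap-and-save method at the core of these architectures can be illustrated in its textbook scalar form (without the subfilter decomposition that the report adds on top):

```python
import numpy as np

# Textbook overlap-save FFT filtering: each length-nfft block overlaps the
# previous one by len(h)-1 samples; the first len(h)-1 outputs of each
# circular convolution are wrapped (corrupted) and therefore discarded.
def overlap_save(x, h, nfft=64):
    n, m = len(x), len(h)
    hop = nfft - (m - 1)                 # valid output samples per block
    H = np.fft.fft(h, nfft)
    xp = np.concatenate([np.zeros(m - 1), x, np.zeros(nfft)])
    out = []
    for start in range(0, n, hop):
        block = xp[start:start + nfft]
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[m - 1:])            # keep only the uncorrupted samples
    return np.concatenate(out)[:n]       # first n samples of the convolution

x = np.arange(100.0)
h = np.array([0.5, 0.3, 0.2])
y = overlap_save(x, h, nfft=16)          # matches direct convolution
```

    The result equals direct convolution; the architectures in this report additionally split a long `h` into frequency-domain subfilters so that the DFT-IDFT size is set by the desired processing-rate reduction rather than by the filter order.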

  20. HEPA air filter (image)

    MedlinePlus

    ... pet dander and other irritating allergens from the air. Along with other methods to reduce allergens, such ... controlling the amount of allergens circulating in the air. HEPA filters can be found in most air ...