Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement
NASA Astrophysics Data System (ADS)
Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.
In this modern era, image transmission and processing plays a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothening filters and we introduce a promising technique of obtaining best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Design of order statistics filters using feedforward neural networks
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2016-08-01
In recent years significant progress have been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters based on order statistics. The widely used median filter is the best known order statistic filter. Generalized form of these filters could be presented based on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present special approach for synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used for selecting of initial weights for the neural network. Adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with asymmetric distribution function. Different examples demonstrate the properties and performance of presented approach.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specific-ally addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to above problems, and can be a useful tool in earthquake engineering.
Bowman, Wesley A; Robar, James L; Sattarivand, Mike
2017-03-01
Stereoscopic x-ray image guided radiotherapy for lung tumors is often hindered by bone overlap and limited soft-tissue contrast. This study aims to evaluate the feasibility of dual-energy imaging techniques and to optimize parameters of the ExacTrac stereoscopic imaging system to enhance soft-tissue imaging for application to lung stereotactic body radiation therapy. Simulated spectra and a physical lung phantom were used to optimize filter material, thickness, tube potentials, and weighting factors to obtain bone subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number range (3-83) based on a metric defined to separate spectra of high and low-energies. Both energies used the same filter due to time constraints of imaging in the presence of respiratory motion. The lung phantom contained bone, soft tissue, and tumor mimicking materials, and it was imaged with a filter thickness in the range of (0-0.7) mm and a kVp range of (60-80) for low energy and (120,140) for high energy. Optimal dual-energy weighting factors were obtained when the bone to soft-tissue contrast-to-noise ratio (CNR) was minimized. Optimal filter thickness and tube potential were achieved by maximizing tumor-to-background CNR. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom with a spherical tumor mimicking material inserted in his lung were acquired and evaluated for bone subtraction and tumor contrast. Imaging dose was measured using the dual-energy technique with and without beam filtration and matched to that of a clinical conventional single energy technique. Tin was the material of choice for beam filtering providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-weighted image in the lung phantom was obtained using 0.2 mm tin and (140, 60) kVp pair. Dual-energy images of the Rando phantom with the tin filter had noticeable improvement in bone elimination, tumor contrast, and noise content when compared to dual-energy imaging with no filtration. The surface dose was 0.52 mGy per each stereoscopic view for both clinical single energy technique and the dual-energy technique in both cases of with and without the tin filter. Dual-energy soft-tissue imaging is feasible without additional imaging dose using the ExacTrac stereoscopic imaging system with optimized acquisition parameters and no beam filtration. Addition of a single tin filter for both the high and low energies has noticeable improvements on dual-energy imaging with optimized parameters. Clinical implementation of a dual-energy technique on ExacTrac stereoscopic imaging could improve lung tumor visibility. © 2017 American Association of Physicists in Medicine.
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for non-Gaussian problems with error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distribution as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.
Hernandez, Wilmar
2007-01-01
In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researcher's interest in the fusion of intelligent sensors and optimal signal processing techniques.
A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)
NASA Astrophysics Data System (ADS)
Li, Minghui; Hayward, Gordon
2017-02-01
The matched filter was demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse grain materials, provided that the filter was properly designed and optimized. In the literature, in order to accurately approximate the defect echoes, the design utilized the real excitation signals, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using the simulated excitation signals, and the control parameters are chosen and optimized based on the real scenario of array transducer, transmitter-receiver system response, and the test sample, as a result, the filter response is optimized and depends on the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.
NASA Technical Reports Server (NTRS)
Houts, R. C.; Burlage, D. W.
1972-01-01
A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
Optimized Beam Sculpting with Generalized Fringe-rate Filters
NASA Astrophysics Data System (ADS)
Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina
2016-03-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 {cm} power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
An optimal filter for short photoplethysmogram signals
Liang, Yongbo; Elgendi, Mohamed; Chen, Zhencheng; Ward, Rabab
2018-01-01
A photoplethysmogram (PPG) contains a wealth of cardiovascular system information, and with the development of wearable technology, it has become the basic technique for evaluating cardiovascular health and detecting diseases. However, due to the varying environments in which wearable devices are used and, consequently, their varying susceptibility to noise interference, effective processing of PPG signals is challenging. Thus, the aim of this study was to determine the optimal filter and filter order to be used for PPG signal processing to make the systolic and diastolic waves more salient in the filtered PPG signal using the skewness quality index. Nine types of filters with 10 different orders were used to filter 219 (2.1s) short PPG signals. The signals were divided into three categories by PPG experts according to their noise levels: excellent, acceptable, or unfit. Results show that the Chebyshev II filter can improve the PPG signal quality more effectively than other types of filters and that the optimal order for the Chebyshev II filter is the 4th order. PMID:29714722
Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos
NASA Technical Reports Server (NTRS)
Alvarez, L. S.; Nickerson, J.
1989-01-01
The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
Kumar, M; Mishra, S K
2017-01-01
The clinical magnetic resonance imaging (MRI) images may get corrupted due to the presence of the mixture of different types of noises such as Rician, Gaussian, impulse, etc. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need to develop a nonlinear adaptive filter that adapts itself according to the requirement and effectively applied for suppression of mixed noise from different MRI images. In view of this, a novel nonlinear neural network based adaptive filter i.e. functional link artificial neural network (FLANN) whose weights are trained by a recently developed derivative free meta-heuristic technique i.e. teaching learning based optimization (TLBO) is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and evaluating the nonparametric statistical test. The convergence curve and computational time are also included for investigating the efficiency of the proposed as well as competitive filters. The simulation outcomes of proposed filter outperform the other adaptive filters. The proposed filter can be hybridized with other evolutionary technique and utilized for removing different noise and artifacts from others medical images more competently.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on feedback formulation and has become one of the most preferred multiple suppression methods used. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may result in conventional adaptive multiple subtraction methods being barely able to effectively suppress multiples in actual production. This paper introduces a combined adaptive multiple attenuation method based on the optimized event tracing technique and extended Wiener filtering. The method firstly uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record to an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can then be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is an ideal method for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage of the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
Comparison of weighting techniques for acoustic full waveform inversion
NASA Astrophysics Data System (ADS)
Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo
2017-12-01
To reconstruct long-wavelength structures in full waveform inversion (FWI), the wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points. The application of wavefield-damping method on filtered data fails to synthesize reliable low-frequency data; the optimization formula obtained introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and present how to overcome them. We demonstrate that the source estimation in FWI using damped wavefields fails when the data used in the FWI process does not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to data. We overcome this limitation by designing a causal filter. Also we modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter enables to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and the proposed weighting technique enhances the inversion results.
Example-based human motion denoising.
Lou, Hui; Chai, Jinxiang
2010-01-01
With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and to those in state-of-art motion capture data processing software such as Vicon Blade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dong Sik; Lee, Sanggyun
2013-06-15
Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequenciesmore » are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.« less
Shank, B.; Yen, J. J.; Cabrera, B.; ...
2014-11-04
We present a detailed thermal and electrical model of superconducting transition edge sensors (TESs) connected to quasiparticle (qp) traps, such as the W TESs connected to Al qp traps used for CDMS (Cryogenic Dark Matter Search) Ge and Si detectors. We show that this improved model, together with a straightforward time-domain optimal filter, can be used to analyze pulses well into the nonlinear saturation region and reconstruct absorbed energies with optimal energy resolution.
Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.
Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon
2017-02-24
The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error of 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers.
Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement
Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon
2017-01-01
The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error of 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers. PMID:28245584
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
Optimal sharpening of compensated comb decimation filters: analysis and design.
Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature.
Superconducting Magnetometry for Cardiovascular Studies and AN Application of Adaptive Filtering.
NASA Astrophysics Data System (ADS)
Leifer, Mark Curtis
Sensitive magnetic detectors utilizing Superconducting Quantum Interference Devices (SQUID's) have been developed and used for studying the cardiovascular system. The theory of magnetic detection of cardiac currents is discussed, and new experimental data supporting the validity of the theory is presented. Measurements on both humans and dogs, in both healthy and diseased states, are presented using the new technique, which is termed vector magnetocardiography. In the next section, a new type of superconducting magnetometer with a room temperature pickup is analyzed, and techniques for optimizing its sensitivity to low-frequency sub-microamp currents are presented. Performance of the actual device displays significantly improved sensitivity in this frequency range, and the ability to measure currents in intact, in vivo biological fibers. The final section reviews the theoretical operation of a digital self-optimizing filter, and presents a four-channel software implementation of the system. The application of the adaptive filter to enhancement of geomagnetic signals for earthquake forecasting is discussed, and the adaptive filter is shown to outperform existing techniques in suppressing noise from geomagnetic records.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Conversion and matched filter approximations for serial minimum-shift keyed modulation
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.
1982-01-01
Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using series filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of .000001.
Kaliyadan, Antony G; Chawla, Harnish; Fischman, David L; Ruggiero, Nicholas; Gannon, Michael; Walinsky, Paul; Savage, Michael P
2017-02-01
This study assessed the impact of adjunct delivery techniques on the deployment success of distal protection filters in saphenous vein grafts (SVGs). Despite their proven clinical benefit, distal protection devices are underutilized in SVG interventions. Deployment of distal protection filters can be technically challenging in the presence of complex anatomy. Techniques that facilitate the delivery success of these devices could potentially improve clinical outcomes and promote greater use of distal protection. Outcomes of 105 consecutive SVG interventions with attempted use of a FilterWire distal protection device (Boston Scientific) were reviewed. In patients in whom filter delivery initially failed, the success of attempted redeployment using adjunct delivery techniques was assessed. Two strategies were utilized sequentially: (1) a 0.014" moderate-stiffness hydrophilic guidewire was placed first to function as a parallel buddy wire to support subsequent FilterWire crossing; and (2) if the buddy-wire approach failed, predilation with a 2.0 mm balloon at low pressure was performed followed by reattempted filter delivery. The study population consisted of 80 men and 25 women aged 73 ± 10 years. Mean SVG age was 14 ± 6 years. Complex disease (American College of Cardiology/American Heart Association class B2 or C) was present in 92%. Initial delivery of the FilterWire was successful in 82/105 patients (78.1%). Of the 23 patients with initial failed delivery, 8 (35%) had successful deployment with a buddy wire alone, 7 (30%) had successful deployment with balloon predilation plus buddy wire, 4 (17%) had failed reattempt at deployment despite adjunct maneuvers, and in 4 (17%) no additional attempts at deployment were made at the operator's discretion. Deployment failure was reduced from 21.9% initially to 7.6% after use of adjunct delivery techniques (P<.01). No adverse events were observed with these measures. Deployment of distal protection devices can be technically difficult with complex SVG disease. Adjunct delivery techniques are important to optimize deployment success of distal protection filters during SVG intervention.
NASA Technical Reports Server (NTRS)
Canfield, Stephen
1999-01-01
This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P
2013-01-01
In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
Spectral optimized asymmetric segmented phase-only correlation filter.
Leonard, I; Alfalou, A; Brosseau, C
2012-05-10
We suggest a new type of optimized composite filter, i.e., the asymmetric segmented phase-only filter (ASPOF), for improving the effectiveness of a VanderLugt correlator (VLC) when used for face identification. Basically, it consists in merging several reference images after application of a specific spectral optimization method. After segmentation of the spectral filter plane to several areas, each area is assigned to a single winner reference according to a new optimized criterion. The point of the paper is to show that this method offers a significant performance improvement on standard composite filters for face identification. We first briefly revisit composite filters [adapted, phase-only, inverse, compromise optimal, segmented, minimum average correlation energy, optimal trade-off maximum average correlation, and amplitude-modulated phase-only (AMPOF)], which are tools of choice for face recognition based on correlation techniques, and compare their performances with those of the ASPOF. We illustrate some of the drawbacks of current filters for several binary and grayscale image identifications. Next, we describe the optimization steps and introduce the ASPOF that can overcome these technical issues to improve the quality and the reliability of the correlation-based decision. We derive performance measures, i.e., PCE values and receiver operating characteristic curves, to confirm consistency of the results. We numerically find that this filter increases the recognition rate and decreases the false alarm rate. The results show that the discrimination of the ASPOF is comparable to that of the AMPOF, but the ASPOF is more robust than the trade-off maximum average correlation height against rotation and various types of noise sources. Our method has several features that make it amenable to experimental implementation using a VLC.
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.
Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
NASA Technical Reports Server (NTRS)
Lei, Shaw-Min; Yao, Kung
1990-01-01
A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carton, Ann-Katherine; Ullberg, Christer; Lindman, Karin
2010-11-15
Purpose: Dual-energy (DE) iodine contrast-enhanced x-ray imaging of the breast has been shown to identify cancers that would otherwise be mammographically occult. In this article, theoretical modeling was performed to obtain optimally enhanced iodine images for a photon-counting digital breast tomosynthesis (DBT) system using a DE acquisition technique. Methods: In the system examined, the breast is scanned with a multislit prepatient collimator aligned with a multidetector camera. Each detector collects a projection image at a unique angle during the scan. Low-energy (LE) and high-energy (HE) projection images are acquired simultaneously in a single scan by covering alternate collimator slits withmore » Sn and Cu filters, respectively. Sn filters ranging from 0.08 to 0.22 mm thickness and Cu filters from 0.11 to 0.27 mm thickness were investigated. A tube voltage of 49 kV was selected. Tomographic images, hereafter referred to as DBT images, were reconstructed using a shift-and-add algorithm. Iodine-enhanced DBT images were acquired by performing a weighted logarithmic subtraction of the HE and LE DBT images. The DE technique was evaluated for 20-80 mm thick breasts. Weighting factors, w{sub t}, that optimally cancel breast tissue were computed. Signal-difference-to-noise ratios (SDNRs) between iodine-enhanced and nonenhanced breast tissue normalized to the square root of the mean glandular dose (MGD) were computed as a function of the fraction of the MGD allocated to the HE images. Peak SDNR/{radical}(MGD) and optimal dose allocations were identified. SDNR/{radical}(MGD) and dose allocations were computed for several practical feasible system configurations (i.e., determined by the number of collimator slits covered by Sn and Cu). A practical system configuration and Sn-Cu filter pair that accounts for the trade-off between SDNR, tube-output, and MGD were selected. Results: w{sub t} depends on the Sn-Cu filter combination used, as well as on the breast thickness; to optimally cancel 0% with 50% glandular breast tissue, w{sub t} values were found to range from 0.46 to 0.72 for all breast thicknesses and Sn-Cu filter pairs studied. The optimal w{sub t} values needed to cancel all possible breast tissue glandularites vary by less than 1% for 20 mm thick breasts and 18% for 80 mm breasts. The system configuration where one collimator slit covered by Sn is alternated with two collimator slits covered by Cu delivers SDNR/{radical}(MGD) nearest to the peak value. A reasonable compromise is a 0.16 mm Sn-0.23 mm Cu filter pair, resulting in SDNR values between 1.64 and 0.61 and MGD between 0.70 and 0.53 mGy for 20-80 mm thick breasts at the maximum tube current. Conclusions: A DE acquisition technique for a photon-counting DBT imaging system has been developed and optimized.« less
An ultra-low-power filtering technique for biomedical applications.
Zhang, Tan-Tan; Mak, Pui-In; Vai, Mang-I; Mak, Peng-Un; Wan, Feng; Martins, R P
2011-01-01
This paper describes an ultra-low-power filtering technique for biomedical applications designated as T-wave sensing in heart-activities detection systems. The topology is based on a source-follower-based Biquad operating in the sub-threshold region. With the intrinsic advantages of simplicity and high linearity of the source-follower, ultra-low-cutoff filtering can be achieved, simultaneously with ultra low power and good linearity. An 8(th)-order 2.4-Hz lowpass filter design example optimized in a 0.35-μm CMOS process was designed achieving over 85-dB dynamic range, 74-dB stopband attenuation and consuming only 0.36 nW at a 3-V supply.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Wesley; Sattarivand, Mike
Objective: To optimize dual-energy parameters of ExacTrac stereoscopic x-ray imaging system for lung SBRT patients Methods: Simulated spectra and a lung phantom were used to optimize filter material, thickness, kVps, and weighting factors to obtain bone subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number (Z) range [3–83] based on a metric defined to separate spectrums of high and low energies. Both energies used the same filter due to time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor mimicking material was imaged with filter thicknessesmore » range [0–1] mm and kVp range [60–140]. A cost function based on contrast-to-noise-ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured with dual-energy technique using tin filtering. Results: Tin was the material of choice providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm tin and [140, 80] kVp pair. Dual-energy images of the Rando phantom had noticeable bone elimination when compared to no filtration. Dose was lower with tin filtering compared to no filtration. Conclusions: Dual-energy soft-tissue imaging is feasible using ExacTrac stereoscopic imaging system utilizing a single tin filter for both high and low energies and optimized acquisition parameters.« less
Adapted all-numerical correlator for face recognition applications
NASA Astrophysics Data System (ADS)
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits (detection, localization, and identification of a target object within a scene) of correlation methods and exploit the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapt the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performances of several composite filters (phase-only filter, binary phase-only filter). The saturation effect has for effect to decrease the performances of the correlator for making a decision when filters contain up to nine references. Further, an optimization is proposed based for an optimized segmented composite filter. Based on this approach, we present tests with different faces demonstrating that the above mentioned saturation effect is significantly reduced while minimizing the size of the learning data base.
Least-mean-square spatial filter for IR sensors.
Takken, E H; Friedman, D; Milton, A F; Nitzberg, R
1979-12-15
A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
Optimal exposure techniques for iodinated contrast enhanced breast CT
NASA Astrophysics Data System (ADS)
Glick, Stephen J.; Makeev, Andrey
2016-03-01
Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need for improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. 2-Channel QMF is also designed with particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in frequency domain as sum of L2 norm of error in passband, stopband and transition band at quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives best result in terms of peak reconstruction error and transition band error while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as a best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Optimality study of a gust alleviation system for light wing-loading STOL aircraft
NASA Technical Reports Server (NTRS)
Komoda, M.
1976-01-01
An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flaps and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing a spatially inhomogeneous edge-preserving image smoothing, called fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory-and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a comparable runtime to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results as the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
Optimally Distributed Kalman Filtering with Data-Driven Communication †
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
NASA Technical Reports Server (NTRS)
Rajan, P. K.; Khan, Ajmal
1993-01-01
Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of a SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to not look for specific radon progeny values but rather transuranic activity. By using a method that will provide reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity taken over a 10-mo period, an optimization of the fitting function and quality tests for this purpose was attained.
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called Cat Swarm Optimization (CSO) algorithm is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO is generated by observing the behaviour of cats and composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its' own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO based approach have been compared to those of other well-known optimization methods such as Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performances of the CSO based designed FIR filters have proven to be superior as compared to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among other relevant techniques, not only in the convergence speed but also in the optimal performances of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gill, K; Aldoohan, S; Collier, J
Purpose: Study image optimization and radiation dose reduction in pediatric shunt CT scanning protocol through the use of different beam-hardening filters Methods: A 64-slice CT scanner at OU Childrens Hospital has been used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective-doses based on the concept of CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol that has been optimized for the intrinsic x-ray tube filter has been used to evaluate CNR by acquiring images using the ACR approved CT-phantom and radiation dose CTphantom, which was used to measuremore » CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e. Tungsten, Tantalum, Titanium, Nickel and Copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner routine air calibration was run for each added filter. The image CNR was evaluated for different kVps and wide range of mAs values using above mentioned beam-hardening filters. These scanning protocols were run under axial as well as under helical techniques. The CTDIvol and the effective-dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: Beam-hardening filter shapes energy spectrum, which reduces the dose by 27%. No noticeable changes in image low contrast detectability Conclusion: Effective-dose is very much dependent on the CTDIVol, which is further very much dependent on beam-hardening filters. Substantial reduction in effective-dose is realized using beam-hardening filters as compare to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in CT pediatric shunt scanning protocols without compromising in diagnostic value of image quality.« less
NASA Astrophysics Data System (ADS)
Jena, D. P.; Panigrahi, S. N.
2016-03-01
Requirement of designing a sophisticated digital band-pass filter in acoustic based condition monitoring has been eliminated by introducing a passive acoustic filter in the present work. So far, no one has attempted to explore the possibility of implementing passive acoustic filters in acoustic based condition monitoring as a pre-conditioner. In order to enhance the acoustic based condition monitoring, a passive acoustic band-pass filter has been designed and deployed. Towards achieving an efficient band-pass acoustic filter, a generalized design methodology has been proposed to design and optimize the desired acoustic filter using multiple filter components in series. An appropriate objective function has been identified for genetic algorithm (GA) based optimization technique with multiple design constraints. In addition, the sturdiness of the proposed method has been demonstrated in designing a band-pass filter by using an n-branch Quincke tube, a high pass filter and multiple Helmholtz resonators. The performance of the designed acoustic band-pass filter has been shown by investigating the piston-bore defect of a motor-bike using engine noise signature. On the introducing a passive acoustic filter in acoustic based condition monitoring reveals the enhancement in machine learning based fault identification practice significantly. This is also a first attempt of its own kind.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond the classical meteorological forecasts are increasing the last years due to the great number of applications that are directly or not affected: renewable energy resource assessment, natural hazards early warning systems, global warming and questions on the climate change can be listed among them. Within this framework the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical. The added value of the proposed work is due to the solid mathematical background adopted making use of Information Geometry and Statistical techniques, new versions of Kalman filters and state of the art numerical analysis tools.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.
Design of efficient circularly symmetric two-dimensional variable digital FIR filters.
Bindima, Thayyil; Elias, Elizabeth
2016-05-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.
Design of efficient circularly symmetric two-dimensional variable digital FIR filters
Bindima, Thayyil; Elias, Elizabeth
2016-01-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739
Saito, Masatoshi
2007-11-01
Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity-in this case, mean glandular dose. For an imaging system with a 0.1 mm thick CsI:T1 scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components-acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
Pan, Y; Zhao, J; Mei, J; Shao, M; Zhang, J; Wu, H
2016-12-01
The incidence of thrombus was investigated within retrievable filters placed in trauma patients with confirmed DVT at the time of retrieval and the optimal treatment for this clinical scenario was assessed. A technique called "filter retrieval with manual negative pressure aspiration thrombectomy" for management of filter thrombus was introduced and assessed. The retrievable filters referred for retrieval between January 2008 and December 2015 were retrospectively reviewed to determine the incidence of filter thrombus on a pre-retrieval cavogram. The clinical outcomes of different managements for thrombus within filters were recorded and analyzed. During the study 764 patients having Aegisy Filters implanted were referred for filter removal, from which 236 cases (134 male patients, mean age 50.2 years) of thrombus within the filter were observed on initial pre-retrieval IVC venogram 12-39 days after insertion (average 16.9 days). The incidence of infra-filter thrombus was 30.9%, and complete occlusion of the filter bearing IVC was seen in 2.4% (18) of cases. Retrieval was attempted in all 121 cases with small clots using a regular snare and sheath technique, and was successful in 120. A total of 116 cases with massive thrombus and IVC occlusion by thrombus were treated by CDT and/or the new retrieval technique. Overall, 213 cases (90.3%) of thrombus in the filter were removed successfully without PE. A small thrombus within the filter can be safely removed without additional management. CDT for reduction of the clot burden in filters was effective and safe. Filter retrieval with manual negative pressure aspiration thrombectomy seemed reasonable and valuable for management of massive thrombus within filters in some patients. Full assessment of the value and safety of this technique requires additional studies. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared. In addition, the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images, and comparatively analyzed with the original image without the algorithm. As a result, applying DWT and filtering techniques caused information loss and noise characteristics, and did not represent the most significant noise reduction performance. Conversely, an image fusion method applying SRAD-original conditions preserved the key information in the original image, and the speckle noise was removed. Based on such characteristics, the input conditions of SRAD-original had the best denoising performance with the ultrasound images. From this study, the best denoising technique proposed based on the results was confirmed to have a high potential for clinical application.
Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.
Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M
2015-03-01
Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
SkyMapper Filter Set: Design and Fabrication of Large-Scale Optical Filters
NASA Astrophysics Data System (ADS)
Bessell, Michael; Bloxham, Gabe; Schmidt, Brian; Keller, Stefan; Tisserand, Patrick; Francis, Paul
2011-07-01
The SkyMapper Southern Sky Survey will be conducted from Siding Spring Observatory with u, v, g, r, i, and z filters that comprise glued glass combination filters with dimensions of 309 × 309 × 15 mm. In this article we discuss the rationale for our bandpasses and physical characteristics of the filter set. The u, v, g, and z filters are entirely glass filters, which provide highly uniform bandpasses across the complete filter aperture. The i filter uses glass with a short-wave pass coating, and the r filter is a complete dielectric filter. We describe the process by which the filters were constructed, including the processes used to obtain uniform dielectric coatings and optimized narrowband antireflection coatings, as well as the technique of gluing the large glass pieces together after coating using UV transparent epoxy cement. The measured passbands, including extinction and CCD QE, are presented.
Angland, P.; Haberberger, D.; Ivancic, S. T.; ...
2017-10-30
Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iterative converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density pro files of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed by a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation is based on a minimization of themore » $$\\chi$$2 test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in average uncertainty in the density profile of 5-10% in the region of interest.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angland, P.; Haberberger, D.; Ivancic, S. T.
Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iterative converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density pro files of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed by a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation is based on a minimization of themore » $$\\chi$$2 test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in average uncertainty in the density profile of 5-10% in the region of interest.« less
NASA Astrophysics Data System (ADS)
Wu, Yuechen; Chrysler, Benjamin; Kostuk, Raymond K.
2018-01-01
The technique of designing, optimizing, and fabricating broadband volume transmission holograms using dichromate gelatin (DCG) is summarized for solar spectrum-splitting applications. The spectrum-splitting photovoltaic (PV) system uses a series of single-bandgap PV cells that have different spectral conversion efficiency properties to more fully utilize the solar spectrum. In such a system, one or more high-performance optical filters are usually required to split the solar spectrum and efficiently send them to the corresponding PV cells. An ideal spectral filter should have a rectangular shape with sharp transition wavelengths. A methodology of designing and modeling a transmission DCG hologram using coupled wave analysis for different PV bandgap combinations is described. To achieve a broad diffraction bandwidth and sharp cutoff wavelength, a cascaded structure of multiple thick holograms is described. A search algorithm is then developed to optimize both single- and two-layer cascaded holographic spectrum-splitting elements for the best bandgap combinations of two- and three-junction spectrum-splitting photovoltaic (SSPV) systems illuminated under the AM1.5 solar spectrum. The power conversion efficiencies of the optimized systems are found to be 42.56% and 48.41%, respectively, using the detailed balance method, and show an improvement compared with a tandem multijunction system. A fabrication method for cascaded DCG holographic filters is also described and used to prototype the optimized filter for the three-junction SSPV system.
Silicon oxide nanoparticles doped PQ-PMMA for volume holographic imaging filters.
Luo, Yuan; Russo, Juan M; Kostuk, Raymond K; Barbastathis, George
2010-04-15
Holographic imaging filters are required to have high Bragg selectivity, namely, narrow angular and spectral bandwidth, to obtain spatial-spectral information within a three-dimensional object. In this Letter, we present the design of holographic imaging filters formed using silicon oxide nanoparticles (nano-SiO(2)) in phenanthrenquinone-poly(methyl methacrylate) (PQ-PMMA) polymer recording material. This combination offers greater Bragg selectivity and increases the diffraction efficiency of holographic filters. The holographic filters with optimized ratio of nano-SiO(2) in PQ-PMMA can significantly improve the performance of Bragg selectivity and diffraction efficiency by 53% and 16%, respectively. We present experimental results and data analysis demonstrating this technique in use for holographic spatial-spectral imaging filters.
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters are based on the robust order-statistic theory and the minimization of the weighted distance function to other input samples. The proposed method can be designed to perform a variety of filtering operations including previously developed filtering techniques such as vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we provide also the generalized optimization algorithms taking the advantage of the weighted median filters and the relationship between standard median filter and vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method holds the required properties such as the capability of modelling the underlying system in the application at hand, the robustness with respect to errors in the model of underlying system, the availability of the training procedure and finally, the simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
An estimator-predictor approach to PLL loop filter design
NASA Technical Reports Server (NTRS)
Statman, J. I.; Hurd, W. J.
1986-01-01
An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
Optimized digital filtering techniques for radiation detection with HPGe detectors
NASA Astrophysics Data System (ADS)
Salathe, Marco; Kihm, Thomas
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of 1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W
2016-01-01
Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies. © 2015 John Wiley & Sons Ltd.
Principal Component Noise Filtering for NAST-I Radiometric Calibration
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L., Sr.
2011-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed- Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts, therefore, further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: apply PC filtering to both dependent and independent datasets, apply PC filtering to dependent calibration data only, apply PC filtering to independent data only, and no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
Belavkin filter for mixture of quadrature and photon counting process with some control techniques
NASA Astrophysics Data System (ADS)
Garg, Naman; Parthasarathy, Harish; Upadhyay, D. K.
2018-03-01
The Belavkin filter for the H-P Schrödinger equation is derived when the measurement process consists of a mixture of quantum Brownian motions and conservation/Poisson process. Higher-order powers of the measurement noise differentials appear in the Belavkin dynamics. For simulation, we use a second-order truncation. Control of the Belavkin filtered state by infinitesimal unitary operators is achieved in order to reduce the noise effects in the Belavkin filter equation. This is carried out along the lines of Luc Bouten. Various optimization criteria for control are described like state tracking and Lindblad noise removal.
Computer image processing - The Viking experience. [digital enhancement techniques
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems, it only suggests that an ideal estimation technique should estimate all parameters simultaneously whether it is online or offline. PMID:25002829
Technology optimization techniques for multicomponent optical band-pass filter manufacturing
NASA Astrophysics Data System (ADS)
Baranov, Yuri P.; Gryaznov, Georgiy M.; Rodionov, Andrey Y.; Obrezkov, Andrey V.; Medvedev, Roman V.; Chivanov, Alexey N.
2016-04-01
Narrowband optical devices (like IR-sensing devices, celestial navigation systems, solar-blind UV-systems and many others) are one of the most fast-growing areas in optical manufacturing. However, signal strength in this type of applications is quite low and performance of devices depends on attenuation level of wavelengths out of operating range. Modern detectors (photodiodes, matrix detectors, photomultiplier tubes and others) usually do not have required selectivity or have higher sensitivity to background spectrum at worst. Manufacturing of a single component band-pass filter with high attenuation level of wavelength is resource-intensive task. Sometimes it's not possible to find solution for this problem using existing technologies. Different types of filters have technology variations of transmittance profile shape due to various production factors. At the same time there are multiple tasks with strict requirements for background spectrum attenuation in narrowband optical devices. For example, in solar-blind UV-system wavelengths above 290-300 nm must be attenuated by 180dB. In this paper techniques of multi-component optical band-pass filters assembly from multiple single elements with technology variations of transmittance profile shape for optimal signal-tonoise ratio (SNR) were proposed. Relationships between signal-to-noise ratio and different characteristics of transmittance profile shape were shown. Obtained practical results were in rather good agreement with our calculations.
Optimal estimation of parameters and states in stochastic time-varying systems with time delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-08-01
In this study estimation of parameters and states in stochastic linear and nonlinear delay differential systems with time-varying coefficients and constant delay is explored. The approach consists of first employing a continuous time approximation to approximate the stochastic delay differential equation with a set of stochastic ordinary differential equations. Then the problem of parameter estimation in the resulting stochastic differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the resulting system, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states.
Directional bilateral filters for smoothing fluorescence microscopy images
NASA Astrophysics Data System (ADS)
Venkatesh, Manasij; Mohan, Kavya; Seelamantula, Chandra Sekhar
2015-08-01
Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information and it is important that an image smoothing or filtering technique preserve the directionality. F-actin filaments are widely studied in pathology because the abnormalities in actin dynamics play a key role in diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor and the parameters of the bilateral filter are optimized for within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low NA images of F-actin filaments.
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
Tiley, J S; Viswanathan, G B; Shiveley, A; Tschopp, M; Srinivasan, R; Banerjee, R; Fraser, H L
2010-08-01
Precipitates of the ordered L1(2) gamma' phase (dispersed in the face-centered cubic or FCC gamma matrix) were imaged in Rene 88 DT, a commercial multicomponent Ni-based superalloy, using energy-filtered transmission electron microscopy (EFTEM). Imaging was performed using the Cr, Co, Ni, Ti and Al elemental L-absorption edges in the energy loss spectrum. Manual and automated segmentation procedures were utilized for identification of precipitate boundaries and measurement of precipitate sizes. The automated region growing technique for precipitate identification in images was determined to measure accurately precipitate diameters. In addition, the region growing technique provided a repeatable method for optimizing segmentation techniques for varying EFTEM conditions. (c) 2010 Elsevier Ltd. All rights reserved.
Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems
NASA Astrophysics Data System (ADS)
Yang, Le; Wang, Shuo; Feng, Jianghua
2017-11-01
Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the trough system, are difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifierinverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifierinverter systems are also summarized. EMI noise suppression techniques, including passive filter, Wheatstone bridge balance, active filter, and optimized modulation, are reviewed and compared based on the VFD system models.
Optimal gains for a single polar orbiting satellite
NASA Technical Reports Server (NTRS)
Banfield, Don; Ingersoll, A. P.; Keppenne, C. L.
1993-01-01
Gains are the spatial weighting of an observation in its neighborhood versus the local values of a model prediction. They are the key to data assimilation, as they are the direct measure of how the data are used to guide the model. As derived in the broad context of data assimilation by Kalman and in the context of meteorology, for example, by Rutherford, the optimal gains are functions of the prediction error covariances between the observation and analysis points. Kalman introduced a very powerful technique that allows one to calculate these optimal gains at the time of each observation. Unfortunately, this technique is both computationally expensive and often numerically unstable for dynamical systems of the magnitude of meteorological models, and thus is unsuited for use in PMIRR data assimilation. However, the optimal gains as calculated by a Kalman filter do reach a steady state for regular observing patterns like that of a satellite. In this steady state, the gains are constants in time, and thus could conceivably be computed off-line. These steady-state Kalman gains (i.e., Wiener gains) would yield optimal performance without the computational burden of true Kalman filtering. We proposed to use this type of constant-in-time Wiener gain for the assimilation of data from PMIRR and Mars Observer.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuous differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particlemore » swarm, and genetic algorithm, which rely solely on objective function evaluations; that is, they do not evaluate the gradient in the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema in non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering address, difficulties associated with the non-smooth response, and their performances, are shown to significantly decrease the computational time over the global optimization methods. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.« less
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
NASA Astrophysics Data System (ADS)
Disselkamp, R. S.; Kelly, J. F.; Sams, R. L.; Anderson, G. A.
Optical feedback to the laser source in tunable diode laser spectroscopy (TDLS) is known to create intensity modulation noise due to elatoning and optical feedback (i.e. multiplicative technical noise) that usually limits spectral signal-to-noise (S/N). The large technical noise often limits absorption spectroscopy to noise floors 100-fold greater than the Poisson shot noise limit due to fluctuations in the laser intensity. The high output powers generated from quantum cascade (QC) lasers, along with their high gain, makes these injection laser systems especially susceptible to technical noise. In this article we discuss a method of using optimal filtering to reduce technical noise. We have observed S/N enhancements ranging from 20% to a factor of 50. The degree to which optimal filtering enhances S/N depends on the similarity between the Fourier components of the technical noise and those of the signal, with lower S/N enhancements observed for more similar Fourier decompositions of the signal and technical noise. We also examine the linearity of optimal filtered spectra in both time and intensity. This was accomplished by creating a synthetic spectrum for the species being studied (CH4, N2O, CO2 and H2O in ambient air) utilizing line positions and linewidths with an assumed Voigt profile from a commercial database (HITRAN). Agreement better than 0.036% in wavenumber and 1.64% in intensity (up to a 260-fold intensity ratio employed) was observed. Our results suggest that rapid ex post facto digital optimal filtering can be used to enhance S/N for routine trace gas detection.
Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.
McMinn, Brian R
2013-11-01
Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures the celite and organic flocculation method. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques and AdV recoveries were determined by comparing levels of virus in sample concentrates to that in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Beitone, C.; Balandraud, X.; Delpueyo, D.; Grédiac, M.
2017-01-01
This paper presents a post-processing technique for noisy temperature maps based on a gradient anisotropic diffusion (GAD) filter in the context of heat source reconstruction. The aim is to reconstruct heat source maps from temperature maps measured using infrared (IR) thermography. Synthetic temperature fields corrupted by added noise are first considered. The GAD filter, which relies on a diffusion process, is optimized to retrieve as well as possible a heat source concentration in a two-dimensional plate. The influence of the dimensions and the intensity of the heat source concentration are discussed. The results obtained are also compared with two other types of filters: averaging filter and Gaussian derivative filter. The second part of this study presents an application for experimental temperature maps measured with an IR camera. The results demonstrate the relevancy of the GAD filter in extracting heat sources from noisy temperature fields.
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.
Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R
2016-11-20
Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding on the input data cube, which provides higher flexibility on the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer pitch size thin films, referred to as pixelated filters, with three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.
An improved design method based on polyphase components for digital FIR filters
NASA Astrophysics Data System (ADS)
Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No
2017-11-01
This paper presents an efficient design of digital finite impulse response (FIR) filter, based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as mean square error between the actual response and ideal response in frequency domain using polyphase components of a prototype filter. To achieve more precise frequency response at some specified frequency, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as cuckoo search and modified cuckoo search algorithms. A comparative study of well-proved swarm optimisation, called particle swarm optimisation and artificial bee colony algorithm is made. The excellence of proposed method is evaluated using several important attributes of a filter. Comparative study evidences the excellence of proposed method for effective design of FIR filter.
Wavelet filtered shifted phase-encoded joint transform correlation for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new wavelet-filtered-based Shifted- phase-encoded Joint Transform Correlation (WPJTC) technique has been proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal method has been proposed by considering the discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination compared to alternate pattern recognition techniques such as phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale facial database and extended Yale facial database under different environments such as illumination variation, noise, and 3D changes in facial expressions. Test results show that the proposed WPJTC yields better performance compared to alternate JTC based face recognition techniques.
Optimum design of hybrid phase locked loops
NASA Technical Reports Server (NTRS)
Lee, P.; Yan, T.
1981-01-01
The design procedure of phase locked loops is described in which the analog loop filter is replaced by a digital computer. Specific design curves are given for the step and ramp input changes in phase. It is shown that the designed digital filter depends explicitly on the product of the sampling time and the noise bandwidth of the phase locked loop. This technique of optimization can be applied to the design of digital analog loops for other applications.
Low order H∞ optimal control for ACFA blended wing body aircraft
NASA Astrophysics Data System (ADS)
Haniš, T.; Kucera, V.; Hromčík, M.
2013-12-01
Advanced nonconvex nonsmooth optimization techniques for fixed-order H∞ robust control are proposed in this paper for design of flight control systems (FCS) with prescribed structure. Compared to classical techniques - tuning of and successive closures of particular single-input single-output (SISO) loops like dampers, attitude stabilizers, etc. - all loops are designed simultaneously by means of quite intuitive weighting filters selection. In contrast to standard optimization techniques, though (H2, H∞ optimization), the resulting controller respects the prescribed structure in terms of engaged channels and orders (e. g., proportional (P), proportional-integral (PI), and proportional-integralderivative (PID) controllers). In addition, robustness with regard to multimodel uncertainty is also addressed which is of most importance for aerospace applications as well. Such a way, robust controllers for various Mach numbers, altitudes, or mass cases can be obtained directly, based only on particular mathematical models for respective combinations of the §ight parameters.
Optimal CCD readout by digital correlated double sampling
NASA Astrophysics Data System (ADS)
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
Tracking with time-delayed data in multisensor systems
NASA Astrophysics Data System (ADS)
Hilton, Richard D.; Martin, David A.; Blair, William D.
1993-08-01
When techniques for target tracking are expanded to make use of multiple sensors in a multiplatform system, the possibility of time delayed data becomes a reality. When a discrete-time Kalman filter is applied and some of the data entering the filter are delayed, proper processing of these late data is a necessity for obtaining an optimal estimate of a target's state. If this problem is not given special care, the quality of the state estimates can be degraded relative to that quality provided by a single sensor. A negative-time update technique is developed using the criteria of minimum mean-square error (MMSE) under the constraint that only the results of the most recent update are saved. The performance of the MMSE technique is compared to that of the ad hoc approach employed in the Cooperative Engagement Capabilities (CEC) system for processing data from multiple platforms. It was discovered that the MMSE technique is a stable solution to the negative-time update problem, while the CEC technique was found to be less than desirable when used with filters designed for tracking highly maneuvering targets at relatively low data rates. The MMSE negative-time update technique was found to be a superior alternative to the existing CEC negative-time update technique.
Constrained optimization of image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1973-01-01
A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.
Quantification of trace metals in water using complexation and filter concentration.
Dolgin, Bella; Bulatov, Valery; Japarov, Julia; Elish, Eyal; Edri, Elad; Schechter, Israel
2010-06-15
Various metals undergo complexation with organic reagents, resulting in colored products. In practice, their molar absorptivities allow for quantification in the ppm range. However, a proper pre-concentration of the colored complex on paper filter lowers the quantification limit to the low ppb range. In this study, several pre-concentration techniques have been examined and compared: filtering the already complexed mixture, complexation on filter, and dipping of dye-covered filter in solution. The best quantification has been based on the ratio of filter reflectance at a certain wavelength to that at zero metal concentration. The studied complex formations (Ni ions with TAN and Cd ions with PAN) involve production of nanoparticle suspensions, which are associated with complicated kinetics. The kinetics of the complexation of Ni ions with TAN has been investigated and optimum timing could be found. Kinetic optimization in regard to some interferences has also been suggested.
Preliminary design of the spatial filters used in the multipass amplification system of TIL
NASA Astrophysics Data System (ADS)
Zhu, Qihua; Zhang, Xiao Min; Jing, Feng
1998-12-01
The spatial filters are used in Technique Integration Line, which has a multi-pass amplifier, not only to suppress parasitic high spatial frequency modes but also to provide places for inserting a light isolator and injecting the seed beam, and to relay image while the beam passes through the amplifiers several times. To fulfill these functions, the parameters of the spatial filters are optimized by calculations and analyzes with the consideration of avoiding the plasma blow-off effect and components demanding by ghost beam focus. The 'ghost beams' are calculated by ray tracing. A software was developed to evaluate the tolerance of the spatial filters and their components, and to align the whole system on computer simultaneously.
Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.
Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S
2016-01-01
Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range of 1.5-7.5 keV at the accelerator port using the generated neutron spectrum from a Li (p, n) Be reaction. A simulation study was performed to characterize the filter components and transmitted beam lines. The feature of the filtered beams is detailed in terms of optimal thickness of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity with low contamination from the accompanying thermal, fast neutrons and γ-rays. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effects of pupil filter patterns in line-scan focal modulation microscopy
NASA Astrophysics Data System (ADS)
Shen, Shuhao; Pant, Shilpa; Chen, Rui; Chen, Nanguang
2018-03-01
Line-scan focal modulation microscopy (LSFMM) is an emerging imaging technique that affords high imaging speed and good optical sectioning at the same time. We present a systematic investigation into optimal design of the pupil filter for LSFMM in an attempt to achieve the best performance in terms of spatial resolutions, optical sectioning, and modulation depth. Scalar diffraction theory was used to compute light propagation and distribution in the system and theoretical predictions on system performance, which were then compared with experimental results.
NASA Technical Reports Server (NTRS)
Halyo, N.
1976-01-01
A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant gain Kalman filter to process guidance information from the microwave landing system, and acceleration from body mounted accelerometer data. The filter outputs navigation data and wind velocity estimates which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiyama, H., E-mail: kiyama@meso.t.u-tokyo.ac.jp; Fujita, T.; Teraoka, S.
2014-06-30
Spin filtering with electrically tunable efficiency is achieved for electron tunneling between a quantum dot and spin-resolved quantum Hall edge states by locally gating the two-dimensional electron gas (2DEG) leads near the tunnel junction to the dot. The local gating can change the potential gradient in the 2DEG and consequently the edge state separation. We use this technique to electrically control the ratio of the dot–edge state tunnel coupling between opposite spins and finally increase spin filtering efficiency up to 91%, the highest ever reported, by optimizing the local gating.
The effect of spectral filters on visual search in stroke patients.
Beasley, Ian G; Davies, Leon N
2013-01-01
Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Air-Gapped Structures as Magnetic Elements for Use in Power Processing Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Ohri, A. K.
1977-01-01
Methodical approaches to the design of inductors for use in LC filters and dc-to-dc converters using air gapped magnetic structures are presented. Methods for the analysis and design of full wave rectifier LC filter circuits operating with the inductor current in both the continuous conduction and the discontinuous conduction modes are also described. In the continuous conduction mode, linear circuit analysis techniques are employed, while in the case of the discontinuous mode, the method of analysis requires computer solutions of the piecewise linear differential equations which describe the filter in the time domain. Procedures for designing filter inductors using air gapped cores are presented. The first procedure requires digital computation to yield a design which is optimized in the sense of minimum core volume and minimum number of turns. The second procedure does not yield an optimized design as defined above, but the design can be obtained by hand calculations or with a small calculator. The third procedure is based on the use of specially prepared magnetic core data and provides an easy way to quickly reach a workable design.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on sampling theorem and Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law that the positional error of a filter can be expressed as a function of fractional position and wave number is found. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given each volumetric image containing different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain of wave number ranges based on the Fourier spectrum analysis. Finally, a novel software is developed and series of validation experiments were carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm
NASA Astrophysics Data System (ADS)
Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong
2018-06-01
The abnormal frequencies of an atomic clock mainly include frequency jump and frequency drift jump. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection for abnormal frequency. In order to obtain an optimal state estimation, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions. The detection performance is degraded if anomalies affect the observation model or dynamic model. The idea of the adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, uses the residuals given by the prediction for building ‘an adaptive factor’ the prediction state covariance matrix is real-time corrected by the adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by the frequency jump simulation, the frequency drift jump simulation and the measured data of the atomic clock by using the chi-square test.
Functional Near Infrared Spectroscopy: Watching the Brain in Flight
NASA Technical Reports Server (NTRS)
Harrivel, Angela; Hearn, Tristan
2012-01-01
Functional Near Infrared Spectroscopy (fNIRS) is an emerging neurological sensing technique applicable to optimizing human performance in transportation operations, such as commercial aviation. Cognitive state can be determined via pattern classification of functional activations measured with fNIRS. Operational application calls for further development of algorithms and filters for dynamic artifact removal. The concept of using the frequency domain phase shift signal to tune a Kalman filter is introduced to improve the quality of fNIRS signals in realtime. Hemoglobin concentration and phase shift traces were simulated for four different types of motion artifact to demonstrate the filter. Unwanted signal was reduced by at least 43%, and the contrast of the filtered oxygenated hemoglobin signal was increased by more than 100% overall. This filtering method is a good candidate for qualifying fNIRS signals in real time without auxiliary sensors
Functional Near Infrared Spectroscopy: Watching the Brain in Flight
NASA Technical Reports Server (NTRS)
Harrivel, Angela; Hearn, Tristan A.
2012-01-01
Functional Near Infrared Spectroscopy (fNIRS) is an emerging neurological sensing technique applicable to optimizing human performance in transportation operations, such as commercial aviation. Cognitive state can be determined via pattern classification of functional activations measured with fNIRS. Operational application calls for further development of algorithms and filters for dynamic artifact removal. The concept of using the frequency domain phase shift signal to tune a Kalman filter is introduced to improve the quality of fNIRS signals in real-time. Hemoglobin concentration and phase shift traces were simulated for four different types of motion artifact to demonstrate the filter. Unwanted signal was reduced by at least 43%, and the contrast of the filtered oxygenated hemoglobin signal was increased by more than 100% overall. This filtering method is a good candidate for qualifying fNIRS signals in real time without auxiliary sensors.
Microfabrication of three-dimensional filters for liposome extrusion
NASA Astrophysics Data System (ADS)
Baldacchini, Tommaso; Nuñez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben
2015-03-01
Liposomes play a relevant role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidics chip for liposomes extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete solution for lab-on-a-chip preparation of liposomes from vesicles self-assembly to optical characterization.
Angular filter refractometry analysis using simulated annealing.
Angland, P; Haberberger, D; Ivancic, S T; Froula, D H
2017-10-01
Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed by a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation is based on the minimization of the χ 2 test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
Control system estimation and design for aerospace vehicles with time delay
NASA Technical Reports Server (NTRS)
Allgaier, G. R.; Williams, T. L.
1972-01-01
The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.
Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqui; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is non trivial in the sense of appropriately representing high order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high order moments without any questionable closure assumptions. Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz model as well as more realistic models of the means and atmosphere. A few relevant issues in this context are related to the necessary number of ensemble members to properly represent the error statistics and, the necessary modifications in the usual filter situations to allow for correct update of the ensemble members. The ensemble technique has also been applied to the problem of smoothing for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that results state estimates are worse than for their filter analogue. In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of feasibility of these techniques to large data assimilation problems will be given at the time of the conference.
Design of minimum multiplier fractional order differentiator based on lattice wave digital filter.
Barsainya, Richa; Rawat, Tarun Kumar; Kumar, Manjeet
2017-01-01
In this paper, a novel design of fractional order differentiator (FOD) based on lattice wave digital filter (LWDF) is proposed which requires minimum number of multiplier for its structural realization. Firstly, the FOD design problem is formulated as an optimization problem using the transfer function of lattice wave digital filter. Then, three optimization algorithms, namely, genetic algorithm (GA), particle swarm optimization (PSO) and cuckoo search algorithm (CSA) are applied to determine the optimal LWDF coefficients. The realization of FOD using LWD structure increases the design accuracy, as only N number of coefficients are to be optimized for Nth order FOD. Finally, two design examples of 3rd and 5th order lattice wave digital fractional order differentiator (LWDFOD) are demonstrated to justify the design accuracy. The performance analysis of the proposed design is carried out based on magnitude response, absolute magnitude error (dB), root mean square (RMS) magnitude error, arithmetic complexity, convergence profile and computation time. Simulation results are attained to show the comparison of the proposed LWDFOD with the published works and it is observed that an improvement of 29% is obtained in the proposed design. The proposed LWDFOD approximates the ideal FOD and surpasses the existing ones reasonably well in mid and high frequency range, thereby making the proposed LWDFOD a promising technique for the design of digital FODs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
NASA Technical Reports Server (NTRS)
Oakley, Celia M.; Barratt, Craig H.
1990-01-01
Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.
Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.
2002-01-01
Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
Inverse design of high-Q wave filters in two-dimensional phononic crystals by topology optimization.
Dong, Hao-Wen; Wang, Yue-Sheng; Zhang, Chuanzeng
2017-04-01
Topology optimization of a waveguide-cavity structure in phononic crystals for designing narrow band filters under the given operating frequencies is presented in this paper. We show that it is possible to obtain an ultra-high-Q filter by only optimizing the cavity topology without introducing any other coupling medium. The optimized cavity with highly symmetric resonance can be utilized as the multi-channel filter, raising filter and T-splitter. In addition, most optimized high-Q filters have the Fano resonances near the resonant frequencies. Furthermore, our filter optimization based on the waveguide and cavity, and our simple illustration of a computational approach to wave control in phononic crystals can be extended and applied to design other acoustic devices or even opto-mechanical devices. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine s performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filterbased estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine s performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2005-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine s performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy
Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song
2013-09-01
The spectrometric oil analysis(SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA based remaining useful life(RUL) prediction has an advantage of finding out the optimal maintenance strategy for machine system. Because the complexity of machine system, its health state degradation process can't be simply characterized by linear model, while particle filtering(PF) possesses obvious advantages over traditional Kalman filtering for dealing nonlinear and non-Gaussian system, the PF approach was applied to state forecasting by SOA, and the RUL prediction technique based on SOA and PF algorithm is proposed. In the prediction model, according to the estimating result of system's posterior probability, its prior probability distribution is realized, and the multi-step ahead prediction model based on PF algorithm is established. Finally, the practical SOA data of some engine was analyzed and forecasted by the above method, and the forecasting result was compared with that of traditional Kalman filtering method. The result fully shows the superiority and effectivity of the
Digital signal processing the Tevatron BPM signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cancelo, G.; James, E.; Wolbers, S.
2005-05-01
The Beam Position Monitor (TeV BPM) readout system at Fermilab's Tevatron has been updated and is currently being commissioned. The new BPMs use new analog and digital hardware to achieve better beam position measurement resolution. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton measurements. The signals provided by the two ends of the BPM pickups are processed by analog band-pass filters and sampled by 14-bit ADCs at 74.3MHz. A crucial part of this work has been the design of digital filters that process the signal. This paper describesmore » the digital processing and estimation techniques used to optimize the beam position measurement. The BPM electronics must operate in narrow-band and wide-band modes to enable measurements of closed-orbit and turn-by-turn positions. The filtering and timing conditions of the signals are tuned accordingly for the operational modes. The analysis and the optimized result for each mode are presented.« less
Determination of Hg concentration in gases by PIXE
NASA Astrophysics Data System (ADS)
Dutkiewicz, E.; van Kuijen, W. J. P.; Munnik, F.; Mutsaers, P. H. A.; Rokita, E.; de Voigt, M. J. A.
1992-05-01
A method for determination of the concentration of mercury in the gaseous phase is described. In the first step of the method a stable sulphur-mercury complex is formed. For this purpose sulphur is deposited on a filter and the investigated gas flows through the filter. Millipore filters and the deposition of sulphur from Na2S2O3 * 5H2O solution were found to be most suitable. The amount of Hg absorbed on the filter was determined by PIXE or by NAA in the second step of the method. An optimization of proton energy was performed in the PIXE analysis to obtain the maximal signal-to-background ratio. The detection limit of the method, expressed as the minimal amount of Hg which has to flow through the filter equals to 30 and 2 ng for PIXE and NAA techniques, respectively. Applications of the method are also described.
Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering
Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider
2010-01-01
Abstract The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
Filter Media Tests Under Simulated Martian Atmospheric Conditions
NASA Technical Reports Server (NTRS)
Agui, Juan H.
2016-01-01
Human exploration of Mars will require the optimal utilization of planetary resources. One of its abundant resources is the Martian atmosphere that can be harvested through filtration and chemical processes that purify and separate it into its gaseous and elemental constituents. Effective filtration needs to be part of the suite of resource utilization technologies. A unique testing platform is being used which provides the relevant operational and instrumental capabilities to test articles under the proper simulated Martian conditions. A series of tests were conducted to assess the performance of filter media. Light sheet imaging of the particle flow provided a means of detecting and quantifying particle concentrations to determine capturing efficiencies. The media's efficiency was also evaluated by gravimetric means through a by-layer filter media configuration. These tests will help to establish techniques and methods for measuring capturing efficiency and arrestance of conventional fibrous filter media. This paper will describe initial test results on different filter media.
Exploring an optimal wavelet-based filter for cryo-ET imaging.
Huang, Xinrui; Li, Sha; Gao, Song
2018-02-07
Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages-low dose and low image contrast-which result in high-resolution information being obscured by noise and image quality being degraded, and this causes errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, a soft-thresholding and spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing using real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge
2017-11-11
Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
Analysis of signal to noise enhancement using a highly selective modulation tracking filter
NASA Technical Reports Server (NTRS)
Haden, C. R.; Alworth, C. W.
1972-01-01
Experiments are reported which utilize photodielectric effects in semiconductor loaded superconducting resonant circuits for suppressing noise in RF communication systems. The superconducting tunable cavity acts as a narrow band tracking filter for detecting conventional RF signals. Analytical techniques were developed which lead to prediction of signal-to-noise improvements. Progress is reported in optimization of the experimental variables. These include improved Q, new semiconductors, improved optics, and simplification of the electronics. Information bearing signals were passed through the system, and noise was introduced into the computer model.
PAPR reduction in CO-OFDM systems using IPTS and modified clipping and filtering
NASA Astrophysics Data System (ADS)
Tong, Zheng-rong; Hu, Ya-nong; Zhang, Wei-hua
2018-05-01
Aiming at the problem of the peak to average power ratio ( PAPR) in coherent optical orthogonal frequency division multiplexing (CO-OFDM), a hybrid PAPR reduction technique of the CO-OFDM system by combining iterative partial transmit sequence (IPTS) scheme with modified clipping and filtering (MCF) is proposed. The simulation results show that at the complementary cumulative distribution function ( CCDF) of 10-4, the PAPR of proposed scheme is optimized by 1.86 dB and 2.13 dB compared with those of IPTS and CF schemes, respectively. Meanwhile, when the bit error rate ( BER) is 10-3, the optical signal to noise ratio ( OSNR) are optimized by 1.57 dB and 0.66 dB compared with those of CF and IPTS-CF schemes, respectively.
Saito, Masatoshi
2009-08-01
Dual-energy computed tomography (DECT) has the potential for measuring electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can be used to precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, another DECT technique involving a balanced filter method using a conventional x-ray tube is described. For the spectral optimization of DECT using balanced filters, the author calculates beam-hardening error and air kerma required to achieve a desired noise level in electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimized parameters were applied to cases with different phantom diameters ranging from 5 to 50 cm for the calculations. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over a range of diameters of the water phantom. The beam-hardening errors were 1/10 or less as compared to those obtained by conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and the tube-loading efficiency, the present DECT using balanced filters would be significantly more effective in measuring the electron density than the conventional DECT. Nevertheless, further developments of low-exposure imaging technology should be necessary as well as x-ray tubes with higher outputs to apply DECT coupled with the balanced filter method for clinical use.
The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics
NASA Technical Reports Server (NTRS)
Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo
1999-01-01
The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is non trivial in the sense of appropriately representing high order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are related to the necessary number of ensemble members to properly represent the error statistics and, the necessary modifications in the usual filter equations to allow for correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing for which similar questions apply. Ensemble smoother examples, however, seem to quite puzzling in that results of state estimate are worse than for their filter analogue (Evensen 1997). In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use Lorenz (1963) model to test and compare the behavior of a variety implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of feasibility of these techniques to large data assimilation problems will be given at the time of the conference.
Impulsive noise suppression in color images based on the geodesic digital paths
NASA Astrophysics Data System (ADS)
Smolka, Bogdan; Cyganek, Boguslaw
2015-02-01
In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.
LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging
NASA Astrophysics Data System (ADS)
De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace
2006-03-01
In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This will result in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate which possible advantages a non-linear, edge-preserving postfilter could have on lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive Gate Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catte filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
Backus, Sterling J [Erie, CO; Kapteyn, Henry C [Boulder, CO
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjust offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices.) The resulting estimate of the geomagnetic field and its secular variation was propogated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
Split-spectrum processing technique for SNR enhancement of ultrasonic guided wave.
Pedram, Seyed Kamran; Fateri, Sina; Gan, Lu; Haig, Alex; Thornicroft, Keith
2018-02-01
Ultrasonic guided wave (UGW) systems are broadly used in several branches of industry where the structural integrity is of concern. In those systems, signal interpretation can often be challenging due to the multi-modal and dispersive propagation of UGWs. This results in degradation of the signals in terms of signal-to-noise ratio (SNR) and spatial resolution. This paper employs the split-spectrum processing (SSP) technique in order to enhance the SNR and spatial resolution of UGW signals using the optimized filter bank parameters in real time scenario for pipe inspection. SSP technique has already been developed for other applications such as conventional ultrasonic testing for SNR enhancement. In this work, an investigation is provided to clarify the sensitivity of SSP performance to the filter bank parameter values for UGWs such as processing bandwidth, filter bandwidth, filter separation and a number of filters. As a result, the optimum values are estimated to significantly improve the SNR and spatial resolution of UGWs. The proposed method is synthetically and experimentally compared with conventional approaches employing different SSP recombination algorithms. The Polarity Thresholding (PT) and PT with Minimization (PTM) algorithms were found to be the best recombination algorithms. They substantially improved the SNR up to 36.9dB and 38.9dB respectively. The outcome of the work presented in this paper paves the way to enhance the reliability of UGW inspections. Copyright © 2017 Elsevier B.V. All rights reserved.
Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters
NASA Astrophysics Data System (ADS)
Barnier, G.; Dunham, E. M.
2016-12-01
Earthquake-induced tsunamis cause dramatic damages along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined though optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective
Qian, Xiaoning; Dougherty, Edward R.
2017-01-01
The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class ofjoint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context otlinear Gaussian models. PMID:28824268
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
An efficient method for removing point sources from full-sky radio interferometric maps
NASA Astrophysics Data System (ADS)
Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard
2017-12-01
A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
A nowcasting technique based on application of the particle filter blending algorithm
NASA Astrophysics Data System (ADS)
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. The first bilateral filter was applied in the quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is proved to be superior to the traditional forecasting methods and it can be used to enhance the ability of nowcasting in operational weather forecasts.
Shuttle filter study. Volume 1: Characterization and optimization of filtration devices
NASA Technical Reports Server (NTRS)
1974-01-01
A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plane for space shuttle fluid systems.
NASA Astrophysics Data System (ADS)
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely-used detection technique is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional case. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
Robotic Vision, Tray-Picking System Design Using Multiple, Optical Matched Filters
NASA Astrophysics Data System (ADS)
Leib, Kenneth G.; Mendelsohn, Jay C.; Grieve, Philip G.
1986-10-01
The optical correlator is applied to a robotic vision, tray-picking problem. Complex matched filters (MFs) are designed to provide sufficient optical memory for accepting any orientation of the desired part, and a multiple holographic lens (MHL) is used to increase the memory for continuous coverage. It is shown that with appropriate thresholding a small part can be selected using optical matched filters. A number of criteria are presented for optimizing the vision system. Two of the part-filled trays that Mendelsohn used are considered in this paper which is the analog (optical) expansion of his paper. Our view in this paper is that of the optical correlator as a cueing device for subsequent, finer vision techniques.
A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows
NASA Astrophysics Data System (ADS)
Meldi, M.; Poux, A.
2017-10-01
A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observation in the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman Filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which can be quantified in an augmentation of 10%- 15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these Data Assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computation cost and generating solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). Source encoding technique is a way to dramatically reduce the cost of FWI but subject to fix-spread acquisition setup requirement and slow convergence for the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied on gradients generally gives non-geological inversion results, and could also introduce artifacts. In this work, we propose to address both the efficiency and ill-posedness of FWI by a geological constrained mini-batch gradient optimization method. The mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of entire shots for each iteration. By jointly applying the structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. Stylized Marmousi model is used to show the performance of the proposed method on realistic synthetic model.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio
2017-01-01
Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137
NASA Astrophysics Data System (ADS)
Chiari, M.; Yubero, E.; Calzolai, G.; Lucarelli, F.; Crespo, J.; Galindo, N.; Nicolás, J. F.; Giannoni, M.; Nava, S.
2018-02-01
Within the framework of research projects focusing on the sampling and analysis of airborne particulate matter, Particle Induced X-ray Emission (PIXE) and Energy Dispersive X-ray Fluorescence (ED-XRF) techniques are routinely used in many laboratories throughout the world to determine the elemental concentration of the particulate matter samples. In this work an inter-laboratory comparison of the results obtained from analysing several samples (collected on both Teflon and quartz fibre filters) using both techniques is presented. The samples were analysed by PIXE (in Florence, at the 3 MV Tandetron accelerator of INFN-LABEC laboratory) and by XRF (in Elche, using the ARL Quant'X EDXRF spectrometer with specific conditions optimized for specific groups of elements). The results from the two sets of measurements are in good agreement for all the analysed samples, thus validating the use of the ARL Quant'X EDXRF spectrometer and the selected measurement protocol for the analysis of aerosol samples. Moreover, thanks to the comparison of PIXE and XRF results on Teflon and quartz fibre filters, possible self-absorption effects due to the penetration of the aerosol particles inside the quartz fibre-filters were quantified.
Blackout detection as a multiobjective optimization problem.
Chaudhary, A M; Trachtenberg, E A
1991-01-01
We study new fast computational procedures for a pilot blackout (total loss of vision) detection in real time. Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariancies of signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures in terms of the number of computer operations required for their implementation is investigated. Various classes of such prediction procedures are investigated, analyzed and trade offs are established. Also we investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection and the closeness of all procedures to a certain bench mark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above mentioned criteria. Their main advantage is the fast computational processing.
Active field control (AFC) -electro-acoustic enhancement system using acoustical feedback control
NASA Astrophysics Data System (ADS)
Miyazaki, Hideo; Watanabe, Takayuki; Kishinaga, Shinji; Kawakami, Fukushi
2003-10-01
AFC is an electro-acoustic enhancement system using FIR filters to optimize auditory impressions, such as liveness, loudness, and spaciousness. This system has been under development at Yamaha Corporation for more than 15 years and has been installed in approximately 50 venues in Japan to date. AFC utilizes feedback control techniques for recreation of reverberation from the physical reverberation of the room. In order to prevent coloration problems caused by a closed loop condition, two types of time-varying control techniques are implemented in the AFC system to ensure smooth loop gain and a sufficient margin in frequency characteristics to prevent instability. Those are: (a) EMR (electric microphone rotator) -smoothing frequency responses between microphones and speakers by changing the combinations of inputs and outputs periodically; (b) fluctuating-FIR -smoothing frequency responses of FIR filters and preventing coloration problems caused by fixed FIR filters, by moving each FIR tap periodically on time axis with a different phase and time period. In this paper, these techniques are summarized. A block diagram of AFC using new equipment named AFC1, which has been developed at Yamaha Corporation and released recently in the US, is also presented.
Kalman Filters for Time Delay of Arrival-Based Source Localization
NASA Astrophysics Data System (ADS)
Klee, Ulrich; Gehrig, Tobias; McDonough, John
2006-12-01
In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
NASA Astrophysics Data System (ADS)
Zhang, B.; Kumar, S.; Yan, L.-S.; Willner, A. E.
2007-12-01
We demonstrate experimentally >3 dB extinction ratio improvement at the output of SOA-based delayed-interference signal converter (DISC) using optical off-centered filtering. Through careful modeling of the carrier and the phase dynamics, we explain in detail the origin of sub-pulses in the wavelength converted output, with an emphasis on the time-resolved frequency chirping of the output signal. Through our simulations we conclude that the sub-pulses and the main-pulses are oppositely chirped, which is also verified experimentally by analyzing the output with a chirp form analyzer. We propose and demonstrate an optical off-center filtering technique which effectively suppresses these sub-pulses. The effects of filter detuning and phase bias adjustment in the delayed-interferometer are experimentally characterized and optimized, leading to a >3 dB extinction ratio enhancement of the output signal.
A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network
Qu, Jianfeng; Chai, Yi; Yang, Simon X.
2009-01-01
A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. This system is to simultaneously acquire accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip windows average method. The optimal system noise variance of the filter is obtained by using the experiments data. The Kalman filter theory on how to acquire MOS gas sensors data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors. PMID:22399946
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
Experimentally determined spectral optimization for dedicated breast computed tomography.
Prionas, Nicolas D; Huang, Shih-Ying; Boone, John M
2011-02-01
The current study aimed to experimentally identify the optimal technique factors (x-ray tube potential and added filtration material/thickness) to maximize soft-tissue contrast, microcalcification contrast, and iodine contrast enhancement using cadaveric breast specimens imaged with dedicated breast computed tomography (bCT). Secondarily, the study aimed to evaluate the accuracy of phantom materials as tissue surrogates and to characterize the change in accuracy with varying bCT technique factors. A cadaveric breast specimen was acquired under appropriate approval and scanned using a prototype bCT scanner. Inserted into the specimen were cylindrical inserts of polyethylene, water, iodine contrast medium (iodixanol, 2.5 mg/ml), and calcium hydroxyapatite (100 mg/ml). Six x-ray tube potentials (50, 60, 70, 80, 90, and 100 kVp) and three different filters (0.2 mm Cu, 1.5 mm Al, and 0.2 mm Sn) were tested. For each set of technique factors, the intensity (linear attenuation coefficient) and noise were measured within six regions of interest (ROIs): Glandular tissue, adipose tissue, polyethylene, water, iodine contrast medium, and calcium hydroxyapatite. Dose-normalized contrast to noise ratio (CNRD) was measured for pairwise comparisons among the six ROIs. Regression models were used to estimate the effect of tube potential and added filtration on intensity, noise, and CNRD. Iodine contrast enhancement was maximized using 60 kVp and 0.2 mm Cu. Microcalcification contrast and soft-tissue contrast were maximized at 60 kVp. The 0.2 mm Cu filter achieved significantly higher CNRD for iodine contrast enhancement than the other two filters (p = 0.01), but microcalcification contrast and soft-tissue contrast were similar using the copper and aluminum filters. The average percent difference in linear attenuation coefficient, across all tube potentials, for polyethylene versus adipose tissue was 1.8%, 1.7%, and 1.3% for 0.2 mm Cu, 1.5 mm Al, and 0.2 mm Sn, respectively. For water versus glandular tissue, the average percent difference was 2.7%, 3.9%, and 4.2% for the three filter types. Contrast-enhanced bCT, using injected iodine contrast medium, may be optimized for maximum contrast of enhancing lesions at 60 kVp with 0.2 mm Cu filtration. Soft-tissue contrast and microcalcification contrast may also benefit from lower tube potentials (60 kVp). The linear attenuation coefficients of water and polyethylene slightly overestimate the values of their corresponding tissues, but the reported differences may serve as guidance for dosimetry and quality assurance using tissue equivalent phantoms.
Optimized suppression of coherent noise from seismic data using the Karhunen-Loève transform
NASA Astrophysics Data System (ADS)
Montagne, Raúl; Vasconcelos, Giovani L.
2006-07-01
Signals obtained in land seismic surveys are usually contaminated with coherent noise, among which the ground roll (Rayleigh surface waves) is of major concern for it can severely degrade the quality of the information obtained from the seismic record. This paper presents an optimized filter based on the Karhunen-Loève transform for processing seismic images contaminated with ground roll. In this method, the contaminated region of the seismic record, to be processed by the filter, is selected in such way as to correspond to the maximum of a properly defined coherence index. The main advantages of the method are that the ground roll is suppressed with negligible distortion of the remnant reflection signals and that the filtering procedure can be automated. The image processing technique described in this study should also be relevant for other applications where coherent structures embedded in a complex spatiotemporal pattern need to be identified in a more refined way. In particular, it is argued that the method is appropriate for processing optical coherence tomography images whose quality is often degraded by coherent noise (speckle).
Delineation and geometric modeling of road networks
NASA Astrophysics Data System (ADS)
Poullis, Charalambos; You, Suya
In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of gaussian-based filters are applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them to their polygonal representations.
Wu, Huafeng; Mei, Xiaojun; Chen, Xinqiang; Li, Junjun; Wang, Jun; Mohapatra, Prasant
2018-07-01
Maritime search and rescue (MSR) play a significant role in Safety of Life at Sea (SOLAS). However, it suffers from scenarios that the measurement information is inaccurate due to wave shadow effect when utilizing wireless Sensor Network (WSN) technology in MSR. In this paper, we develop a Novel Cooperative Localization Algorithm (NCLA) in MSR by using an enhanced particle filter method to reduce measurement errors on observation model caused by wave shadow effect. First, we take into account the mobility of nodes at sea to develop a motion model-Lagrangian model. Furthermore, we introduce both state model and observation model to constitute a system model for particle filter (PF). To address the impact of the wave shadow effect on the observation model, we develop an optimal parameter derived by Kullback-Leibler divergence (KLD) to mitigate the error. After the optimal parameter is acquired, an improved likelihood function is presented. Finally, the estimated position is acquired. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Neural network river forecasting through baseflow separation and binary-coded swarm optimization
NASA Astrophysics Data System (ADS)
Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie
2015-10-01
The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are preferred ways to implement such approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit separately the baseflow and excess flow components as produced by a digital filter, and reconstruct the total flow by adding these two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MM against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANN, are used to drastically reduce the large computational times required to perform the experiments. The results show that there is no evidence that MM outperform global GM for predicting the total flow. In addition, the baseflow produced by the MM largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters maximizing overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the designing of digital infinite impulse response (IIR) filter. TLBO based filter modelling is applied to calculate the parameters of unknown plant in simulations. Unlike other heuristic search algorithms, TLBO algorithm is an algorithm-specific parameter-less algorithm. In this paper big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to filter design for comparison. Unknown filter parameters are considered as a vector to be optimized by these algorithms. MATLAB programming is used for implementation of proposed algorithms. Experimental results show that the TLBO is more accurate to estimate the filter parameters than the BB-BC optimization algorithm and has faster convergence rate when compared to PSO algorithm. TLBO is used where accuracy is more essential than the convergence speed.
Optimization of In-Cylinder Pressure Filter for Engine Research
2017-06-01
ARL-TR-8034 ● JUN 2017 US Army Research Laboratory Optimization of In-Cylinder Pressure Filter for Engine Research by Kenneth...Laboratory Optimization of In-Cylinder Pressure Filter for Engine Research by Kenneth S Kim, Michael T Szedlmayer, Kurt M Kruger, and Chol-Bum M...
Wiener filtering of the COBE Differential Microwave Radiometer data
NASA Technical Reports Server (NTRS)
Bunn, Emory F.; Fisher, Karl B.; Hoffman, Yehuda; Lahav, Ofer; Silk, Joseph; Zaroubi, Saleem
1994-01-01
We derive an optimal linear filter to suppress the noise from the cosmic background explorer satellite (COBE) Differential Microwave Radiometer (DMR) sky maps for a given power spectrum. We then apply the filter to the first-year DMR data, after removing pixels within 20 deg of the Galactic plane from the data. We are able to identify particular hot and cold spots in the filtered maps at a level 2 to 3 times the noise level. We use the formalism of constrained realizations of Gaussian random fields to assess the uncertainty in the filtered sky maps. In addition to improving the signal-to-noise ratio of the map as a whole, these techniques allow us to recover some information about the cosmic microwave background anisotropy in the missing Galactic plane region. From these maps we are able to determine which hot and cold spots in the data are statistically significant, and which may have been produced by noise. In addition, the filtered maps can be used for comparison with other experiments on similar angular scales.
GPU Accelerated Vector Median Filter
NASA Technical Reports Server (NTRS)
Aras, Rifat; Shen, Yuzhong
2011-01-01
Noise reduction is an important step for most image processing tasks. For three channel color images, a widely used technique is vector median filter in which color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n(sup 2) vectors has to be compared with other n(sup 2) - 1 vectors in distances. General purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work. NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering. which has to the best of our knowledge never been done before. The performance of GPU accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes, Initial findings of the study showed 100x improvement of performance of vector median filter implementation on GPUs over CPU implementations and further speed-up is expected after more extensive optimizations of the GPU algorithm .
Generalized Optimal-State-Constraint Extended Kalman Filter (OSC-EKF)
2017-02-01
ARL-TR-7948• FEB 2017 US Army Research Laboratory GeneralizedOptimal-State-Constraint ExtendedKalman Filter (OSC-EKF) by James M Maley, Kevin...originator. ARL-TR-7948• FEB 2017 US Army Research Laboratory GeneralizedOptimal-State-Constraint ExtendedKalman Filter (OSC-EKF) by James M Maley Weapons and...
A Low Cost Structurally Optimized Design for Diverse Filter Types
Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar
2016-01-01
A wide range of image processing applications deploys two dimensional (2D)-filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi scale decomposition and compression etc. All of these tasks require multiple type of 2D-filters simultaneously to acquire the desired results. The resource hungry conventional approach is not a viable option for implementing these computationally intensive 2D-filters especially in a resource constraint environment. Thus it calls for optimized solutions. Mostly the optimization of these filters are based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability only for a specific filter type. These narrow scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D-filters for effectually reducing its computational cost along with an added advantage of versatility for supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently its multipliers count. The proposed framework at the same time empowers this composite filter structure with additional capabilities of realizing all of its Ψ-symmetry based subtypes and also its special asymmetric filters case. The two-fold optimized framework thus reduces filter computational cost up to 75% as compared to the conventional approach as well as its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image processing applications especially in a constraint environment. PMID:27832133
IDENTIFYING IONIZED REGIONS IN NOISY REDSHIFTED 21 cm DATA SETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malloy, Matthew; Lidz, Adam, E-mail: mattma@sas.upenn.edu
One of the most promising approaches for studying reionization is to use the redshifted 21 cm line. Early generations of redshifted 21 cm surveys will not, however, have the sensitivity to make detailed maps of the reionization process, and will instead focus on statistical measurements. Here, we show that it may nonetheless be possible to directly identify ionized regions in upcoming data sets by applying suitable filters to the noisy data. The locations of prominent minima in the filtered data correspond well with the positions of ionized regions. In particular, we corrupt semi-numeric simulations of the redshifted 21 cm signalmore » during reionization with thermal noise at the level expected for a 500 antenna tile version of the Murchison Widefield Array (MWA), and mimic the degrading effects of foreground cleaning. Using a matched filter technique, we find that the MWA should be able to directly identify ionized regions despite the large thermal noise. In a plausible fiducial model in which {approx}20% of the volume of the universe is neutral at z {approx} 7, we find that a 500-tile MWA may directly identify as many as {approx}150 ionized regions in a 6 MHz portion of its survey volume and roughly determine the size of each of these regions. This may, in turn, allow interesting multi-wavelength follow-up observations, comparing galaxy properties inside and outside of ionized regions. We discuss how the optimal configuration of radio antenna tiles for detecting ionized regions with a matched filter technique differs from the optimal design for measuring power spectra. These considerations have potentially important implications for the design of future redshifted 21 cm surveys.« less
Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N
2017-10-17
The limit-of-detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing noise of optical background. Efficiently reducing optical background noise in systems with spectrally nonuniform background requires complex optimization of an emission filter-the main element of spectral filtration. Here, we introduce a filter-optimization method, which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) emission spectrum of the analyte, (iii) emission spectrum of the optical background, and (iv) transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable-the transmittance spectrum of the filter-which is optimized numerically by maximizing SNR. Maximizing SNR provides an accurate way of filter optimization, while a previously used approach based on maximizing a signal-to-background ratio (SBR) is the approximation that can lead to much poorer LOD specifically in detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new and improving existing fluorescence-detection systems aiming at ultimately low LOD.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Kirubarajan, T.
2017-05-01
Signal processing techniques are prevalent in a wide range of fields: control, target tracking, telecommunications, robotics, fault detection and diagnosis, and even stock market analysis, to name a few. Although first introduced in the 1950s, the most popular method used for signal processing and state estimation remains the Kalman filter (KF). The KF offers an optimal solution to the estimation problem under strict assumptions. Since this time, a number of other estimation strategies and filters were introduced to overcome robustness issues, such as the smooth variable structure filter (SVSF). In this paper, properties of the SVSF are explored in an effort to detect and diagnosis faults in an electromechanical system. The results are compared with the KF method, and future work is discussed.
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
Siddiqui, Hasib; Bouman, Charles A
2007-03-01
Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
Nonlinear Estimation With Sparse Temporal Measurements
2016-09-01
Kalman filter , the extended Kalman filter (EKF) and unscented Kalman filter (UKF) are commonly used in practical application. The Kalman filter is an...optimal estimator for linear systems; the EKF and UKF are sub-optimal approximations of the Kalman filter . The EKF uses a first-order Taylor series...propagated covariance is compared for similarity with a Monte Carlo propagation. The similarity of the covariance matrices is shown to predict filter
Bioaerosol DNA Extraction Technique from Air Filters Collected from Marine and Freshwater Locations
NASA Astrophysics Data System (ADS)
Beckwith, M.; Crandall, S. G.; Barnes, A.; Paytan, A.
2015-12-01
Bioaerosols are composed of microorganisms suspended in air. Among these organisms include bacteria, fungi, virus, and protists. Microbes introduced into the atmosphere can drift, primarily by wind, into natural environments different from their point of origin. Although bioaerosols can impact atmospheric dynamics as well as the ecology and biogeochemistry of terrestrial systems, very little is known about the composition of bioaerosols collected from marine and freshwater environments. The first step to determine composition of airborne microbes is to successfully extract environmental DNA from air filters. We asked 1) can DNA be extracted from quartz (SiO2) air filters? and 2) how can we optimize the DNA yield for downstream metagenomic sequencing? Aerosol filters were collected and archived on a weekly basis from aquatic sites (USA, Bermuda, Israel) over the course of 10 years. We successfully extracted DNA from a subsample of ~ 20 filters. We modified a DNA extraction protocol (Qiagen) by adding a beadbeating step to mechanically shear cell walls in order to optimize our DNA product. We quantified our DNA yield using a spectrophotometer (Nanodrop 1000). Results indicate that DNA can indeed be extracted from quartz filters. The additional beadbeating step helped increase our yield - up to twice as much DNA product was obtained compared to when this step was omitted. Moreover, bioaerosol DNA content does vary across time. For instance, the DNA extracted from filters from Lake Tahoe, USA collected near the end of June decreased from 9.9 ng/μL in 2007 to 3.8 ng/μL in 2008. Further next-generation sequencing analysis of our extracted DNA will be performed to determine the composition of these microbes. We will also model the meteorological and chemical factors that are good predictors for microbial composition for our samples over time and space.
Optimal noise reduction in 3D reconstructions of single particles using a volume-normalized filter
Sindelar, Charles V.; Grigorieff, Nikolaus
2012-01-01
The high noise level found in single-particle electron cryo-microscopy (cryo-EM) image data presents a special challenge for three-dimensional (3D) reconstruction of the imaged molecules. The spectral signal-to-noise ratio (SSNR) and related Fourier shell correlation (FSC) functions are commonly used to assess and mitigate the noise-generated error in the reconstruction. Calculation of the SSNR and FSC usually includes the noise in the solvent region surrounding the particle and therefore does not accurately reflect the signal in the particle density itself. Here we show that the SSNR in a reconstructed 3D particle map is linearly proportional to the fractional volume occupied by the particle. Using this relationship, we devise a novel filter (the “single-particle Wiener filter”) to minimize the error in a reconstructed particle map, if the particle volume is known. Moreover, we show how to approximate this filter even when the volume of the particle is not known, by optimizing the signal within a representative interior region of the particle. We show that the new filter improves on previously proposed error-reduction schemes, including the conventional Wiener filter as well as figure-of-merit weighting, and quantify the relationship between all of these methods by theoretical analysis as well as numeric evaluation of both simulated and experimentally collected data. The single-particle Wiener filter is applicable across a broad range of existing 3D reconstruction techniques, but is particularly well suited to the Fourier inversion method, leading to an efficient and accurate implementation. PMID:22613568
Antolak, John A.
2013-01-01
A total skin electron (TSE) floor technique is presented for treating patients who are unable to safely stand for extended durations. A customized flattening filter is used to eliminate the need for field junctioning, improve field uniformity, and reduce setup time. The flattening filter is constructed from copper and polycarbonate, fits into the linac's accessory slot, and is optimized to extend the useful height and width of the beam such that no field junctions are needed during treatment. A TSE floor with flattening filter (TSE FF) treatment course consisted of six patient positions: three supine and three prone. For all treatment fields, electron beam energy was 6 MeV; collimator settings were an x of 30 cm, y of 40 cm, and θcoll of 0°; and a 0.4 cm thick polycarbonate spoiler was positioned in front of the patient. Percent depth dose (PDD) and photon contamination for the TSE FF technique were compared with our standard technique, which is similar to the Stanford technique. Beam profiles were measured using radiochromic film, and dose uniformity was verified using an anthropomorphic radiological phantom. The TSE FF technique met field uniformity requirements specified by the American Association of Physicists in Medicine Task Group 30. TSE FF R80 ranges from 4 to 4.8 mm. TSE FF photon contamination was ~ 3%. Anthropomorphic radiological phantom verification demonstrated that dose to the entire skin surface was expected to be within about ±15% of the prescription dose, except for the perineum, scalp vertex, top of shoulder, and soles of the feet. The TSE floor technique presented herein eliminates field junctioning, is suitable for patients who cannot safely stand during treatment, and provides comparable quality and uniformity to the Stanford technique. PACS number: 87 PMID:24036864
Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian
2018-03-20
The phase slope method which estimates height through fringe pattern frequency and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally they both extract the phase information by filtering the signal in frequency domain after Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), trying to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically. Therefore, the position of the filter pass-band is determined. The width of the filter window is optimized with the simulation to balance the elimination of the noise and the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal noise ratio (SNR), is low. The proposed method also shows the potential of improving the immunity to the environmental noise by adapting the signal to acquire the optimal results through designing an adaptive filter once the signal SNR can be estimated accurately.
Raman lidar characterization using a reference lamp
NASA Astrophysics Data System (ADS)
Landulfo, Eduardo; da Costa, Renata F.; Rodrigues, Patricia F.; da Silva Lopes, Fábio J.
2014-10-01
The determination of the amount of water vapor in the atmosphere using lidar is a calibration dependent technique. Different collocated instruments are used for this purpose, like radiossoundings and microwave radiometers. When there are no collocated instruments available, an independente lamp mapping calibration technique can be used. Aiming to stabilish an independ technique for the calibration of the six channels Nd-YAG Raman lidar system located at the Center for Lasers and Applications (CLA), S˜ao Paulo, Brazil, an optical characterization of the system was first performed using a reference tungsten lamp. This characterization is useful to identify any possible distortions in the interference filters, telescope mirror and stray light contamination. In this paper we show three lamp mapping caracterizations (01/16/2014, 01/22/2014, 04/09/2014). The first day is used to demostrate how the tecnique is useful to detect stray light, the second one how it is sensible to the position of the filters and the third one demostrates a well optimized optical system.
A consideration on physical tuning for acoustical coloration in recording studio
NASA Astrophysics Data System (ADS)
Shimizu, Yasushi
2003-04-01
Coloration due to particular architectural shapes and dimension or less surface absorption has been mentioned as an acoustical defect in recording studio. Generally interference among early reflected sounds arriving within 10 ms in delay after the direct sound produces coloration by comb filter effect over mid- and high-frequency sounds. In addition, less absorbed room resonance modes also have been well known as a major component for coloration in low-frequency sounds. Small size in dimension with recording studio, however, creates difficulty in characterization associated with wave acoustics behavior, that make acoustical optimization more difficult than that of concert hall acoustics. There still remains difficulty in evaluating amount of coloration as well as predicting its acoustical characteristics in acoustical modeling and in other words acoustical tuning technique during construction is regarded as important to optimize acoustics appropriately to the function of recording studio. This paper presents a example of coloration by comb filtering effect and less damped room modes in typical post-processing recording studio. And acoustical design and measurement technique will be presented for adjusting timbre due to coloration based on psycho-acoustical performance with binaural hearing and room resonance control with line array resonator adjusted to the particular room modes considered.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is the extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and to eliminate the lag associated with filtering. The estimation technique uses as the dynamical process the six degree equations-of-motion resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the ""system'' state elements. The ""parameter'' state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, etc. deviations from referenced values. Propulsion parameter state elements have been included not as options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the systems dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack
2014-01-01
Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The cWIA ability to establish a mechanistic link between coronary haemodynamics measurements and the underlying pathophysiology has been widely demonstrated. Moreover, the prognostic value of a cWIA-derived metric has been recently proved. However, the clinical application of cWIA has been hindered due to the strong dependence on the practitioners, mainly ascribable to the cWIA-derived indices sensitivity to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter, typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. Therefore, we propose to apply an adaptive S-G algorithm that automatically selects pointwise the optimal filter parameters. The newly proposed algorithm accuracy is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results with ≤ 10% error for all the metrics through all the levels of noise tested. Therefore, the newly proposed method makes cWIA fully automatic and independent from the practitioners, opening the possibility to multi-centre trials.
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission we demonstrate an estimation scheme using a discrete time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress including a bias estimate into the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent to the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise between the planets and speckles improves. Having established a purely focal plane based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feedforward a time update to the focal plane estimate to improve robustness to time varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.
Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F
1980-01-01
Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
Initial experience using the rigid forceps technique to remove wall-embedded IVC filters.
Avery, Allan; Stephens, Maximilian; Redmond, Kendal; Harper, John
2015-06-01
Severely tilted and embedded inferior vena cava (IVC) filters remain the most challenging IVC filters to remove. Heavy endothelialisation over the filter hook can prevent engagement with standard snare and cone recovery techniques. The rigid forceps technique offers a way to dissect the endothelial cap and reliably retrieve severely tilted and embedded filters. By developing this technique, failed IVC retrieval rates can be significantly reduced and the optimum safety profile offered by temporary filters can be achieved. We present our initial experience with the rigid forceps technique described by Stavropoulos et al. for removing wall-embedded IVC filters. We retrospectively reviewed the medical imaging and patient records of all patients who underwent a rigid forceps filter removal over a 22-month period across two tertiary referral institutions. The rigid forceps technique had a success rate of 85% (11/13) for IVC filter removals. All filters in the series showed evidence of filter tilt and embedding of the filter hook into the IVC wall. Average filter tilt from the Z-axis was 19 degrees (range 8-56). Filters observed in the case study were either Bard G2X (n = 6) or Cook Celect (n = 7). Average filter dwell time was 421 days (range 47-1053). There were no major complications observed. The rigid forceps technique can be readily emulated and is a safe and effective technique to remove severely tilted and embedded IVC filters. The development of this technique across both institutions has increased the successful filter removal rate, with perceived benefits to the safety profile of our IVC filter programme. © 2015 The Royal Australian and New Zealand College of Radiologists.
Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach
Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz
2017-01-01
The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-03-01
The work was aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic model of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin
NASA Astrophysics Data System (ADS)
Otiefy, R. A. H.; Negm, H. M.
2010-12-01
The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three main different control strategies, a linear quadratic Gaussian (LQG) which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE), an optimal static output feedback (SOF), and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and norm of Kalman filter estimator gains (NKFEG) respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach.
Girrbach, Fabian; Hol, Jeroen D; Bellusci, Giovanni; Diehl, Moritz
2017-05-19
The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem.
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. The traditional empirical Gabor filters and shallow iris encoding ways are incapable of dealing with such complex variations in iris imaging including illumination, aging, deformation, and device variations. Thereby, an adaptive Gabor filter selection strategy and deep learning architecture are presented. We first employ particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels for fitting the most informative filtering bands, and then capture complex pattern from the optimal Gabor filtered coefficients by a trained deep belief network. A succession of comparative experiments validate that our optimal Gabor filters may produce more distinctive Gabor coefficients and our iris deep representations be more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
Saurí, Josep; Bermel, Wolfgang; Parella, Teodor; Thomas Williamson, R; Martin, Gary E
2018-03-13
1,n-ADEQUATE is a powerful NMR technique for elucidating the structure of proton-deficient small molecules that can help establish the carbon skeleton of a given molecule by providing long-range three-bond 13 C─ 13 C correlations. Care must be taken when using the experiment to identify the simultaneous presence of one-bond 13 C─ 13 C correlations that are not filtered out, unlike the HMBC experiment that has a low-pass J-filter to filter 1 J CH responses out. Dual-optimized, inverted 1 J CC 1,n-ADEQUATE is an improved variant of the experiment that affords broadband inversion of direct responses, obviating the need to take additional steps to identify these correlations. Even though ADEQUATE experiments can now be acquired in a reasonable amount of experimental time if a cryogenic probe is available, low sensitivity is still the main impediment limiting the application of this elegant experiment. Here, we wish to report a further refinement that incorporates real-time bilinear rotation decoupling-based homodecoupling methodology into the dual-optimized, inverted 1 J CC 1,n-ADEQUATE pulse sequence. Improved sensitivity and resolution are achieved by collapsing homonuclear proton-proton couplings from the observed multiplets for most spin systems. The application of the method is illustrated with several model compounds. Copyright © 2018 John Wiley & Sons, Ltd.
Acquisition and visualization techniques for narrow spectral color imaging.
Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón
2013-06-01
This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on nonunderwater scenes recorded under controlled conditions. To this end three multilayer narrow bandpass filters were employed, which transmit at 440, 456, and 470 nm bluish wavelengths, respectively. Since the differences among the images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed, using which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques employed in the field of electroencephalogram/event-related potential noise cancellation or extraction. A new approach is proposed in the form of controlled search space to stabilize the randomness of swarm intelligence techniques especially for the EEG signal. Swarm-based algorithms such as Particles Swarm Optimization, Artificial Bee Colony, and Cuckoo Optimization Algorithm with their variants are implemented to design optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise canceler with traditional algorithms such as least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, due to their physiological importance in various EEG studies. Average computational time and shape measures of evolutionary techniques are observed 8.21E-01 sec and 1.73E-01, respectively. Though, traditional algorithms take negligible time consumption, but are unable to offer good shape preservation of ERP, noticed as average computational time and shape measure difference, 1.41E-02 sec and 2.60E+00, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Hossain, S; Syzek, E
Purpose: To quantitatively investigate the surface dose deposited in patients imaged with a kV on-board-imager mounted on a radiotherapy machine using different clinical imaging techniques and filters. Methods: A high sensitivity photon diode is used to measure the surface dose on central-axis and at an off-axis-point which is mounted on the top of a phantom setup. The dose is measured for different imaging techniques that include: AP-Pelvis, AP-Head, AP-Abdomen, AP-Thorax, and Extremity. The dose measurements from these imaging techniques are combined with various filtering techniques that include: no-filter (open-field), half-fan bowtie (HF), full-fan bowtie (FF) and Cu-plate filters. The relativemore » surface dose for different imaging and filtering techniques is evaluated quantiatively by the ratio of the dose relative to the Cu-plate filter. Results: The lowest surface dose is deposited with the Cu-plate filter. The highest surface dose deposited results from open fields without filter and it is nearly a factor of 8–30 larger than the corresponding imaging technique with the Cu-plate filter. The AP-Abdomen technique delivers the largest surface dose that is nearly 2.7 times larger than the AP-Head technique. The smallest surface dose is obtained from the Extremity imaging technique. Imaging with bowtie filters decreases the surface dose by nearly 33% in comparison with the open field. The surface doses deposited with the HF or FF-bowtie filters are within few percentages. Image-quality of the radiographic images obtained from the different filtering techniques is similar because the Cu-plate eliminates low-energy photons. The HF- and FF-bowtie filters generate intensity-gradients in the radiographs which affects image-quality in the different imaging technique. Conclusion: Surface dose from kV-imaging decreases significantly with the Cu-plate and bowtie-filters compared to imaging without filters using open-field beams. The use of Cu-plate filter does not affect image-quality and may be used as the default in the different imaging techniques.« less
Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...
2014-10-23
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, formore » a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, formore » a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.« less
Li, Sui-Xian
2018-05-07
Previous research has shown that the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set to be the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performed filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters and substantial overlaps of the transmittance curves of the adjacent filters. Therefore, the best-performed filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction is often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
Dolfus, Claire; Piton, Nicolas; Toure, Emmanuel
2015-01-01
Circulating tumor cells (CTCs) arise from primary or secondary tumors and enter the bloodstream by active or passive intravasation. Given the low number of CTCs, enrichment is necessary for detection. Filtration methods are based on selection of CTCs by size using a filter with 6.5 to 8 µm pores. After coloration, collected CTCs are evaluated according to morphological criteria. Immunophenotyping and fluorescence in situ hybridization techniques may be used. Selected CTCs can also be cultivated in vitro to provide more material. Analysis of genomic mutations is difficult because it requires adapted techniques due to limited DNA materials. Filtration-selected CTCs have shown prognostic value in many studies but multicentric validating trials are mandatory to strengthen this assessment. Other clinical applications are promising such as follow-up, therapy response prediction and diagnosis. Microfluidic emerging systems could optimize filtration-selected CTCs by increasing selection accuracy. PMID:26543334
Pratt, C; Shilton, A
2010-01-01
Active filtration, where effluent is passed through a reactive substrate such as steel slag, offers a simple and cost-effective option for removing phosphorus (P) from effluent. This work summarises a series of studies that focused on the world's only full-scale active slag filter operated through to exhaustion. The filter achieved 75% P-removal during its first 5 years, reaching a retention capacity of 1.23 g P/kg slag but then its performance sharply declined. Scanning electron microscopy, X-ray diffraction, X-ray fluorescence, and chemical extractions revealed that P sequestration was primarily achieved via adsorption onto iron (Fe) oxyhydroxides on the slag's surface. It was concluded that batch equilibrium tests, whose use has been repeatedly proposed in the literature, cannot be used as an accurate predictor of filter adsorption capacity because Fe oxyhydroxides form via chemical weathering in the field, and laboratory tests don't account for this. Research into how chemical conditions affect slag's P retention capacity demonstrated that near-neutral pH and high redox are optimal for Fe oxyhydroxide stability and overall filter performance. However, as Fe oxyhydroxide sites fill up, removal capacity becomes exhausted. Attempts to regenerate P removal efficiency using physical techniques proved ineffective contrary to dogma in the literature. Based on the newly-developed understanding of the mechanisms of P removal, chemical regeneration techniques were investigated and were shown to strip large quantities of P from filter adsorption sites leading to a regenerated P removal efficiency. This raises the prospect of developing a breakthrough technology that can repeatedly remove and recover P from effluent.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1999-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses tha_ may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the I-5 micrometer wavelength range. This work follows on previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used ir spectral imaging applications will be presented.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1998-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.
Progress in navigation filter estimate fusion and its application to spacecraft rendezvous
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1994-01-01
A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, P.; Zbijewski, W.; Gang, G. J.
2011-10-15
Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequencymore » content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M {approx} 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of {approx}65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results--e.g., {approx}0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to {approx}0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.« less
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
Analysis of filter tuning techniques for sequential orbit determination
NASA Technical Reports Server (NTRS)
Lee, T.; Yee, C.; Oza, D.
1995-01-01
This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in realtime using a sequential filter. As the result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a realtime automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used. This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.
Grayscale Optical Correlator Workbench
NASA Technical Reports Server (NTRS)
Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin
2006-01-01
Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
NASA Astrophysics Data System (ADS)
Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi
2018-02-01
The Nyquist spectral shaping techniques facilitate a promising solution to enhance spectral efficiency (SE) and further reduce the cost-per-bit in high-speed wavelength-division multiplexing (WDM) transmission systems. Hypothetically, any Nyquist WDM signals with arbitrary shapes can be generated by the use of the digital signal processing (DSP) based electrical filters (E-filter). Nonetheless, in actual 100G/ 200G coherent systems, the performance as well as DSP complexity are increasingly restricted by cost and power consumption. Henceforward it is indispensable to optimize DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigated the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats, including 16 quadrature amplitude modulation (QAM) or 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements of the digital-to-analog converter (DAC) resolution and transmitter E-filter bandwidth. The impact of spectral pre-emphasis has been predominantly enhanced via the proposed interleaved DAC architecture by at least 4%, and hence reducing the required optical signal to noise ratio (OSNR) at a bit error rate (BER) of 10-3 by over 0.45 dB at a channel spacing of 1.05 symbol rate and an optimized roll-off factor of 0.1. Furthermore, the requirements of sampling rate for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty cycle error between sub-DACs upon the quality of the generated signals for the interleaved DAC structure has been analyzed.
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believed that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
Initial Ares I Bending Filter Design
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark
2007-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.
Assessing mass change trends in GRACE models
NASA Astrophysics Data System (ADS)
Siemes, C.; Liu, X.; Ditmar, P.; Revtova, E.; Slobbe, C.; Klees, R.; Zhao, Q.
2009-04-01
The DEOS Mass Transport model, release 1 (DMT-1), has been recently presented to the scientific community. The model is based on GRACE data and consists of sets of spherical harmonic coefficients to degree 120, which are estimated once per month. Currently, the DMT-1 model covers the time span from Feb. 2003 to Dec. 2006. The high spatial resolution of the model could be achieved by applying a statistically optimal Wiener-type filter, which is superior to standard filtering techniques. The optimal Wiener-type filter is a regularization-type filter which makes full use of the variance/covariance matrices of the sets of spherical harmonic coefficients. It can be shown that applying this filter is equivalent to introducing an additional set of observations: Each set of spherical harmonic coefficients is assumed to be zero. The variance/covariance matrix of this information is chosen according to the signal contained within the sets of spherical harmonic coefficients, expressed in terms of equivalent water layer thickness in the spatial domain, with respect to its variations in time. It will be demonstrated that DMT-1 provides a much better localization and more realistic amplitudes than alternative filtered models. In particular, we will consider a lower maximum degree of the spherical harmonic expansion (e.g. 70), as well as standard filters like an isotropic Gaussian filter. For the sake of a fair comparison, we will use the same GRACE observations as well as the same method for the inversion of the observations to obtain the alternative filtered models. For the inversion method, we will choose the three-point range combination approach. Thus, we will compare four different models: (1) GRACE solution with maximum degree 120, filtered by optimal Wiener-type filter (the DMT-1 model) (2) GRACE solution with maximum degree 120, filtered by standard filter (3) GRACE solution with maximum degree 70, filtered by optimal Wiener-type filter (4) GRACE solution with maximum degree 70, filtered by standard filter Within the comparison, we will focus on the amplitude of long-term mass change signals with respect to spatial resolution. The challenge for the recovery of such signals from GRACE based solutions results from the fact that the solutions must be filtered and that filtering of always smoothes not only noise, but also to some extend signal. Since the observation density is much higher near the poles than at the equator, which is due to the orbits of the GRACE satellites, we expect that the magnitude of estimated mass change signals in polar areas is less underestimated than in equatorial areas. For this reason will investigate trends at locations in equatorial areas as well as trends at locations in polar areas. In particular, we will investigate Lake Victoria, Lake Malawi and Lake Tanganyika, which are all located in Eastern Africa, near to the equator. Furthermore, we will show trends of two locations at the South-East coast of Greenland, Abbot Ice-Shelf and Marie-Byrd-Land in Antarctica For validation, we use water level variations in Lake Victoria (69000 km2), Lake Malawi (29000 km2) and Lake Tanganyika (33000 km2) as ground truth. The water level, which is measured by satellite radar altimetry, decreases at a rate of approximately 47 cm in Lake Victoria, 42 cm in Lake Malawi and 30 cm in Lake Tanganyika over the period from Feb. 2003 to Dec. 2006. Because all three lakes are located in tropical and subtropical clime, the mass change signal will consist of large seasonal variations in addition to the trend component we are interested in. However, also the amplitude of estimated seasonal variations can be used as an indicator of the quality of the models within the comparison. Since the lakes' areas are at the edge of the spatial resolution GRACE data can provide, they are a good example of the advantages of high-resolution mass change models like DMT-1.
Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; McLauchlan, Lifford
2010-08-01
In this paper, diabetic retinopathy is chosen for a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
Silicon Micromachining for Terahertz Component Development
NASA Technical Reports Server (NTRS)
Chattopadhyay, Goutam; Reck, Theodore J.; Jung-Kubiak, Cecile; Siles, Jose V.; Lee, Choonsup; Lin, Robert; Mehdi, Imran
2013-01-01
Waveguide component technology at terahertz frequencies has come of age in recent years. Essential components such as ortho-mode transducers (OMT), quadrature hybrids, filters, and others for high performance system development were either impossible to build or too difficult to fabricate with traditional machining techniques. With micromachining of silicon wafers coated with sputtered gold it is now possible to fabricate and test these waveguide components. Using a highly optimized Deep Reactive Ion Etching (DRIE) process, we are now able to fabricate silicon micromachined waveguide structures working beyond 1 THz. In this paper, we describe in detail our approach of design, fabrication, and measurement of silicon micromachined waveguide components and report the results of a 1 THz canonical E-plane filter.
NASA Astrophysics Data System (ADS)
Woellmer, Wolfgang; Meder, Tom; Jappe, Uta; Gross, Gerd; Riethdorf, Sabine; Riethdorf, Lutz; Kuhler-Obbarius, Christina; Loening, Thomas
1996-01-01
For the investigation of laser plume for the existence of HPV DNA fragments, which possibly occur during laser treatment of virus infected tissue, human papillomas and condylomas were treated in vitro with the CO2-laser. For the sampling of the laser plume a new method for the trapping of the material was developed by use of water-soluble gelatine filters. These samples were analyzed with the polymerase chain reaction (PCR) technique, which was optimized in regard of the gelatine filters and the specific primers. Positive PCR results for HPV DNA fragments up to the size of a complete oncogene were obtained and are discussed regarding infectiousity.
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path between multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
Oßmann, Barbara E; Sarau, George; Schmitt, Sebastian W; Holtmannspötter, Heinrich; Christiansen, Silke H; Dicke, Wilhelm
2017-06-01
When analysing microplastics in food, due to toxicological reasons it is important to achieve clear identification of particles down to a size of at least 1 μm. One reliable, optical analytical technique allowing this is micro-Raman spectroscopy. After isolation of particles via filtration, analysis is typically performed directly on the filter surface. In order to obtain high qualitative Raman spectra, the material of the membrane filters should not show any interference in terms of background and Raman signals during spectrum acquisition. To facilitate the usage of automatic particle detection, membrane filters should also show specific optical properties. In this work, beside eight different, commercially available membrane filters, three newly designed metal-coated polycarbonate membrane filters were tested to fulfil these requirements. We found that aluminium-coated polycarbonate membrane filters had ideal characteristics as a substrate for micro-Raman spectroscopy. Its spectrum shows no or minimal interference with particle spectra, depending on the laser wavelength. Furthermore, automatic particle detection can be applied when analysing the filter surface under dark-field illumination. With this new membrane filter, analytics free of interference of microplastics down to a size of 1 μm becomes possible. Thus, an important size class of these contaminants can now be visualized and spectrally identified. Graphical abstract A newly developed aluminium coated polycarbonate membrane filter enables automatic particle detection and generation of high qualitative Raman spectra allowing identification of small microplastics.
Digital Filtering of Three-Dimensional Lower Extremity Kinematics: an Assessment
Sinclair, Jonathan; Taylor, Paul John; Hobbs, Sarah Jane
2013-01-01
Errors in kinematic data are referred to as noise and are an undesirable portion of any waveform. Noise is typically removed using a low-pass filter which removes the high frequency components of the signal. The selection of an optimal frequency cut-off is very important when processing kinematic information and a number of techniques exists for the determination of an optimal frequency cut-off. Despite the importance of cut-off frequency to the efficacy of kinematic analyses there is currently a paucity of research examining the influence of different cut-off frequencies on the resultant 3-D kinematic waveforms and discrete parameters. Twenty participants ran at 4.0 m•s−1 as lower extremity kinematics in the sagittal, coronal and transverse planes were measured using an eight camera motion analysis system. The data were filtered at a range of cut-off frequencies and the discrete kinematic parameters were examined using repeated measures ANOVA’s. The similarity between the raw and filtered waveforms were examined using intra-class correlations. The results show that the cut-off frequency has a significant influence on the discrete kinematic measure across displacement and derivative information in all three planes of rotation. Furthermore, it was also revealed that as the cut-off frequency decreased the attenuation of the kinematic waveforms became more pronounced, particularly in the coronal and transverse planes at the second derivative. In conclusion, this investigation provides new information regarding the influence of digital filtering on lower extremity kinematics and re-emphasizes the importance of selecting the correct cut-off frequency. PMID:24511338
Kalman Filter Tracking on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-12-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques including Cellular Automata or returning to Hough Transform. The most common track finding techniques in use today are however those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust and are exactly those being used today for the design of the tracking system for HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with Kalman Filter can achieve large speedup both with Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are timeconsuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
Energy from vascular plant wastewater treatment systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolverton, B.C.; McDonald, R.C.
1981-04-01
Water hyacinth (Eichhornia crassipes) duckweed (Spirodela sp. and Lemna sp.), water pennywort (Hydrocotyle ranunculoides), and kudzu (Pueraria lobata) were anaerobically fermented using an anaerobic filter technique that reduced the total digestion time from 90 d to an average of 23 d and produced 0.14 to 0.22 m/sup 3/ CH/sub 4//kg (dry weight) (2.3 to 3.6 ft/sup 3//lb) from mature filters for the 3 aquatic species. Kudzu required an average digestion time of 33 d and produced an average of 0.21 m/sup 3/ CH/sub 4//kg (dry weight) (3.4 ft/sup 3//lb). The anaerobic filter provided a large surface area for the anaerobicmore » bacteria to establish and maintain an optimal balance of facultative, acid-forming, and methane-producing bacteria. Consequently the efficiency of the process was greatly improved over prior batch fermentations.« less
Research on a Lamb Wave and Particle Filter-Based On-Line Crack Propagation Prognosis Method.
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Cai, Jian; Yang, Weibo
2016-03-03
Prognostics and health management techniques have drawn widespread attention due to their ability to facilitate maintenance activities based on need. On-line prognosis of fatigue crack propagation can offer information for optimizing operation and maintenance strategies in real-time. This paper proposes a Lamb wave-particle filter (LW-PF)-based method for on-line prognosis of fatigue crack propagation which takes advantages of the possibility of on-line monitoring to evaluate the actual crack length and uses a particle filter to deal with the crack evolution and monitoring uncertainties. The piezoelectric transducers (PZTs)-based active Lamb wave method is adopted for on-line crack monitoring. The state space model relating to crack propagation is established by the data-driven and finite element methods. Fatigue experiments performed on hole-edge crack specimens have validated the advantages of the proposed method.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria
2017-04-01
To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines-channel combination followed by (i) Homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed and processing time of pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. Optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. 2 J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.
Optimal frequency domain textural edge detection filter
NASA Technical Reports Server (NTRS)
Townsend, J. K.; Shanmugan, K. S.; Frost, V. S.
1985-01-01
An optimal frequency domain textural edge detection filter is developed and its performance evaluated. For the given model and filter bandwidth, the filter maximizes the amount of output image energy placed within a specified resolution interval centered on the textural edge. Filter derivation is based on relating textural edge detection to tonal edge detection via the complex low-pass equivalent representation of narrowband bandpass signals and systems. The filter is specified in terms of the prolate spheroidal wave functions translated in frequency. Performance is evaluated using the asymptotic approximation version of the filter. This evaluation demonstrates satisfactory filter performance for ideal and nonideal textures. In addition, the filter can be adjusted to detect textural edges in noisy images at the expense of edge resolution.
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Effective Network Management via System-Wide Coordination and Optimization
2010-08-01
Srinath Sridhar, Matthew Streeter, Jimeng Sun, Michael Tschantz, Rangarajan Vasudevan, Vijay Vasude- van, Gaurav Veda, Shobha Venkataraman, Justin... Sharma and Byers [150] suggest the use of Bloom filters. While minimizing redundant measurements is a common high-level theme between cSamp and their...NSDI, 2004. [150] M. R. Sharma and J. W. Byers. Scalable Coordination Techniques for Distributed Network Monitoring. In Proc. of PAM, 2005. [151] S
Regularized Filters for L1-Norm-Based Common Spatial Patterns.
Wang, Haixian; Li, Xiaomeng
2016-02-01
The l1 -norm-based common spatial patterns (CSP-L1) approach is a recently developed technique for optimizing spatial filters in the field of electroencephalogram (EEG)-based brain computer interfaces. The l1 -norm-based expression of dispersion in CSP-L1 alleviates the negative impact of outliers. In this paper, we further improve the robustness of CSP-L1 by taking into account noise which does not necessarily have as large a deviation as with outliers. The noise modelling is formulated by using the waveform length of the EEG time course. With the noise modelling, we then regularize the objective function of CSP-L1, in which the l1-norm is used in two folds: one is the dispersion and the other is the waveform length. An iterative algorithm is designed to resolve the optimization problem of the regularized objective function. A toy illustration and the experiments of classification on real EEG data sets show the effectiveness of the proposed method.
Inferior vena cava filter retrievals, standard and novel techniques.
Kuyumcu, Gokhan; Walker, T Gregory
2016-12-01
The placement of an inferior vena cava (IVC) filter is a well-established management strategy for patients with venous thromboembolism (VTE) disease in whom anticoagulant therapy is either contraindicated or has failed. IVC filters may also be placed for VTE prophylaxis in certain circumstances. There has been a tremendous growth in placement of retrievable IVC filters in the past decade yet the majority of the devices are not removed. Unretrieved IVC filters have several well-known complications that increase in frequency as the filter dwell time increases. These complications include caval wall penetration, filter fracture or migration, caval thrombosis and an increased risk for lower extremity deep vein thrombosis (DVT). Difficulty is sometimes encountered when attempting to retrieve indwelling filters, mainly because of either abnormal filter positioning or endothelization of filter components that are in contact with the IVC wall, thereby causing the filter to become embedded. The length of time that a filter remains indwelling also impacts the retrieval rate, as increased dwell times are associated with more difficult retrievals. Several techniques for difficult retrievals have been described in the medical literature. These techniques range from modifications of standard retrieval techniques to much more complex interventions. Complications related to complex retrievals are more common than those associated with standard retrieval techniques. The risks of complex filter retrievals should be compared with those of life-long anticoagulation associated with an unretrieved filter, and should be individualized. This article summarizes current techniques for IVC filter retrieval from a clinical point of view, with an emphasis on advanced retrieval techniques.
Inferior vena cava filter retrievals, standard and novel techniques
Walker, T. Gregory
2016-01-01
The placement of an inferior vena cava (IVC) filter is a well-established management strategy for patients with venous thromboembolism (VTE) disease in whom anticoagulant therapy is either contraindicated or has failed. IVC filters may also be placed for VTE prophylaxis in certain circumstances. There has been a tremendous growth in placement of retrievable IVC filters in the past decade yet the majority of the devices are not removed. Unretrieved IVC filters have several well-known complications that increase in frequency as the filter dwell time increases. These complications include caval wall penetration, filter fracture or migration, caval thrombosis and an increased risk for lower extremity deep vein thrombosis (DVT). Difficulty is sometimes encountered when attempting to retrieve indwelling filters, mainly because of either abnormal filter positioning or endothelization of filter components that are in contact with the IVC wall, thereby causing the filter to become embedded. The length of time that a filter remains indwelling also impacts the retrieval rate, as increased dwell times are associated with more difficult retrievals. Several techniques for difficult retrievals have been described in the medical literature. These techniques range from modifications of standard retrieval techniques to much more complex interventions. Complications related to complex retrievals are more common than those associated with standard retrieval techniques. The risks of complex filter retrievals should be compared with those of life-long anticoagulation associated with an unretrieved filter, and should be individualized. This article summarizes current techniques for IVC filter retrieval from a clinical point of view, with an emphasis on advanced retrieval techniques. PMID:28123984
Klandima, Somphan; Kruatrachue, Anchalee; Wongtapradit, Lawan; Nithipanya, Narong; Ratanaprakarn, Warangkana
2014-06-01
The problem of image quality in a large number of upper airway obstructed patients is the superimposition of the airway over the bone of the spine on the AP view. This problem was resolved by increasing KVp to high KVp technique and adding extra radiographic filters (copper filter) to reduce the sharpness of the bone and increase the clarity of the airway. However, this raises a concern that patients might be receiving an unnecessarily higher dose of radiation, as well as the effectiveness of the invented filter compared to the traditional filter. To evaluate the level of radiation dose that patients receive with the use of multi-layer filter compared to non-filter and to evaluate the image quality of the upper airways between using the radiographic filter (multi-layer filter) and the traditional filter (copperfilter). The attenuation curve of both filter materials was first identified. Then, both the filters were tested with Alderson Rando phantom to determine the appropriate exposure. Using the method described, a new type of filter called the multi-layer filter for imaging patients was developed. A randomized control trial was then performed to compare the effectiveness of the newly developed multi-layer filter to the copper filter. The research was conducted in patients with upper airway obstruction treated at Queen Sirikit National Institute of Child Health from October 2006 to September 2007. A total of 132 patients were divided into two groups. The experimental group used high kVp technique with multi-layer filter, while the control group used copper filter. A comparison of film interpretation between the multi-layer filter and the copper filter was made by a number of radiologists who were blinded to both to the technique and type of filter used. Patients had less radiation from undergoing the kVp technique with copper filter and multi-layer filter compared to the conventional technique, where no filter is used. Patients received approximately 65.5% less radiation dose using high kVp technique with multi-layer filter compared to the conventional technique, and 25.9% less than using the traditional copper filter 45% of the radiologists who participated in this study reported that the high kVp technique with multi-layer filter was better for diagnosing stenosis, or narrowing of the upper airways. 33% reported that, both techniques were equal, while 22% reported that the traditional copper filter allowed for better details of airway obstruction. These findings showed that the multi-layered filter was comparable to the copper filter in terms of film interpretation. Using the multi-layer filter resulted in patients receiving a lower dose of radiation, as well as similar film interpretation when compared to the traditional copper filter.
NASA Astrophysics Data System (ADS)
Bhardwaj, Kaushal; Patra, Swarnajyoti
2018-04-01
Inclusion of spatial information along with spectral features play a significant role in classification of remote sensing images. Attribute profiles have already proved their ability to represent spatial information. In order to incorporate proper spatial information, multiple attributes are required and for each attribute large profiles need to be constructed by varying the filter parameter values within a wide range. Thus, the constructed profiles that represent spectral-spatial information of an hyperspectral image have huge dimension which leads to Hughes phenomenon and increases computational burden. To mitigate these problems, this work presents an unsupervised feature selection technique that selects a subset of filtered image from the constructed high dimensional multi-attribute profile which are sufficiently informative to discriminate well among classes. In this regard the proposed technique exploits genetic algorithms (GAs). The fitness function of GAs are defined in an unsupervised way with the help of mutual information. The effectiveness of the proposed technique is assessed using one-against-all support vector machine classifier. The experiments conducted on three hyperspectral data sets show the robustness of the proposed method in terms of computation time and classification accuracy.
NASA Astrophysics Data System (ADS)
Tian, Yunfeng; Shen, Zheng-Kang
2016-02-01
We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., common-mode component (CMC)) that deviate from zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors—distances between neighboring sites and their correlations of long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Comparing to the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatters for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: less intervention and applicable to a dense network of any spatial extent. Our method can also be used to detect CMC irrespective of its origins (i.e., tectonic or nontectonic), if such signals are of particular interests for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify stations cluster that share the common regional transient characteristics.
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine s health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs: Filtering functions are used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
A Novel Technique for Inferior Vena Cava Filter Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Edward William, E-mail: ed.johnston@doctors.org.uk; Rowe, Luke Michael Morgan; Brookes, Jocelyn
Inferior vena cava (IVC) filters are used to protect against pulmonary embolism in high-risk patients. Whilst the insertion of retrievable IVC filters is gaining popularity, a proportion of such devices cannot be removed using standard techniques. We describe a novel approach for IVC filter removal that involves snaring the filter superiorly along with the use of flexible forceps or laser devices to dissect the filter struts from the caval wall. This technique has used to successfully treat three patients without complications in whom standard techniques failed.
Study of information transfer optimization for communication satellites
NASA Technical Reports Server (NTRS)
Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.
1973-01-01
The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.
Sim, K S; Kiani, M A; Nia, M E; Tso, C P
2014-01-01
A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to present better result when compared with two existing techniques: nearest neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N bit words requires N2 complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation
NASA Astrophysics Data System (ADS)
Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee
2018-05-01
In this article, a framework is presented for the joint optimization of the analog transmit and receive filter with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter $B$ to the rate of the analog-to-digital converter $f_s$ to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm $B\\leq f_s$. To this end, at the receiver, we allow for a higher pre-filter bandwidth $B>f_s$ and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filter are jointly optimized with respect to the Bayesian Cram\\'{e}r-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte-Carlo simulations of a likelihood-based estimator.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
Application of filtering techniques in preprocessing magnetic data
NASA Astrophysics Data System (ADS)
Liu, Haijun; Yi, Yongping; Yang, Hongxia; Hu, Guochuang; Liu, Guoming
2010-08-01
High precision magnetic exploration is a popular geophysical technique for its simplicity and its effectiveness. The explanation in high precision magnetic exploration is always a difficulty because of the existence of noise and disturbance factors, so it is necessary to find an effective preprocessing method to get rid of the affection of interference factors before further processing. The common way to do this work is by filtering. There are many kinds of filtering methods. In this paper we introduced in detail three popular kinds of filtering techniques including regularized filtering technique, sliding averages filtering technique, compensation smoothing filtering technique. Then we designed the work flow of filtering program based on these techniques and realized it with the help of DELPHI. To check it we applied it to preprocess magnetic data of a certain place in China. Comparing the initial contour map with the filtered contour map, we can see clearly the perfect effect our program. The contour map processed by our program is very smooth and the high frequency parts of data are disappeared. After filtering, we separated useful signals and noisy signals, minor anomaly and major anomaly, local anomaly and regional anomaly. It made us easily to focus on the useful information. Our program can be used to preprocess magnetic data. The results showed the effectiveness of our program.
Paramo, Erica; Palmero, Susana; Heras, Aranzazu; Colina, Alvaro
2018-02-01
A novel methodology to prepare sensors based on carbon nanostructures electrodes modified by metal nanoparticles is proposed. As a proof of concept, a novel bismuth nanoparticle/carbon nanofiber (Bi-NPs/CNF) electrode and a carbon nanotube (CNT)/gold nanoparticle (Au-NPs) have been developed. Bi-NPs/CNF films were prepared by 1) filtering a dispersion of CNFs on a polytetrafluorethylene (PTFE) filter, and 2) filtering a dispersion of Bi-NPs chemically synthesized through this CNF/PTFE film. Next the electrode is prepared by sticking the Bi-NPs/CNF/PTFE film on a PET substrate. In this work, Bi-NPs/CNF ratio was optimized using a Cd 2+ solution as a probe sample. The Cd anodic stripping peak intensity, registered by differential pulse anodic stripping voltammetry (DPASV), is selected as target signal. The voltammograms registered for Cd stripping with this Bi-NPs/CNF/PTFE electrode showed well-defined and highly reproducible electrochemical. The optimized Bi-NPs/CNF electrode exhibits a Cd 2+ detection limit of 53.57 ppb. To demonstrate the utility and versatility of this methodology, single walled carbon nanotubes (SWCNTs) and gold nanoparticles (Au-NPs) were selected to prepare a completely different electrode. Thus, the new Au-NPs/SWCNT/PTFE electrode was tested with a multiresponse technique. In this case, UV/Vis absorption spectroelectrochemistry experiments were carried out for studying dopamine, demonstrating the good performance of the Au-NPs/SWCNT electrode developed. Copyright © 2017 Elsevier B.V. All rights reserved.
Robust Controller for Turbulent and Convective Boundary Layers
2006-08-01
filter and an optimal regulator. The Kalman filter equation and the optimal regulator equation corresponding to the state-space equations, (2.20), are...separate steady-state algebraic Riccati equations. The Kalman filter is used here as a state observer rather than as an estimator since no noises are...2001) which will not be repeated here. For robustness, in the design, the Kalman filter input matrix G has been set equal to the control input
NASA Astrophysics Data System (ADS)
Kim, Jae Wook
2013-05-01
This paper proposes a novel systematic approach for the parallelization of pentadiagonal compact finite-difference schemes and filters based on domain decomposition. The proposed approach allows a pentadiagonal banded matrix system to be split into quasi-disjoint subsystems by using a linear-algebraic transformation technique. As a result the inversion of pentadiagonal matrices can be implemented within each subdomain in an independent manner subject to a conventional halo-exchange process. The proposed matrix transformation leads to new subdomain boundary (SB) compact schemes and filters that require three halo terms to exchange with neighboring subdomains. The internode communication overhead in the present approach is equivalent to that of standard explicit schemes and filters based on seven-point discretization stencils. The new SB compact schemes and filters demand additional arithmetic operations compared to the original serial ones. However, it is shown that the additional cost becomes sufficiently low by choosing optimal sizes of their discretization stencils. Compared to earlier published results, the proposed SB compact schemes and filters successfully reduce parallelization artifacts arising from subdomain boundaries to a level sufficiently negligible for sophisticated aeroacoustic simulations without degrading parallel efficiency. The overall performance and parallel efficiency of the proposed approach are demonstrated by stringent benchmark tests.
Detection of movement intention from single-trial movement-related cortical potentials
NASA Astrophysics Data System (ADS)
Niazi, Imran Khan; Jiang, Ning; Tiberghien, Olivier; Feldbæk Nielsen, Jørgen; Dremstrup, Kim; Farina, Dario
2011-10-01
Detection of movement intention from neural signals combined with assistive technologies may be used for effective neurofeedback in rehabilitation. In order to promote plasticity, a causal relation between intended actions (detected for example from the EEG) and the corresponding feedback should be established. This requires reliable detection of motor intentions. In this study, we propose a method to detect movements from EEG with limited latency. In a self-paced asynchronous BCI paradigm, the initial negative phase of the movement-related cortical potentials (MRCPs), extracted from multi-channel scalp EEG was used to detect motor execution/imagination in healthy subjects and stroke patients. For MRCP detection, it was demonstrated that a new optimized spatial filtering technique led to better accuracy than a large Laplacian spatial filter and common spatial pattern. With the optimized spatial filter, the true positive rate (TPR) for detection of movement execution in healthy subjects (n = 15) was 82.5 ± 7.8%, with latency of -66.6 ± 121 ms. Although TPR decreased with motor imagination in healthy subject (n = 10, 64.5 ± 5.33%) and with attempted movements in stroke patients (n = 5, 55.01 ± 12.01%), the results are promising for the application of this approach to provide patient-driven real-time neurofeedback.
NASA Technical Reports Server (NTRS)
Downie, John D.
1995-01-01
Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.
New estimation architecture for multisensor data fusion
NASA Astrophysics Data System (ADS)
Covino, Joseph M.; Griffiths, Barry E.
1991-07-01
This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.
NASA Technical Reports Server (NTRS)
Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.
1999-01-01
This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transformations) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.
A novel approach for dimension reduction of microarray.
Aziz, Rabia; Verma, C K; Srivastava, Namita
2017-12-01
This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent component analysis (ICA) and Artificial Bee Colony (ABC) called ICA+ABC, to select informative genes based on a Naïve Bayes (NB) algorithm. An important trait of this technique is the optimization of ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of extraction approach, to reduce the size of data and wrapper approach, to optimize the reduced feature vectors. This hybrid search technique is facilitated by evaluating the performance of ICA+ABC on six standard gene expression datasets of classification. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from recently published Minimum Redundancy Maximum Relevance (mRMR) +ABC algorithm for NB classifier. Also to check the performance that how ICA+ABC works as feature selection with NB classifier, compared the combination of ICA with popular filter techniques and with other similar bio inspired algorithm such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The result shows that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector, that significantly improve the classification accuracy of NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Comparisons of linear and nonlinear pyramid schemes for signal and image processing
NASA Astrophysics Data System (ADS)
Morales, Aldo W.; Ko, Sung-Jea
1997-04-01
Linear filters banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also being used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. In the conversion from weights of order statistics (OS) filters to coefficients of the impulse response is obtained through these probabilities. However, the reverse problem: the conversion from coefficients of the impulse response to the weights of OS filters is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an optimization of integer linear programming subject to constraints directly obtained from the coefficients of the impulse response. Although the technique to be presented in not completely refined, it certainly appears to be promising. Some results will be shown.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method by combining particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status in all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food finding way of fish school or insects under the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively then existing options. PMID:23235453
Calderón, Félix; Barros, David; Bueno, José María; Coterón, José Miguel; Fernández, Esther; Gamo, Francisco Javier; Lavandera, José Luís; León, María Luisa; Macdonald, Simon J F; Mallo, Araceli; Manzano, Pilar; Porras, Esther; Fiandor, José María; Castro, Julia
2011-10-13
In 2010, GlaxoSmithKline published the structures of 13533 chemical starting points for antimalarial lead identification. By using an agglomerative structural clustering technique followed by computational filters such as antimalarial activity, physicochemical properties, and dissimilarity to known antimalarial structures, we have identified 47 starting points for lead optimization. Their structures are provided. We invite potential collaborators to work with us to discover new clinical candidates.
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
Rounds, S.A.; Tiffany, B.A.; Pankow, J.F.
1993-01-01
Aerosol particles from a highway tunnel were collected on a Teflon membrane filter (TMF) using standard techniques. Sorbed organic compounds were then desorbed for 28 days by passing clean nitrogen through the filter. Volatile n-alkanes and polycyclic aromatic hydrocarbons (PAHs) were liberated from the filter quickly; only a small fraction of the less volatile ra-alkanes and PAHs were desorbed. A nonlinear least-squares method was used to fit an intraparticle diffusion model to the experimental data. Two fitting parameters were used: the gas/particle partition coefficient (Kp and an effective intraparticle diffusion coefficient (Oeff). Optimized values of Kp are in agreement with previously reported values. The slope of a correlation between the fitted values of Deff and Kp agrees well with theory, but the absolute values of Deff are a factor of ???106 smaller than predicted for sorption-retarded, gaseous diffusion. Slow transport through an organic or solid phase within the particles or preferential flow through the bed of particulate matter on the filter might be the cause of these very small effective diffusion coefficients. ?? 1993 American Chemical Society.
Track Detection in Railway Sidings Based on MEMS Gyroscope Sensors
Broquetas, Antoni; Comerón, Adolf; Gelonch, Antoni; Fuertes, Josep M.; Castro, J. Antonio; Felip, Damià; López, Miguel A.; Pulido, José A.
2012-01-01
The paper presents a two-step technique for real-time track detection in single-track railway sidings using low-cost MEMS gyroscopes. The objective is to reliably know the path the train has taken in a switch, diverted or main road, immediately after the train head leaves the switch. The signal delivered by the gyroscope is first processed by an adaptive low-pass filter that rejects noise and converts the temporal turn rate data in degree/second units into spatial turn rate data in degree/meter. The conversion is based on the travelled distance taken from odometer data. The filter is implemented to achieve a speed-dependent cut-off frequency to maximize the signal-to-noise ratio. Although direct comparison of the filtered turn rate signal with a predetermined threshold is possible, the paper shows that better detection performance can be achieved by processing the turn rate signal with a filter matched to the rail switch curvature parameters. Implementation aspects of the track detector have been optimized for real-time operation. The detector has been tested with both simulated data and real data acquired in railway campaigns. PMID:23443376
A simple integrated ratiometric wavelength monitor based on multimode interference structure
NASA Astrophysics Data System (ADS)
Hatta, Agus Muhamad; Farrell, Gerald; Wang, Qian
2008-09-01
Wavelength measurement or monitoring can be implemented using a ratiometric power measurement technique. A ratiometric wavelength monitor normally consists of a Y-branch splitter with two arms: an edge filter arm with a well defined spectral response and a reference arm or alternatively, two edge filters arms with opposite slope spectral responses. In this paper, a simple configuration for an integrated ratiometric wavelength monitor based on a single multimode interference structure is proposed. By optimizing the length of the MMI and the two output port positions, opposite spectral responses for the two output ports can be achieved. The designed structure demonstrates a spectral response suitable for wavelength measurement with potentially a 10 pm resolution over a 100 nm wavelength range.
Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.
Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing
2009-08-21
Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is essentially susceptible to noise in the measured displacement data. For traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification is accompanied in the force reconstruction and small tractions cannot be recovered from the displacement field with low signal-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of the Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived in terms of the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and experimental data associated with the adhesion of single cardiac myocyte to elastic substrate. The results indicate that the proposed method can highly enhance SNR of the recovered forces to reveal tiny tractions in cell-substrate interaction.
GaN nanostructure design for optimal dislocation filtering
NASA Astrophysics Data System (ADS)
Liang, Zhiwen; Colby, Robert; Wildeson, Isaac H.; Ewoldt, David A.; Sands, Timothy D.; Stach, Eric A.; García, R. Edwin
2010-10-01
The effect of image forces in GaN pyramidal nanorod structures is investigated to develop dislocation-free light emitting diodes (LEDs). A model based on the eigenstrain method and nonlocal stress is developed to demonstrate that the pyramidal nanorod efficiently ejects dislocations out of the structure. Two possible regimes of filtering behavior are found: (1) cap-dominated and (2) base-dominated. The cap-dominated regime is shown to be the more effective filtering mechanism. Optimal ranges of fabrication parameters that favor a dislocation-free LED are predicted and corroborated by resorting to available experimental evidence. The filtering probability is summarized as a function of practical processing parameters: the nanorod radius and height. The results suggest an optimal nanorod geometry with a radius of ˜50b (26 nm) and a height of ˜125b (65 nm), in which b is the magnitude of the Burgers vector for the GaN system studied. A filtering probability of greater than 95% is predicted for the optimal geometry.
NASA Astrophysics Data System (ADS)
Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui
2018-01-01
Time-domain CD equalization using finite impulse response (FIR) filter is now a common approach for coherent optical fiber communication systems. The complex weights of FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key in weighted FIR filters is the selection and optimization of weighted functions. In order to present the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The optimization of square-root raised cosine FIR and Gaussian FIR are made in term of the bit rate error (BER) of QPSK and 16QAM coherent detection signal. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, symbol rate and the length of transmission fiber. With the optimized weighted FIRs, the BER of CD equalization signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. SRRC-FIR filter and GS-FIR filter, the principle of weighted FIR can also be extended to other symmetric functions super Gaussian function, hyperbolic secant function and etc.
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in the Transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain by Hankel transform pair. However, the curve shape of Hankel transform pair changes with a parameter, which usually is set to be 1 or 3 in the process of obtaining the digital filter coefficients of sine and cosine transforms. First, this study investigates the influence of the parameter on the digital filter algorithm of sine and cosine transforms based on the digital filter algorithm of Hankel transform and the relationship between the sine, cosine function and the ±1/2 order Bessel function of the first kind. The results show that the selection of the parameter highly influences the precision of digital filter algorithm. Second, upon the optimal selection of the parameter, it is found that an optimal sampling interval s also exists to achieve the best precision of digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients with different length, which may help to develop the digital filter algorithm of sine and cosine transforms, and promote its application.
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first was based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (looking like tiny Macbeth color chart); the specific color of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach was based on our previous multispectral imaging research. As a first step, the two slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In near future, the results of the two slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of researches and developing projects to improve image quality to establish Image Quality Standardization. This paper discusses one of most important aspects of image quality - color.
NASA Astrophysics Data System (ADS)
Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried
2013-02-01
In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size and to use lower exposure times. Because smaller sensors inherently lead to more noise and a worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular because of price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full resolution RGB images. Recently, there has been some interest in techniques that jointly perform the demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g. in the MSE sense) for the considered noise model, while avoiding artifacts introduced when using demosaicing and denoising sequentially. In this paper, we will continue the research line of the wavelet-based demosaicing techniques. These approaches are computationally simple and very suited for combination with denoising. Therefore, we will derive Bayesian Minimum Squared Error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we will use Gaussian Scale Mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing etc) of a 12 megapixel RAW image takes 3.5 sec on a recent mid-range GPU.
Optimized Orthovoltage Stereotactic Radiosurgery
NASA Astrophysics Data System (ADS)
Fagerstrom, Jessica M.
Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4pi, isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field size and energy dependent, and values were found for prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a Genetic Algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones were able to achieve both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner as well as a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.
VLBI real-time analysis by Kalman Filtering
NASA Astrophysics Data System (ADS)
Karbon, M.; Nilsson, T.; Soja, B.; Heinkelmann, R.; Raposo-Pulido, V.; Schuh, H.
2013-12-01
Geodetic Very Long Baseline Interferometry (VLBI) is one of the primary space geodetic techniques providing the full set of Earth Orientation Parameter (EOP) and is unique for observing long term Universal Time (UT1) and precession/nutation. Accurate and continuous EOP obtained in near real-time are essential for satellite based navigation and positioning and for enabling the precise tracking of interplanetary spacecrafts. To meet this necessity the International VLBI Service for Geodesy and Astrometry (IVS) increased its efforts to reduce the time span between the VLBI observations and the availability of the final results. Currently the timeliness is about two weeks, but the goal is to reduce it to less than one day with the future VGOS (VLBI2010 Global Observing System) network. The FWF project VLBI-ART contributes to this new generation VLBI system by considerably accelerating the VLBI analysis procedure through the implementation of an elaborate Kalman filter. This true real-time Kalman filter will be embedded in the Vienna VLBI Software (VieVS) as a completely automated tool with no need of human interaction. This filter also allows the prediction and combination of EOP from various space geodetic techniques by implementing stochastic models to statistically account for unpredictable changes in EOP. Additionally, atmospheric angular momenta calculated from numerical weather prediction models are introduced to support the short-term EOP prediction. To optimize the performance of the new software various investigations with real as well as simulated data are foreseen. The results are compared to the ones obtained by conventional VLBI parameter estimation methods (e.g. least squares method) and to corresponding parameter series from other techniques, such as from the Global Navigation Satellite Systems (GNSS).
Weighted hybrid technique for recommender system
NASA Astrophysics Data System (ADS)
Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.
2017-12-01
Recommender system becomes very popular and has important role in an information system or webpages nowadays. A recommender system tries to make a prediction of which item a user may like based on his activity on the system. There are some familiar techniques to build a recommender system, such as content-based filtering and collaborative filtering. Content-based filtering does not involve opinions from human to make the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give prediction to items which have never been rated by any user. In order to cover the drawbacks of each approach with the advantages of other approach, both approaches can be combined with an approach known as hybrid technique. Hybrid technique used in this work is weighted technique in which the prediction score is combination linear of scores gained by techniques that are combined.The purpose of this work is to show how an approach of weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system and to show the performance comparison when both approachare combined and when each approach works alone. There are three experiments done in this work, combining both techniques with different parameters. The result shows that the weighted hybrid technique that is done in this work does not really boost the performance up, but it helps to give prediction score for unrated movies that are impossible to be recommended by only using collaborative filtering.
Assessment of Snared-Loop Technique When Standard Retrieval of Inferior Vena Cava Filters Fails
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doody, Orla, E-mail: orla_doody@hotmail.com; Noe, Geertje; Given, Mark F.
Purpose To identify the success and complications related to a variant technique used to retrieve inferior vena cava filters when simple snare approach has failed. Methods A retrospective review of all Cook Guenther Tulip filters and Cook Celect filters retrieved between July 2006 and February 2008 was performed. During this period, 130 filter retrievals were attempted. In 33 cases, the standard retrieval technique failed. Retrieval was subsequently attempted with our modified retrieval technique. Results The retrieval was successful in 23 cases (mean dwell time, 171.84 days; range, 5-505 days) and unsuccessful in 10 cases (mean dwell time, 162.2 days; range,more » 94-360 days). Our filter retrievability rates increased from 74.6% with the standard retrieval method to 92.3% when the snared-loop technique was used. Unsuccessful retrieval was due to significant endothelialization (n = 9) and caval penetration by the filter (n = 1). A single complication occurred in the group, in a patient developing pulmonary emboli after attempted retrieval. Conclusion The technique we describe increased the retrievability of the two filters studied. Hook endothelialization is the main factor resulting in failed retrieval and continues to be a limitation with these filters.« less
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
Application of recursive approaches to differential orbit correction of near Earth asteroids
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria
2016-10-01
Comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filter, and recursive least squares filter. The first two techniques are well known and widely used (Montenbruck, O. & Gill, E., 2000). The most attention is paid to the algorithm and details of program realization of recursive least squares filter. The filter's algorithm was derived based on recursive least squares technique that are widely used in data processing applications (Simon, D, 2006). Usage recursive least squares filter, makes possible to process a new set of observational data, without reprocessing data, which has been processed before. Specific feature of such approach is that number of observation in data set may be variable. This feature makes recursive least squares filter more flexible approach compare to batch least squares (process complete set of observations in each iteration) and Kalman filtering (suppose updating state vector on each epoch with measurements).Advantages of proposed approach are demonstrated by processing of real astrometric observations of near Earth asteroids. The case of 2008 TC3 was studied. 2008 TC3 was discovered just before its impact with Earth. There are a many closely spaced observations of 2008 TC3 on the interval between discovering and impact, which creates favorable conditions for usage of recursive approaches. Each of approaches has very similar precision in case of 2008 TC3. At the same time, recursive least squares approaches have much higher performance. Thus, this approach more favorable for orbit fitting of a celestial body, which was detected shortly before the collision or close approach to the Earth.This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197.References:O. Montenbruck and E. Gill, "Satellite Orbits, Models, Methods and Applications," Springer-Verlag, 2000, pp. 1-369.D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches",1 edition. Hoboken, N.J.: Wiley-Interscience, 2006.
MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denison, K; Smith, S
Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The imaging topic this year is CT scanner dose optimization capabilities. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Dose Optimization Capabilities of GE Computed Tomography Scanners Presentation Time: 11:15 – 11:45 AM GE Healthcare is dedicated to the delivery of high quality clinical images through the development of technologies, whichmore » optimize the application of ionizing radiation. In computed tomography, dose management solutions fall into four categories: employs projection data and statistical modeling to decrease noise in the reconstructed image - creating an opportunity for mA reduction in the acquisition of diagnostic images. Veo represents true Model Based Iterative Reconstruction (MBiR). Using high-level algorithms in tandem with advanced computing power, Veo enables lower pixel noise standard deviation and improved spatial resolution within a single image. Advanced Adaptive Image Filters allow for maintenance of spatial resolution while reducing image noise. Examples of adaptive image space filters include Neuro 3-D filters and Cardiac Noise Reduction Filters. AutomA adjusts mA along the z-axis and is the CT equivalent of auto exposure control in conventional x-ray systems. Dynamic Z-axis Tracking offers an additional opportunity for dose reduction in helical acquisitions while SmartTrack Z-axis Tracking serves to ensure beam, collimator and detector alignment during tube rotation. SmartmA provides angular mA modulation. ECG Helical Modulation reduces mA during the systolic phase of the heart cycle. SmartBeam optimization uses bowtie beam-shaping hardware and software to filter off-axis x-rays - minimizing dose and reducing x-ray scatter. The DICOM Radiation Dose Structured Report (RDSR) generates a dose report at the conclusion of every examination. Dose Check preemptively notifies CT operators when scan parameters exceed user-defined dose thresholds. DoseWatch is an information technology application providing vendor-agnostic dose tracking and analysis for CT (and all other diagnostic x-ray modalities) SnapShot Pulse improves coronary CTA dose management. VolumeShuttle uses two acquisitions to increase coverage, decrease dose, and conserve on contrast administration. Color-Coding for Kids applies the Broselow-Luten Pediatric System to facilitate pediatric emergency care and reduce medical errors. FeatherLight achieves dose optimization through pediatric procedure-based protocols. Adventure Series scanners provide a child-friendly imaging environment promoting patient cooperation with resultant reduction in retakes and patient motion. Philips CT Dose Optimization Tools and Advanced Reconstruction Presentation Time: 11:45 ‘ 12:15 PM The first part of the talk will cover “Dose Reduction and Dose Optimization Technologies” present in Philips CT Scanners. The main Technologies to be presented include: DoseRight and tube current modulation (DoseRight, Z-DOM, 3D-DOM, DoseRight Cardiac) Special acquisition modes Beam filtration and beam shapers Eclipse collimator and ClearRay collimator NanoPanel detector DoseRight will cover automatic tube current selection that automatically adjusts the dose for the individual patient. The presentation will explore the modulation techniques currently employed in Philips CT scanners and will include the algorithmic concepts as well as illustrative examples. Modulation and current selection technologies to be covered include the Automatic Current Selection component of DoseRight, ZDOM longitudinal dose modulation, 3D-DOM (combination of longitudinal and rotational dose modulation), Cardiac Dose right (an ECG based dose modulation scheme), and the DoseRight Index (DRI) IQ index. The special acquisition modes covers acquisition techniques such as prospective gating that is designed to reduce exposure to the patient through the Cardiac Step and Shoot scan mode. This mode can substitute the much higher dose retrospective scan modes for certain types of cardiac imaging. The beam filtration and beam shaper portion will discuss the variety of filtration and beam shaping configurations available on Philips scanners. This topic includes the x-ray beam characteristics, tube filtration as well as dose compensator characteristics. The Eclipse collimator, ClearRay collimator and the NanoPanel detector portion will discuss additional technologies specific to wide coverage CT that address some of the unique challenges encountered and techniques employed to optimize image quality and optimize dose utilization. The Eclipse collimator reduces extraneous exposure by actively blocking the radiation tails at either end of helical scans that do not contribute to the image generation. The ClearRay collimator and the NanoPanel detector optimize the quality of the signal that reaches the detectors by addressing the increased scattered radiation present in wide coverage and the NanoPanel detector adds superior electronic noise characteristics valuable when imaging at a low dose level. The second part of the talk will present “Advanced Reconstruction Technologies” currently available on Philips CT Scanners. The talk will cover filtered back projection (FBP), iDose4 and Iterative Model Reconstruction (IMR). Each reconstruction method will include a discussion of the algorithm as well as similarities and differences between the algorithms. Examples illustrating the merits of each algorithm will be presented, and techniques and metrics to characterize the performance of each type of algorithm will be presented. The Filtered Back projection portion will discuss and provide a brief summary of relevant standard image reconstruction techniques in common use, and discuss the common tradeoffs when using the FBP algorithm. The iDose4 portion will present the algorithms used for iDose4 as well the different levels. The meaning of different levels of iDose4 available will be presented and quantified. Guidelines for selection iDose4 parameters based on the imaging need will be explained. The different image quality goals available with iDose4 and specifically how iDose4 enables noise reduction, spatial resolution improvement or both will be explained. The approaches to leveraging the benefits of iDose4 such as improved spatial resolution, decreased noise, and artifact prevention will be described and quantified; and measurements and metrics behind the improvements will be presented. The image quality benefits in specific imaging situations as well as how to best combine the technology with other dose reduction strategies to ensure the best image quality at a given dose level will be presented. Insight into the IMR algorithm as well as contrast to the iDose4 techniques and performance characteristics will be discussed. Metrics and techniques for characterizing this class of algorithm and IQ performance will be presented. The image quality benefits and the dose reduction capabilities of IMR will be explored. Illustrative examples of the noise reduction, spatial resolution improvement, and low contrast detectability improvements of the reconstruction method will be presented: clinical cases and phantom measurements demonstrating the benefits of IMR in the areas of low dose imaging, spatial resolution and low contrast resolution are discussed and the technical details behind the measurements will be presented compared to both iDose4 and traditional filtered back projection (FBP)« less
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a design can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
Novel and Advanced Techniques for Complex IVC Filter Retrieval.
Daye, Dania; Walker, T Gregory
2017-04-01
Inferior vena cava (IVC) filter placement is indicated for the treatment of venous thromboembolism (VTE) in patients with a contraindication to or a failure of anticoagulation. With the advent of retrievable IVC filters and their ease of placement, an increasing number of such filters are being inserted for prophylaxis in patients at high risk for VTE. Available data show that only a small number of these filters are retrieved within the recommended period, if at all, prompting the FDA to issue a statement on the need for their timely removal. With prolonged dwell times, advanced techniques may be needed for filter retrieval in up to 60% of the cases. In this article, we review standard and advanced IVC filter retrieval techniques including single-access, dual-access, and dissection techniques. Complicated filter retrievals carry a non-negligible risk for complications such as filter fragmentation and resultant embolization of filter components, venous pseudoaneurysms or stenoses, and breach of the integrity of the caval wall. Careful pre-retrieval assessment of IVC filter position, any significant degree of filter tilting or of hook, and/or strut epithelialization and caval wall penetration by filter components should be considered using dedicated cross-sectional imaging for procedural planning. In complex cases, the risk for retrieval complications should be carefully weighed against the risks of leaving the filter permanently indwelling. The decision to remove an embedded IVC filter using advanced techniques should be individualized to each patient and made with caution, based on the patient's age and existing comorbidities.
Günther Tulip inferior vena cava filter retrieval using a bidirectional loop-snare technique.
Ross, Jordan; Allison, Stephen; Vaidya, Sandeep; Monroe, Eric
2016-01-01
Many advanced techniques have been reported in the literature for difficult Günther Tulip filter removal. This report describes a bidirectional loop-snare technique in the setting of a fibrin scar formation around the filter leg anchors. The bidirectional loop-snare technique allows for maximal axial tension and alignment for stripping fibrin scar from the filter legs, a commonly encountered complication of prolonged dwell times.
Design of a modulated orthovoltage stereotactic radiosurgery system.
Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S
2017-07-01
To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques for fluid machinery design are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables and methods for predicting the model performance. Examples of applications selected from rocket propulsion components including a supersonic turbine and an injector element and a turbulent flow diffuser are used to illustrate the usefulness of the global optimization method.
Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.
Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin
2018-02-01
Periodic transient impulses are key indicators of rolling element bearing defects. Efficient acquisition of impact impulses concerned with the defects is of much concern to the precise detection of bearing defects. However, transient features of rolling element bearing are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and reserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of multiscale combination morphological filter-hat transform (MCMFH). Subsequently, MCMFH containing the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, fault types of bearing are confirmed by extracting the defective frequency from envelope spectrum of the impulse components. The validity of the proposed method is verified through the simulated analysis and bearing vibration data derived from the laboratory bench. Results indicate that the proposed method has a good capability to recognize localized faults appeared on rolling element bearing from vibration signal. The study supplies a novel technique for the detection of faulty bearing. Copyright © 2018. Published by Elsevier Ltd.
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.
2018-01-01
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114
Optimizing of a high-order digital filter using PSO algorithm
NASA Astrophysics Data System (ADS)
Xu, Fuchun
2018-04-01
A self-adaptive high-order digital filter, which offers opportunity to simplify the process of tuning parameters and further improve the noise performance, is presented in this paper. The parameters of traditional digital filter are mainly tuned by complex calculation, whereas this paper presents a 5th order digital filter to obtain outstanding performance and the parameters of the proposed filter are optimized by swarm intelligent algorithm. Simulation results with respect to the proposed 5th order digital filter, SNR>122dB and the noise floor under -170dB are obtained in frequency range of [5-150Hz]. In further simulation, the robustness of the proposed 5th order digital is analyzed.
Reducing multi-sensor data to a single time course that reveals experimental effects
2013-01-01
Background Multi-sensor technologies such as EEG, MEG, and ECoG result in high-dimensional data sets. Given the high temporal resolution of such techniques, scientific questions very often focus on the time-course of an experimental effect. In many studies, researchers focus on a single sensor or the average over a subset of sensors covering a “region of interest” (ROI). However, single-sensor or ROI analyses ignore the fact that the spatial focus of activity is constantly changing, and fail to make full use of the information distributed over the sensor array. Methods We describe a technique that exploits the optimality and simplicity of matched spatial filters in order to reduce experimental effects in multivariate time series data to a single time course. Each (multi-sensor) time sample of each trial is replaced with its projection onto a spatial filter that is matched to an observed experimental effect, estimated from the remaining trials (Effect-Matched Spatial filtering, or EMS filtering). The resulting set of time courses (one per trial) can be used to reveal the temporal evolution of an experimental effect, which distinguishes this approach from techniques that reveal the temporal evolution of an anatomical source or region of interest. Results We illustrate the technique with data from a dual-task experiment and use it to track the temporal evolution of brain activity during the psychological refractory period. We demonstrate its effectiveness in separating the means of two experimental conditions, and in significantly improving the signal-to-noise ratio at the single-trial level. It is fast to compute and results in readily-interpretable time courses and topographies. The technique can be applied to any data-analysis question that can be posed independently at each sensor, and we provide one example, using linear regression, that highlights the versatility of the technique. Conclusion The approach described here combines established techniques in a way that strikes a balance between power, simplicity, speed of processing, and interpretability. We have used it to provide a direct view of parallel and serial processes in the human brain that previously could only be measured indirectly. An implementation of the technique in MatLab is freely available via the internet. PMID:24125590
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliescu, Bogdan; Haskal, Ziv J., E-mail: ziv2@mac.com
Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters than can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful,more » with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or cava during IVC retrieval. This review describes the complex techniques for filter retrieval, including use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches as well as illustrates their specific application.« less
Targeted Silver Nanoparticles for Dual-Energy Breast X-Ray Imaging
2013-03-01
imaging parameters. In addition, Ag performs better than I when imaging at the optimal conditions for I. For example, using a rhodium filter, the...Laboratory. XCOM: Photon Cross Sections Database. Retrieved December 10, 2011 2. Boone J.M. , Fewell, T.R., Jennings, R.J. Molybdenum, rhodium , and tungsten...and a 27 kVp low-energy beam with rhodium filtration, at a dose distribution of 50:50. This low-energy technique is a classic example of an
Offshore seismicity in the southeastern sea of Korea
NASA Astrophysics Data System (ADS)
Park, H.; Kang, T. S.
2017-12-01
The offshore southeastern sea area of Korea appear to have a slightly higher seismicity compared to the rest of the Korean Peninsula. According to the earthquake report by Korean Meteorological Administration (KMA), earthquakes over ML 3 has persistently occurred over once a year during the last ten years. In this study, we used 33 events in KMA catalog, which occurred in the offshore Ulsan (35.0°N-35.85°N, 129.45°E-130.75°E) from April 2007 to June 2017, as mother earthquakes. The waveform matching filter technique was used to precisely detect microearthquakes (child earthquakes) that occurred after mother earthquakes. It is the optimal linear filter for maximizing the signal-to-noise ratio in the presence of additive stochastic noise. Initially, we used the continuous seismic waveforms available from KMA and the Korea Institute of Geosciences and Mineral Resources. We added the data of F-net to increase the reliability of the results. The detected events were located by using P- and S-wave arrival times. The hypocentral depths were constrained by an iterative optimal solution technique which is proven to be effective under the poorly known structure. Focal mechanism solutions were obtained from the analysis of P-wave first-motion polarities. Seismicity patterns of microearthquakes and their focal mechanism results were analyzed to understand their seismogenic characteristics and their relationship to subsea seismotectonic structures.
NASA Technical Reports Server (NTRS)
Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle;
2016-01-01
The X-ray Integral Field Unit (X-IFU) microcalorimeter, on-board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, as to recover the energy of each individual events. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Beside deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performances, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU and the best compromise currently appears to be the so-called resistance space analysis which also features very promising high count rate capabilities.
Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.
2011-01-01
An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445
Posham, Raghuram; Fischman, Aaron M; Nowakowski, Francis S; Bishay, Vivian L; Biederman, Derek M; Virk, Jaskirat S; Kim, Edward; Patel, Rahul S; Lookstein, Robert A
2017-06-01
This report describes the technical feasibility of using the filter eversion technique after unsuccessful retrieval attempts of Option and Option ELITE (Argon Medical Devices, Inc, Athens, Texas) inferior vena cava (IVC) filters. This technique entails the use of endoscopic forceps to evert this specific brand of IVC filter into a sheath inserted into the common femoral vein, in the opposite direction in which the filter is designed to be removed. Filter eversion was attempted in 25 cases with a median dwell time of 134 days (range, 44-2,124 d). Retrieval success was 100% (25/25 cases), with an overall complication rate of 8%. This technique warrants further study. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung
2005-05-01
An optimal design technique of loudspeaker arrays for cross-talk cancellation with application in three-dimensional audio is presented. An array focusing scheme is presented on the basis of the inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that large number of loudspeakers, closely spaced configuration, and optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres
2017-08-01
The visual inspection is a widely used method for evaluating the surface electromyographic signal (sEMG) during deglutition, a process highly dependent of the examiners expertise. It is desirable to have a less subjective and automated technique to improve the onset detection in swallowing related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG measured in infrahyoid muscles with high baseline noise of ten healthy adults during water swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieve the most accurate onset detection: discrete wavelet decomposition based method and fixed steps variations of low and high cutoff frequencies of a digital bandpass filter. Teager-Kaiser Energy operator, root mean square and simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth vs. the literature recommended parameters for sEMG acquisition. Both level 3 decomposition with mother wavelet db4 and bandpass filter with cutoff frequencies between 130 and 180Hz were optimal for onset detection in infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, that is similar to previous findings but in larger and more superficial muscles in limbs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kinnarinen, Teemu; Lubieniecki, Boguslaw; Holliday, Lloyd; Helsto, Jaakko-Juhani; Häkkinen, Antti
2015-03-01
Dry cake disposal is the preferred technique for the disposal of bauxite residue, when considering environmental issues together with possible future utilisation of the solids. In order to perform dry cake disposal in an economical way, the deliquoring of the residue must be carried out efficiently, and it is also important to wash the obtained solids well to minimise the amount of soluble soda within the solids. The study presented in this article aims at detecting the most important variables influencing the deliquoring and washing of bauxite residue, performed with a horizontal membrane filter press and by determining the optimal washing conditions. The results obtained from pilot-scale experiments are evaluated by considering the properties of the solids, for instance, the residual alkali and aluminium content, as well as the consumption of wash liquid. Two different cake washing techniques, namely classic washing and channel washing, are also used and their performances compared. The results show that cake washing can be performed successfully in a horizontal membrane filter press, and significant improvements in the recovery of alkali and aluminium can be achieved compared with pressure filtration carried out without washing, or especially compared with the more traditionally used vacuum filtration. © The Author(s) 2015.
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
imon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
Studies of EGRET sources with a novel image restoration technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi
2007-07-12
We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original since it utilizes the PSF (point spread function) that is calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration techniquemore » for possible identification of extended gamma-ray sources.« less
Community Detection for Correlation Matrices
NASA Astrophysics Data System (ADS)
MacMahon, Mel; Garlaschelli, Diego
2015-04-01
A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2017-09-01
A constrained optimization approach with faster convergence is proposed to recover the complex object field from a near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back propagated to the image plane using the Fresnel back-propagation method. The results reported in this approach provide high-resolution images compared with the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach due to the subtraction of two DC terms in the cost function. We report this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object to retrieve the high-resolution image without DC and twin image interference. We also demonstrate the high potential of this technique in transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image. We also demonstrate this technique by imaging yeast cells.
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5,more » 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.« less
A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition
NASA Astrophysics Data System (ADS)
Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen
2016-06-01
Electrocardiogram (ECG) signals might be affected by various artifacts and noises that have biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for removal of baseline noise from ECG is presented. When compared to other EMD-based methods, the novelty of this research is to reach the optimized number of decomposed levels for ECG BW de-noising using mean power frequency (MPF), while the reduction of processing time is considered. To evaluate the performance of the proposed method, a fifth-order Butterworth high pass filtering (BHPF) with cut-off frequency at 0.5Hz and wavelet approach are applied. Three performance indices, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), between pure and filtered signals have been utilized for qualification of presented techniques. Results suggest that the EMD-based method outperforms the other filtering method.
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of particle swarm optimization (PSO) algorithm will be utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm in which another adjusting factor is more introduced in the velocity updating formula of the algorithm in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are firstly collected to be a parameter vector and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula will force all particles into moving toward the optimal or near optimal solution by minimizing some defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filter. PMID:26366168
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
Optimal and fast E/B separation with a dual messenger field
NASA Astrophysics Data System (ADS)
Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-05-01
We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.
Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.
Wang, Yubo; Veluvolu, Kalyana C
2017-01-01
The most BCI systems that rely on EEG signals employ Fourier based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two channel settings, its application in multiple-channel EEG is not straightforward and challenging. As more channels are available, a spatial filter will be required to eliminate the noise and preserve the required useful information. Moreover, multiple-channel EEG also adds the high dimensionality to the frequency feature space. Feature selection will be required to stabilize the performance of the classifier. In this paper, we develop a new method based on Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier based designs are tested in this paper. Our results show that the combination of Fourier based method with covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
Schneider, Richard R.; Hauer, Grant; Farr, Dan; Adamowicz, W. L.; Boutin, Stan
2011-01-01
Recent studies have shown that conservation gains can be achieved when the spatial distributions of biological benefits and economic costs are incorporated in the conservation planning process. Using Alberta, Canada, as a case study we apply these techniques in the context of coarse-filter reserve design. Because targets for ecosystem representation and other coarse-filter design elements are difficult to define objectively we use a trade-off analysis to systematically explore the relationship between conservation targets and economic opportunity costs. We use the Marxan conservation planning software to generate reserve designs at each level of conservation target to ensure that our quantification of conservation and economic outcomes represents the optimal allocation of resources in each case. Opportunity cost is most affected by the ecological representation target and this relationship is nonlinear. Although petroleum resources are present throughout most of Alberta, and include highly valuable oil sands deposits, our analysis indicates that over 30% of public lands could be protected while maintaining access to more than 97% of the value of the region's resources. Our case study demonstrates that optimal resource allocation can be usefully employed to support strategic decision making in the context of land-use planning, even when conservation targets are not well defined. PMID:21858046
Assessment of intermittently loaded woodchip and sand filters to treat dairy soiled water.
Murnane, J G; Brennan, R B; Healy, M G; Fenton, O
2016-10-15
Land application of dairy soiled water (DSW) is expensive relative to its nutrient replacement value. The use of aerobic filters is an effective alternative method of treatment and potentially allows the final effluent to be reused on the farm. Knowledge gaps exist concerning the optimal design and operation of filters for the treatment of DSW. To address this, 18 laboratory-scale filters, with depths of either 0.6 m or 1 m, were intermittently loaded with DSW over periods of up to 220 days to evaluate the impacts of depth (0.6 m versus 1 m), organic loading rates (OLRs) (50 versus 155 g COD m(-2) d(-1)), and media type (woodchip versus sand) on organic, nutrient and suspended solids (SS) removals. The study found that media depth was important in contaminant removal in woodchip filters. Reductions of 78% chemical oxygen demand (COD), 95% SS, 85% total nitrogen (TN), 82% ammonium-nitrogen (NH4N), 50% total phosphorus (TP), and 54% dissolved reactive phosphorus (DRP) were measured in 1 m deep woodchip filters, which was greater than the reductions in 0.6 m deep woodchip filters. Woodchip filters also performed optimally when loaded at a high OLR (155 g COD m(-2) d(-1)), although the removal mechanism was primarily physical (i.e. straining) as opposed to biological. When operated at the same OLR and when of the same depth, the sand filters had better COD removals (96%) than woodchip (74%), but there was no significant difference between them in the removal of SS and NH4N. However, the likelihood of clogging makes sand filters less desirable than woodchip filters. Using the optimal designs of both configurations, the filter area required per cow for a woodchip filter is more than four times less than for a sand filter. Therefore, this study found that woodchip filters are more economically and environmentally effective in the treatment of DSW than sand filters, and optimal performance may be achieved using woodchip filters with a depth of at least 1 m, operated at an OLR of 155 g COD m(-2) d(-1). Copyright © 2016 Elsevier Ltd. All rights reserved.
Performance evaluation of an asynchronous multisensor track fusion filter
NASA Astrophysics Data System (ADS)
Alouani, Ali T.; Gray, John E.; McCabe, D. H.
2003-08-01
Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communications delay between sensors platform and fusion center. It also deals with out of sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the new asynchronous Multisensor track fusion filter (AMSTFF) performance is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.
Digital controllers for VTOL aircraft
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.; Berry, P. W.
1976-01-01
Using linear-optimal estimation and control techniques, digital-adaptive control laws have been designed for a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. Two distinct discrete-time control laws are designed to interface with velocity-command and attitude-command guidance logic, and each incorporates proportional-integral compensation for non-zero-set-point regulation, as well as reduced-order Kalman filters for sensor blending and noise rejection. Adaptation to flight condition is achieved with a novel gain-scheduling method based on correlation and regression analysis. The linear-optimal design approach is found to be a valuable tool in the development of practical multivariable control laws for vehicles which evidence significant coupling and insufficient natural stability.
CCD filter and transform techniques for interference excision
NASA Technical Reports Server (NTRS)
Borsuk, G. M.; Dewitt, R. N.
1976-01-01
The theoretical and some experimental results of a study aimed at applying CCD filter and transform techniques to the problem of interference excision within communications channels were presented. Adaptive noise (interference) suppression was achieved by the modification of received signals such that they were orthogonal to the recently measured noise field. CCD techniques were examined to develop real-time noise excision processing. They were recursive filters, circulating filter banks, transversal filter banks, an optical implementation of the chirp Z transform, and a CCD analog FFT.
NASA Astrophysics Data System (ADS)
Singh, Jaskaran; Darpe, A. K.; Singh, S. P.
2018-02-01
Local damage in rolling element bearings usually generates periodic impulses in vibration signals. The severity, repetition frequency and the fault excited resonance zone by these impulses are the key indicators for diagnosing bearing faults. In this paper, a methodology based on over complete rational dilation wavelet transform (ORDWT) is proposed, as it enjoys a good shift invariance. ORDWT offers flexibility in partitioning the frequency spectrum to generate a number of subbands (filters) with diverse bandwidths. The selection of the optimal filter that perfectly overlaps with the bearing fault excited resonance zone is based on the maximization of a proposed impulse detection measure "Temporal energy operated auto correlated kurtosis". The proposed indicator is robust and consistent in evaluating the impulsiveness of fault signals in presence of interfering vibration such as heavy background noise or sporadic shocks unrelated to the fault or normal operation. The structure of the proposed indicator enables it to be sensitive to fault severity. For enhanced fault classification, an autocorrelation of the energy time series of the signal filtered through the optimal subband is proposed. The application of the proposed methodology is validated on simulated and experimental data. The study shows that the performance of the proposed technique is more robust and consistent in comparison to the original fast kurtogram and wavelet kurtogram.
NASA Astrophysics Data System (ADS)
Jia, Chaoqing; Hu, Jun; Chen, Dongyan; Liu, Yurong; Alsaadi, Fuad E.
2018-07-01
In this paper, we discuss the event-triggered resilient filtering problem for a class of time-varying systems subject to stochastic uncertainties and successive packet dropouts. The event-triggered mechanism is employed with hope to reduce the communication burden and save network resources. The stochastic uncertainties are considered to describe the modelling errors and the phenomenon of successive packet dropouts is characterized by a random variable obeying the Bernoulli distribution. The aim of the paper is to provide a resilient event-based filtering approach for addressed time-varying systems such that, for all stochastic uncertainties, successive packet dropouts and filter gain perturbation, an optimized upper bound of the filtering error covariance is obtained by designing the filter gain. Finally, simulations are provided to demonstrate the effectiveness of the proposed robust optimal filtering strategy.
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δ f constant (constant Q). The SNR is calculated on the pre-event noise and signal window. The band pass signals with high SNR are used to indicate the cutoff filter limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
Comparative Study of Speckle Filtering Methods in PolSAR Radar Images
NASA Astrophysics Data System (ADS)
Boutarfa, S.; Bouchemakh, L.; Smara, Y.
2015-04-01
Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature, corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence, the need to preprocess the images by adapted filtering methods before analysis.In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. These developed filters are: refined Lee filter based on the estimation of the minimum mean square error MMSE, improved Sigma filter with detection of strong scatterers based on the calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures that are necessary for image interpretation, filtering by stationary wavelet transform SWT using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients), and Turbo filter which is a combination between two complementary filters the refined Lee filter and the wavelet transform SWT. One filter can boost up the results of the other.The originality of our work is based on the application of these methods to several types of images: amplitude, intensity and complex, from a satellite or an airborne radar, and on the optimization of wavelet filtering by adding a parameter in the calculation of the threshold. This parameter will control the filtering effect and get a good compromise between smoothing homogeneous areas and preserving linear structures.The methods are applied to the fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired on Algiers, Algeria, in C-band and to the three polarimetric E-SAR images (HH, HV, VV) acquired on Oberpfaffenhofen area located in Munich, Germany, in P-band.To evaluate the performance of each filter, we used the following criteria: smoothing homogeneous areas, preserving edges and polarimetric information.Experimental results are included to illustrate the different implemented methods.
Development of a fast and feasible spectrum modeling technique for flattening filter free beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Woong; Bush, Karl; Mok, Ed
Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for the ranges of field sizes to consider the variationsmore » of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization and first order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra were decreased from 1.31 MeV for a 3 Multiplication-Sign 3 cm{sup 2} field to 1.21 MeV for a 40 Multiplication-Sign 40 cm{sup 2} field, and from 2.33 MeV at 3 Multiplication-Sign 3 cm{sup 2} to 2.18 MeV at 40 Multiplication-Sign 40 cm{sup 2} for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences on the optimized PDDs were observed to be 0.41% (3 Multiplication-Sign 3 cm{sup 2}) down to 0.21% (40 Multiplication-Sign 40 cm{sup 2}) for the 6 MV FFF beam, and 0.35% (3 Multiplication-Sign 3 cm{sup 2}) down to 0.29% (40 Multiplication-Sign 40 cm{sup 2}) for the 10 MV FFF beam. The first order derivatives from the functional form were found to improve the speed of computational time up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreements with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.« less
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter s estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy as the Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing
2014-12-01
Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
Design technique for all-dielectric non-polarizing beam splitter plate
NASA Astrophysics Data System (ADS)
Rizea, A.
2012-03-01
There are many situations when, for the proper working, an opto-electronic device requiring optical components does not change the polarization state of light after a reflection, splitting or filtering. In this paper, a design for a non-polarizing beam splitter plate is proposed. Based on certain optical properties of homogeneous dielectric materials we will establish a reliable thin film package formula, excellent for the start of optimization to obtain a 20-nm bandwidth non-polarizing beam splitter.
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research focused on tracking and positioning, precise guidance and return to dock and other fields. The robotic fish of AUV has become a hot application in intelligent education, civil and military etc. In nonlinear tracking analysis of robotic fish, which was found that the interval Kalman filter algorithm contains all possible filter results, but the range is wide, relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes a ptimization algorithm of suboptimal interval Kalman filter. Suboptimal interval Kalman filter scheme used the interval inverse matrix with its worst inverse instead, is more approximate nonlinear state equation and measurement equation than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, improves the speed and precision of tracking system. Monte-Carlo simulation results show that the optimal trajectory of sub optimal interval Kalman filter algorithm is better than that of the interval Kalman filter method and the standard method of the filter.
Walther, Andreas; Rippe, Lars; Wang, Lihong V; Andersson-Engels, Stefan; Kröll, Stefan
2017-10-01
Despite the important medical implications, it is currently an open task to find optical non-invasive techniques that can image deep organs in humans. Addressing this, photo-acoustic tomography (PAT) has received a great deal of attention in the past decade, owing to favorable properties like high contrast and high spatial resolution. However, even with optimal components PAT cannot penetrate beyond a few centimeters, which still presents an important limitation of the technique. Here, we calculate the absorption contrast levels for PAT and for ultrasound optical tomography (UOT) and compare them to their relevant noise sources as a function of imaging depth. The results indicate that a new development in optical filters, based on rare-earth-ion crystals, can push the UOT technique significantly ahead of PAT. Such filters allow the contrast-to-noise ratio for UOT to be up to three orders of magnitude better than for PAT at depths of a few cm into the tissue. It also translates into a significant increase of the image depth of UOT compared to PAT, enabling deep organs to be imaged in humans in real time. Furthermore, such spectral holeburning filters are not sensitive to speckle decorrelation from the tissue and can operate at nearly any angle of incident light, allowing good light collection. We theoretically demonstrate the improved performance in the medically important case of non-invasive optical imaging of the oxygenation level of the frontal part of the human myocardial tissue. Our results indicate that further studies on UOT are of interest and that the technique may have large impact on future directions of biomedical optics.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequentially smoothing of particle weights for state and parameter resampling within a time window as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
NASA Astrophysics Data System (ADS)
Tseng, Chien-Hsun
2015-02-01
The technique of multidimensional wave digital filtering (MDWDF) that builds on traveling wave formulation of lumped electrical elements, is successfully implemented on the study of dynamic responses of symmetrically laminated composite plate based on the first order shear deformation theory. The philosophy applied for the first time in this laminate mechanics relies on integration of certain principles involving modeling and simulation, circuit theory, and MD digital signal processing to provide a great variety of outstanding features. Especially benefited by the conservation of passivity gives rise to a nonlinear programming problem (NLP) for the issue of numerical stability of a MD discrete system. Adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly achieving solution spaces of NLP models, numerical stability of the MDWDF network is well received at all time by the satisfaction of the Courant-Friedrichs-Levy stability criterion with the least restriction. In particular, optimum of the NLP has led to the optimality of the network in terms of effectively and accurately predicting the desired fundamental frequency, and thus to give an insight into the robustness of the network by looking at the distribution of system energies. To further explore the application of the optimum network, more numerical examples are engaged in efforts to achieve a qualitative understanding of the behavior of the laminar system. These are carried out by investigating various effects based on different stacking sequences, stiffness and span-to-thickness ratios, mode shapes and boundary conditions. Results are scrupulously validated by cross referencing with early published works, which show that the present method is in excellent agreement with other numerical and analytical methods.
NASA Technical Reports Server (NTRS)
Mcfarland, M. J.
1975-01-01
Horizontal wind components, potential temperature, and mixing ratio fields associated with a severe storm environment in the south central U.S. were analyzed from synoptic upper air observations with a nonhomogeneous, anisotropic weighting function. Each data field was filtered with variational optimization analysis techniques. Variational optimization analysis was also performed on the vertical motion field and was used to produce advective forecasts of the potential temperature and mixing ratio fields. Results show that the dry intrusion is characterized by warm air, the advection of which produces a well-defined upward motion pattern. A corresponding downward motion pattern comprising a deep vertical circulation in the warm air sector of the low pressure system was detected. The axes alignment of maximum dry and warm advection with the axis of the tornado-producing squall line also resulted.
Integrated approach for automatic target recognition using a network of collaborative sensors.
Mahalanobis, Abhijit; Van Nevel, Alan
2006-10-01
We introduce what is believed to be a novel concept by which several sensors with automatic target recognition (ATR) capability collaborate to recognize objects. Such an approach would be suitable for netted systems in which the sensors and platforms can coordinate to optimize end-to-end performance. We use correlation filtering techniques to facilitate the development of the concept, although other ATR algorithms may be easily substituted. Essentially, a self-configuring geometry of netted platforms is proposed that positions the sensors optimally with respect to each other, and takes into account the interactions among the sensor, the recognition algorithms, and the classes of the objects to be recognized. We show how such a paradigm optimizes overall performance, and illustrate the collaborative ATR scheme for recognizing targets in synthetic aperture radar imagery by using viewing position as a sensor parameter.
Multiscale morphological filtering for analysis of noisy and complex images
NASA Astrophysics Data System (ADS)
Kher, A.; Mitra, S.
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
Multiscale Morphological Filtering for Analysis of Noisy and Complex Images
NASA Technical Reports Server (NTRS)
Kher, A.; Mitra, S.
1993-01-01
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
NASA Astrophysics Data System (ADS)
Bílek, Petr; Hrůza, Jakub
2018-06-01
This paper deals with an optimization of the cleaning process on a liquid flat-sheet filter accompanied by visualization of the inlet side of a filter. The cleaning process has a crucial impact on the hydrodynamic properties of flat-sheet filters. Cleaning methods avoid depositing of particles on the filter surface and forming a filtration cake. Visualization significantly helps to optimize the cleaning methods, because it brings new overall view on the filtration process in time. The optical method, described in the article, enables to see flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process. Visualization is a strong tool for investigation of the processes on filters in details and it is also possible to determine concentration of particles after an image analysis. The impact of air flow rate, inverse pressure drop and duration on the cleaning mechanism is investigated in the article. Images of the cleaning process are compared to the hydrodynamic data. The tests are carried out on a pilot filtration setup for waste water treatment.
Powerline noise elimination in biomedical signals via blind source separation and wavelet analysis.
Akwei-Sekyere, Samuel
2015-01-01
The distortion of biomedical signals by powerline noise from recording biomedical devices has the potential to reduce the quality and convolute the interpretations of the data. Usually, powerline noise in biomedical recordings are extinguished via band-stop filters. However, due to the instability of biomedical signals, the distribution of signals filtered out may not be centered at 50/60 Hz. As a result, self-correction methods are needed to optimize the performance of these filters. Since powerline noise is additive in nature, it is intuitive to model powerline noise in a raw recording and subtract it from the raw data in order to obtain a relatively clean signal. This paper proposes a method that utilizes this approach by decomposing the recorded signal and extracting powerline noise via blind source separation and wavelet analysis. The performance of this algorithm was compared with that of a 4th order band-stop Butterworth filter, empirical mode decomposition, independent component analysis and, a combination of empirical mode decomposition with independent component analysis. The proposed method was able to expel sinusoidal signals within powerline noise frequency range with higher fidelity in comparison with the mentioned techniques, especially at low signal-to-noise ratio.
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
Application of speed-enhanced spatial domain correlation filters for real-time security monitoring
NASA Astrophysics Data System (ADS)
Gardezi, Akber; Bangalore, Nagachetan; Al-Kandri, Ahmed; Birch, Philip; Young, Rupert; Chatwin, Chris
2011-11-01
A speed enhanced space variant correlation filer which has been designed to be invariant to change in orientation and scale of the target object but also to be spatially variant, i.e. the filter function becoming dependant on local clutter conditions within the image. The speed enhancement of the filter is due to the use of optimization techniques employing low-pass filtering to restrict kernel movement to be within regions of interest. The detection and subsequent identification capability of the two-stage process has been evaluated in highly cluttered backgrounds using both visible and thermal imagery acquired from civil and defense domains along with associated training data sets for target detection and classification. In this paper a series of tests have been conducted in multiple scenarios relating to situations that pose a security threat. Performance matrices comprised of peak-to-correlation energy (PCE) and peak-to-side lobe ratio (PSR) measurements of the correlation output have been calculated to allow the definition of a recognition criterion. The hardware implementation of the system has been discussed in terms of Field Programmable Gate Array (FPGA) chipsets with implementation bottle necks and their solution being considered.
Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V
2015-01-01
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm
NASA Astrophysics Data System (ADS)
Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui
2017-05-01
The economic costs together with filter efficiency are taken as targets to optimize the parameter of passive filter. Furthermore, the method of combining pseudo-parallel genetic algorithm with adaptive genetic algorithm is adopted in this paper. In the early stages pseudo-parallel genetic algorithm is introduced to increase the population diversity, and adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of pseudo-parallel genetic algorithm is improved to change with population diversity adaptively. Simulation results show that the filter designed by the proposed method has better filtering effect with lower economic cost, and can be used in engineering.
Simplification of the Kalman filter for meteorological data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1991-01-01
The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Welch, Greg
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
Learning-based 3D surface optimization from medical image reconstruction
NASA Astrophysics Data System (ADS)
Wei, Mingqiang; Wang, Jun; Guo, Xianglin; Wu, Huisi; Xie, Haoran; Wang, Fu Lee; Qin, Jing
2018-04-01
Mesh optimization has been studied from the graphical point of view: It often focuses on 3D surfaces obtained by optical and laser scanners. This is despite the fact that isosurfaced meshes of medical image reconstruction suffer from both staircases and noise: Isotropic filters lead to shape distortion, while anisotropic ones maintain pseudo-features. We present a data-driven method for automatically removing these medical artifacts while not introducing additional ones. We consider mesh optimization as a combination of vertex filtering and facet filtering in two stages: Offline training and runtime optimization. In specific, we first detect staircases based on the scanning direction of CT/MRI scanners, and design a staircase-sensitive Laplacian filter (vertex-based) to remove them; and then design a unilateral filtered facet normal descriptor (uFND) for measuring the geometry features around each facet of a given mesh, and learn the regression functions from a set of medical meshes and their high-resolution reference counterparts for mapping the uFNDs to the facet normals of the reference meshes (facet-based). At runtime, we first perform staircase-sensitive Laplacian filter on an input MC (Marching Cubes) mesh, and then filter the mesh facet normal field using the learned regression functions, and finally deform it to match the new normal field for obtaining a compact approximation of the high-resolution reference model. Tests show that our algorithm achieves higher quality results than previous approaches regarding surface smoothness and surface accuracy.
A Structural and Content-Based Analysis for Web Filtering.
ERIC Educational Resources Information Center
Lee, P. Y.; Hui, S. C.; Fong, A. C. M.
2003-01-01
Presents an analysis of the distinguishing features of pornographic Web pages so that effective filtering techniques can be developed. Surveys the existing techniques for Web content filtering and describes the implementation of a Web content filtering system that uses an artificial neural network. (Author/LRW)
Spectral analysis and filtering techniques in digital spatial data processing
Pan, Jeng-Jong
1989-01-01
A filter toolbox has been developed at the EROS Data Center, US Geological Survey, for retrieving or removing specified frequency information from two-dimensional digital spatial data. This filter toolbox provides capabilities to compute the power spectrum of a given data and to design various filters in the frequency domain. Three types of filters are available in the toolbox: point filter, line filter, and area filter. Both the point and line filters employ Gaussian-type notch filters, and the area filter includes the capabilities to perform high-pass, band-pass, low-pass, and wedge filtering techniques. These filters are applied for analyzing satellite multispectral scanner data, airborne visible and infrared imaging spectrometer (AVIRIS) data, gravity data, and the digital elevation models (DEM) data. -from Author
Edge Preserved Speckle Noise Reduction Using Integrated Fuzzy Filters
Dewal, M. L.; Rohit, Manoj Kumar
2014-01-01
Echocardiographic images are inherent with speckle noise which makes visual reading and analysis quite difficult. The multiplicative speckle noise masks finer details, necessary for diagnosis of abnormalities. A novel speckle reduction technique based on integration of geometric, wiener, and fuzzy filters is proposed and analyzed in this paper. The denoising applications of fuzzy filters are studied and analyzed along with 26 denoising techniques. It is observed that geometric filter retains noise and, to address this issue, wiener filter is embedded into the geometric filter during iteration process. The performance of geometric-wiener filter is further enhanced using fuzzy filters and the proposed despeckling techniques are called integrated fuzzy filters. Fuzzy filters based on moving average and median value are employed in the integrated fuzzy filters. The performances of integrated fuzzy filters are tested on echocardiographic images and synthetic images in terms of image quality metrics. It is observed that the performance parameters are highest in case of integrated fuzzy filters in comparison to fuzzy and geometric-fuzzy filters. The clinical validation reveals that the output images obtained using geometric-wiener, integrated fuzzy, nonlocal means, and details preserving anisotropic diffusion filters are acceptable. The necessary finer details are retained in the denoised echocardiographic images. PMID:27437499
Performance Limits of Non-Line-of-Sight Optical Communications
2015-05-01
high efficiency solar blind photo detectors. In this project, we address the main challenges towards optimizing the UV communication system...LEDs), solar blind filters, and high efficiency solar blind photo detectors. In this project, we address the main challenges towards optimizing the UV...solar blind filters, and high efficiency solar blind photo detectors. In this project, we address the main challenges towards optimizing the UV
NASA Astrophysics Data System (ADS)
Toosi, Siavash; Larsson, Johan
2017-11-01
The accuracy of an LES depends directly on the accuracy of the resolved part of the turbulence. The continuing increase in computational power enables the application of LES to increasingly complex flow problems for which the LES community lacks the experience of knowing what the ``optimal'' or even an ``acceptable'' grid (or equivalently filter-width distribution) is. The goal of this work is to introduce a systematic approach to finding the ``optimal'' grid/filter-width distribution and their ``optimal'' anisotropy. The method is tested first on the turbulent channel flow, mainly to see if it is able to predict the right anisotropy of the filter/grid, and then on the more complicated case of flow over a backward-facing step, to test its ability to predict the right distribution and anisotropy of the filter/grid simultaneously, hence leading to a converged solution. This work has been supported by the Naval Air Warfare Center Aircraft Division at Pax River, MD, under contract N00421132M021. Computing time has been provided by the University of Maryland supercomputing resources (http://hpcc.umd.edu).
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
Design and optimization of cascaded DCG based holographic elements for spectrum-splitting PV systems
NASA Astrophysics Data System (ADS)
Wu, Yuechen; Chrysler, Benjamin; Pelaez, Silvana Ayala; Kostuk, Raymond K.
2017-09-01
In this work, the technique of designing and optimizing broadband volume transmission holograms using dichromate gelatin (DCG) is summarized for solar spectrum-splitting application. Spectrum splitting photovoltaic system uses a series of single bandgap PV cells that have different spectral conversion efficiency properties to more fully utilize the solar spectrum. In such a system, one or more high performance optical filters are usually required to split the solar spectrum and efficiently send them to the corresponding PV cells. An ideal spectral filter should have a rectangular shape with sharp transition wavelengths. DCG is a near ideal holographic material for solar applications as it can achieve high refractive index modulation, low absorption and scattering properties and long-term stability to solar exposure after sealing. In this research, a methodology of designing and modeling a transmission DCG hologram using coupled wave analysis for different PV bandgap combinations is described. To achieve a broad diffraction bandwidth and sharp cut-off wavelength, a cascaded structure of multiple thick holograms is described. A search algorithm is also developed to optimize both single and two-layer cascaded holographic spectrum splitters for the best bandgap combinations of two- and three-junction SSPV systems illuminated under the AM1.5 solar spectrum. The power conversion efficiencies of the optimized systems under the AM1.5 solar spectrum are then calculated using the detailed balance method, and shows an improvement compared with tandem structure.
Optimization of an enhanced ceramic micro-filter for concentrating E.coli in water
NASA Astrophysics Data System (ADS)
Zhang, Yushan; Guo, Tianyi; Xu, Changqing; Hong, Lingcheng
2017-02-01
Recently lower limit of detection (LOD) is necessary for rapid bacteria detection and analysis applications in clinical practices and daily life. A critical pre-conditioning step for these applications is bacterial concentration, especially for low level of pathogens. Sample volume can be largely reduced with an efficient pre-concentration process. Some approaches such as hollow-fiber ultra-filtration and electrokinetic technique have been applied to bacterial concentration. Since none of these methods can provide a concentrating method with a stable recovery efficiency, bacterial concentration still remains challenging Ceramic micro- filter can be used to concentrate the bacteria but the cross flow system keeps the bacteria in suspension. Similar harvesting bacteria using ultra-filtration showed an average recovery efficiency of 43% [1] and other studies achieved recovery rates greater than 50% [2]. In this study, an enhanced ceramic micro-filter with 0.14 μm pore size was proposed and demonstrated to optimize the concentration of E.coli. A high recovery rate (mean value >90%) and a high volumetric concentration ratio (>100) were achieved. Known quantities (104 to 106 CFU/ml) of E.coli cells were spiked to different amounts of phosphate buffered saline (0.1 to 1 L), and then concentrated to a final retentate of 5 ml to 10 ml. An average recovery efficiency of 95.3% with a standard deviation of 5.6% was achieved when the volumetric con- centration ratio was 10. No significant recovery rate loss was indicated when the volumetric concentration ratio reached up to 100. The effects of multiple parameters on E.coli recovery rate were also studied. The obtained results indicated that the optimized ceramic micro- filtration system can successfully concentrate E.coli cells in water with an average recovery rate of 90.8%.
Deso, Steven E.; Idakoji, Ibrahim A.; Muelly, Michael C.; Kuo, William T.
2016-01-01
Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board–approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters. PMID:27247483
Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T
2016-06-01
Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters.
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoration of a image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining ROADGI statistic with the method of weights optimization we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoration of a image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining ROADGI statistic with the method of weights optimization we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise.
Very large scale characterization of graphene mechanical devices using a colorimetry technique.
Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer
2017-06-08
We use a scalable optical technique to characterize more than 21 000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes we used a model for selecting the optimal color filter. The method enables parallel and automatized image processing for yield statistics. We find the survival probability to be correlated with a structural mechanics scaling parameter given by D 4 /g 3 . Moreover, we extract a median adhesion energy of Γ = 0.9 J m -2 between the membrane and the native SiO 2 at the bottom of the cavities.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Comparison of cryogenic low-pass filters.
Thalmann, M; Pernau, H-F; Strunk, C; Scheer, E; Pietsch, T
2017-11-01
Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are routinely available now in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristic for cryogenic low-level and quantum measurement applications at temperatures down to few millikelvin.
Comparison of cryogenic low-pass filters
NASA Astrophysics Data System (ADS)
Thalmann, M.; Pernau, H.-F.; Strunk, C.; Scheer, E.; Pietsch, T.
2017-11-01
Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are routinely available now in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristic for cryogenic low-level and quantum measurement applications at temperatures down to few millikelvin.
Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions
NASA Technical Reports Server (NTRS)
Cohn, S.; Isaacson, E.; Ghil, M.
1981-01-01
The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy, KB filter, and optimal interpolation (OI) filters are examined for effectiveness in performance as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
Ares-I Bending Filter Design using a Constrained Optimization Approach
NASA Technical Reports Server (NTRS)
Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth
2008-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance command, step response specifications are introduced as constraints in the optimization problem. Imposing these constrains minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.
Optimal design of FIR triplet halfband filter bank and application in image coding.
Kha, H H; Tuan, H D; Nguyen, T Q
2011-02-01
This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.
Optimal design of active EMC filters
NASA Astrophysics Data System (ADS)
Chand, B.; Kut, T.; Dickmann, S.
2013-07-01
A recent trend in automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmental friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.
Optimization of internet content filtering-Combined with KNN and OCAT algorithms
NASA Astrophysics Data System (ADS)
Guo, Tianze; Wu, Lingjing; Liu, Jiaming
2018-04-01
The face of the status quo that rampant illegal content in the Internet, the result of traditional way to filter information, keyword recognition and manual screening, is getting worse. Based on this, this paper uses OCAT algorithm nested by KNN classification algorithm to construct a corpus training library that can dynamically learn and update, which can be improved on the filter corpus for constantly updated illegal content of the network, including text and pictures, and thus can better filter and investigate illegal content and its source. After that, the research direction will focus on the simplified updating of recognition and comparison algorithms and the optimization of the corpus learning ability in order to improve the efficiency of filtering, save time and resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, E.L.; Calvert, J.M.; Koloski, T.
1997-02-01
We report on the results of a project using surface characterization and novel surface-modification techniques to address the issues of developing a minimally fouling ceramic membrane filter. We have studied the physical characteristics of a synthetic bilge water mixture, examined the surfaces of the ceramic filters for evidence of fouling, and identified several surface modifications that, under laboratory conditions, work well in prevention of foulants. These surfaces include hydrophobic as well as polar coatings. For the bilge water, it was discovered that detergent, at certain concentrations, may be useful in separating and coalescing oil droplets from the bilge water. Basedmore » on the results of the studies, several strategies for optimizing the removal of oil from water are suggested.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spataru, Sergiu; Hacke, Peter; Sera, Dezso
A method for detecting micro-cracks in solar cells using two dimensional matched filters was developed, derived from the electroluminescence intensity profile of typical micro-cracks. We describe the image processing steps to obtain a binary map with the location of the micro-cracks. Finally, we show how to automatically estimate the total length of each micro-crack from these maps, and propose a method to identify severe types of micro-cracks, such as parallel, dendritic, and cracks with multiple orientations. With an optimized threshold parameter, the technique detects over 90 % of cracks larger than 3 cm in length. The method shows great potentialmore » for quantifying micro-crack damage after manufacturing or module transportation for the determination of a module quality criterion for cell cracking in photovoltaic modules.« less
Prototype color field sequential television lens assembly
NASA Technical Reports Server (NTRS)
1974-01-01
The design, development, and evaluation of a prototype modular lens assembly with a self-contained field sequential color wheel is presented. The design of a color wheel of maximum efficiency, the selection of spectral filters, and the design of a quiet, efficient wheel drive system are included. Design tradeoffs considered for each aspect of the modular assembly are discussed. Emphasis is placed on achieving a design which can be attached directly to an unmodified camera, thus permitting use of the assembly in evaluating various candidate camera and sensor designs. A technique is described which permits maintaining high optical efficiency with an unmodified camera. A motor synchronization system is developed which requires only the vertical synchronization signal as a reference frequency input. Equations and tradeoff curves are developed to permit optimizing the filter wheel aperture shapes for a variety of different design conditions.
NASA Astrophysics Data System (ADS)
Dikmese, Sener; Srinivasan, Sudharsan; Shaat, Musbah; Bader, Faouzi; Renfors, Markku
2014-12-01
Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in cognitive radio context using very practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with energy detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform to avoid the disadvantage of limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity with power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11-g WLAN scenario.
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Enlightened by ASTFA method, adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In ASNBD method, an optimized filter must be established at first. The parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed aiming at solving the problems existed in ASTFA. Gauss-Newton type method, which is applied to solve the optimization problem in ASTFA, is irreplaceable and very sensitive to initial values. However, more appropriate optimization method such as genetic algorithm (GA) can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning by constraining the components to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulation and experimental signals. The results indicate that ASNBD method is superior to the other two methods in generating more accurate components from noise signal, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing fault.
NASA Astrophysics Data System (ADS)
Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang
2017-10-01
The purpose of the present work is to study the buckling problem with plate/shell topology optimization of orthotropic material. A model of buckling topology optimization is established based on the independent, continuous, and mapping method, which considers structural mass as objective and buckling critical loads as constraints. Firstly, composite exponential function (CEF) and power function (PF) as filter functions are introduced to recognize the element mass, the element stiffness matrix, and the element geometric stiffness matrix. The filter functions of the orthotropic material stiffness are deduced. Then these filter functions are put into buckling topology optimization of a differential equation to analyze the design sensitivity. Furthermore, the buckling constraints are approximately expressed as explicit functions with respect to the design variables based on the first-order Taylor expansion. The objective function is standardized based on the second-order Taylor expansion. Therefore, the optimization model is translated into a quadratic program. Finally, the dual sequence quadratic programming (DSQP) algorithm and the global convergence method of moving asymptotes algorithm with two different filter functions (CEF and PF) are applied to solve the optimal model. Three numerical results show that DSQP&CEF has the best performance in the view of structural mass and discretion.
Quantum-behaved particle swarm optimization for the synthesis of fibre Bragg gratings filter
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Sun, Yunxu; Yao, Yong; Tian, Jiajun; Cong, Shan
2011-12-01
A method based on the quantum-behaved particle swarm optimization algorithm is presented to design a bandpass filter of the fibre Bragg gratings. In contrast to the other optimization algorithms such as the genetic algorithm and particle swarm optimization algorithm, this method is simpler and easier to implement. To demonstrate the effectiveness of the QPSO algorithm, we consider a bandpass filter. With the parameters the half the bandwidth of the filter 0.05 nm, the Bragg wavelength 1550 nm, the grating length with 2cm is divided into 40 uniform sections and its index modulation is what should be optimized and whole feasible solution space is searched for the index modulation. After the index modulation profile is known for all the sections, the transfer matrix method is used to verify the final optimal index modulation by calculating the refection spectrum. The results show the group delay is less than 12ps in band and the calculated dispersion is relatively flat inside the passband. It is further found that the reflective spectrum has sidelobes around -30dB and the worst in-band dispersion value is less than 200ps/nm . In addition, for this design, it takes approximately several minutes to find the acceptable index modulation values with a notebook computer.
On the Performance of the Martin Digital Filter for High- and Low-pass Applications
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1979-01-01
A nonrecursive numerical filter is described in which the weighting sequence is optimized by minimizing the excursion from the ideal rectangular filter in a least squares sense over the entire domain of normalized frequency. Additional corrections to the weights in order to reduce overshoot oscillations (Gibbs phenomenon) and to insure unity gain at zero frequency for the low pass filter are incorporated. The filter is characterized by a zero phase shift for all frequencies (due to a symmetric weighting sequence), a finite memory and stability, and it may readily be transformed to a high pass filter. Equations for the filter weights and the frequency response function are presented, and applications to high and low pass filtering are examined. A discussion of optimization of high pass filter parameters for a rather stringent response requirement is given in an application to the removal of aircraft low frequency oscillations superimposed on remotely sensed ocean surface profiles. Several frequency response functions are displayed, both in normalized frequency space and in period space. A comparison of the performance of the Martin filter with some other commonly used low pass digital filters is provided in an application to oceanographic data.
Vila, Marlene; Llompart, Maria; Garcia-Jares, Carmen; Homem, Vera; Dagnac, Thierry
2018-06-06
A methodology based on solid-phase microextraction (SPME) followed by gas chromatography-tandem mass spectrometry (GC-MS/MS) has been developed for the simultaneous analysis of eleven multiclass ultraviolet (UV) filters in beach sand. To the best of our knowledge, this is the first time that this extraction technique is applied to the analysis of UV filters in sand samples, and in other kind of environmental solid samples. Main extraction parameters such as the fibre coating, the amount of sample, the addition of salt, the volume of water added to the sand, and the temperature were optimized. An experimental design approach was implemented in order to find out the most favourable conditions. The final conditions consisted of adding 1 mL of water to 1 g of sample followed by the headspace SPME for 20 min at 100 °C, using PDMS/DVB as fibre coating. The SPME-GC-MS/MS method was validated in terms of linearity, accuracy, limits of detection and quantification, and precision. Recovery studies were also performed at three concentration levels in real Atlantic and Mediterranean sand samples. The recoveries were generally above 85% and relative standard deviations below 11%. The limits of detection were in the pg g -1 level. The validated methodology was successfully applied to the analysis of real sand samples collected from Atlantic Ocean beaches in the Northwest coast of Spain and Portugal, Canary Islands (Spain), and from Mediterranean Sea beaches in Mallorca Island (Spain). The most frequently found UV filters were ethylhexyl salicylate (EHS), homosalate (HMS), 4-methylbenzylidene camphor (4MBC), 2-ethylhexyl methoxycinnamate (2EHMC) and octocrylene (OCR), with concentrations up to 670 ng g -1 . Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young
2016-02-15
Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuirmore » probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H{sup −} populations for various filter field strengths and pressures. Enhanced H{sup −} population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H{sup −} sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.« less
Ultrasonic tracking of shear waves using a particle filter.
Ingle, Atul N; Ma, Chi; Varghese, Tomy
2015-11-01
This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques.
Di, Huige; Zhang, Zhanfei; Hua, Hangbo; Zhang, Jiaqi; Hua, Dengxin; Wang, Yufeng; He, Tingyao
2017-03-06
Accurate aerosol optical properties could be obtained via the high spectral resolution lidar (HSRL) technique, which employs a narrow spectral filter to suppress the Rayleigh or Mie scattering in lidar return signals. The ability of the filter to suppress Rayleigh or Mie scattering is critical for HSRL. Meanwhile, it is impossible to increase the rejection of the filter without limitation. How to optimize the spectral discriminator and select the appropriate suppression rate of the signal is important to us. The HSRL technology was thoroughly studied based on error propagation. Error analyses and sensitivity studies were carried out on the transmittance characteristics of the spectral discriminator. Moreover, ratwo different spectroscopic methods for HSRL were described and compared: one is to suppress the Mie scattering; the other is to suppress the Rayleigh scattering. The corresponding HSRLs were simulated and analyzed. The results show that excessive suppression of Rayleigh scattering or Mie scattering in a high-spectral channel is not necessary if the transmittance of the spectral filter for molecular and aerosol scattering signals can be well characterized. When the ratio of transmittance of the spectral filter for aerosol scattering and molecular scattering is less than 0.1 or greater than 10, the detection error does not change much with its value. This conclusion implies that we have more choices for the high-spectral discriminator in HSRL. Moreover, the detection errors of HSRL regarding the two spectroscopic methods vary greatly with the atmospheric backscattering ratio. To reduce the detection error, it is necessary to choose a reasonable spectroscopic method. The detection method of suppressing the Rayleigh signal and extracting the Mie signal can achieve less error in a clear atmosphere, while the method of suppressing the Mie signal and extracting the Rayleigh signal can achieve less error in a polluted atmosphere.
Design Optimization of Vena Cava Filters: An application to dual filtration devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, M A; Wang, S L; Diachin, D P
Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped modelmore » thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.« less
Regenerative particulate filter development
NASA Technical Reports Server (NTRS)
Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.
1972-01-01
Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.
Deep neural networks to enable real-time multimessenger astrophysics
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-02-01
Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, which are designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering using whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques, achieves similar performance compared to matched filtering, while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggests that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
Correia, Carlos M; Teixeira, Joel
2014-12-01
Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate and compare to classical estimates using least-squares filters the reconstructed wave-front, measurement noise, and aliasing propagation coefficients as a function of the system order. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugated AOs system the aliasing propagation coefficient is roughly 60% of the least-squares filters, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.
Quasi-Optical Filter Development and Characterization for Far-IR Astronomical Applications
NASA Astrophysics Data System (ADS)
Stewart, Kenneth
Mid-infrared through microwave filters, beamsplitters, and polarizers are a crucial supporting technology for NASA’s space astronomy, astrophysics, and earth science programs. Building upon our successful production of mid-infrared, far-infrared, millimeter, and microwave bandpass and lowpass filters, we propose to investigate aspects of their optical performance that are still not well understood and have yet to be addressed by other researchers. Specifically, we wish to understand and mitigate unexplained high-frequency leaks found to degrade or invalidate spectroscopic data from flight instruments such as Herschel/PACS, SHARC II, GISMO, and ACT, but not predicted by numerical simulations. A complete understanding will improve accuracy and sensitivity, and will enable the mass and volume of cryogenic baffling to be appropriately matched to the physically achievable quasioptical filter response, thereby reducing the cost of future far-infrared missions. The development and experimental validation of this modeling capability will enable optimization of system performance as well as reduce risks to the schedule and end science products for all future space and suborbital missions that use quasioptical filters. The outcome of this work will be critical in achieving the exacting background-limited bolometric detector performance specifications of future far-infrared and submillimeter space instruments. This program will allow us to apply our unique in-house numerical simulation software and develop enhanced layer alignment, filter fabrication, and testing techniques for the first time to address these issues: (1) enhance filter performance, (2) simplify the optical architecture of future instruments by improving our understanding of high-frequency leaks, and (3) produce filters which minimize or eliminate these important effects. With our state-ofthe-art modeling, fabrication, and testing facilities and expertise, established in previous projects, we are uniquely positioned to tackle this development.
New spectral imaging techniques for blood oximetry in the retina
NASA Astrophysics Data System (ADS)
Alabboud, Ied; Muyo, Gonzalo; Gorman, Alistair; Mordant, David; McNaught, Andrew; Petres, Clement; Petillot, Yvan R.; Harvey, Andrew R.
2007-07-01
Hyperspectral imaging of the retina presents a unique opportunity for direct and quantitative mapping of retinal biochemistry - particularly of the vasculature where blood oximetry is enabled by the strong variation of absorption spectra with oxygenation. This is particularly pertinent both to research and to clinical investigation and diagnosis of retinal diseases such as diabetes, glaucoma and age-related macular degeneration. The optimal exploitation of hyperspectral imaging however, presents a set of challenging problems, including; the poorly characterised and controlled optical environment of structures within the retina to be imaged; the erratic motion of the eye ball; and the compounding effects of the optical sensitivity of the retina and the low numerical aperture of the eye. We have developed two spectral imaging techniques to address these issues. We describe first a system in which a liquid crystal tuneable filter is integrated into the illumination system of a conventional fundus camera to enable time-sequential, random access recording of narrow-band spectral images. Image processing techniques are described to eradicate the artefacts that may be introduced by time-sequential imaging. In addition we describe a unique snapshot spectral imaging technique dubbed IRIS that employs polarising interferometry and Wollaston prism beam splitters to simultaneously replicate and spectrally filter images of the retina into multiple spectral bands onto a single detector array. Results of early clinical trials acquired with these two techniques together with a physical model which enables oximetry map are reported.
Blondeel, Evelyne; Depuydt, Veerle; Cornelis, Jasper; Chys, Michael; Verliefde, Arne; Van Hulle, Stijin Wim Henk
2015-01-01
Pilot-scale optimisation of different possible physical-chemical water treatment techniques was performed on the wastewater originating from three different recovery and recycling companies in order to select a (combination of) technique(s) for further full-scale implementation. This implementation is necessary to reduce the concentration of both common pollutants (such as COD, nutrients and suspended solids) and potentially toxic metals, polyaromatic hydrocarbons and poly-chlorinated biphenyls frequently below the discharge limits. The pilot-scale tests (at 250 L h(-1) scale) demonstrate that sand anthracite filtration or coagulation/flocculation are interesting as first treatment techniques with removal efficiencies of about 19% to 66% (sand anthracite filtration), respectively 18% to 60% (coagulation/flocculation) for the above mentioned pollutants (metals, polyaromatic hydrocarbons and poly chlorinated biphenyls). If a second treatment step is required, the implementation of an activated carbon filter is recommended (about 46% to 86% additional removal is obtained).
Design of almost symmetric orthogonal wavelet filter bank via direct optimization.
Murugesan, Selvaraaju; Tay, David B H
2012-05-01
It is a well-known fact that (compact-support) dyadic wavelets [based on the two channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetric property. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FB. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function, which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and symmetric wavelet function.
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining (iii), vector quantization, and (iv) IPSO based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency Cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extorted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process and IP-HMM helps in doing the recognition. At this point the creativeness will be done in terms of one of the genetic operation crossovers. The proposed speech recognition technique offers 97.14% accuracy.
Proceedings of the Conference on Moments and Signal
NASA Astrophysics Data System (ADS)
Purdue, P.; Solomon, H.
1992-09-01
The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms belong in one of the following three categories, depending where the nonlinear transformation is being applied on the data: (1) the Bussgang algorithms, where the nonlinearity is in the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is in the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results that demonstrate the polyspectra-based algorithms exhibit faster convergence rate than Bussgang algorithms. However, this improved performance is at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are more suited for channels that have nonlinear distortions.
Real-time colouring and filtering with graphics shaders
NASA Astrophysics Data System (ADS)
Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.
2017-11-01
Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
Implementation issues of the nearfield equivalent source imaging microphone array
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen
2011-01-01
This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant saving on computations can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying broadband and non-stationary sources produced by the sources.
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA , which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
Improvement of the GERDA Ge Detectors Energy Resolution by an Optimized Digital Signal Processing
NASA Astrophysics Data System (ADS)
Benato, G.; D'Andrea, V.; Cattadori, C.; Riboldi, S.
GERDA is a new generation experiment searching for neutrinoless double beta decay of 76Ge, operating at INFN Gran Sasso Laboratories (LNGS) since 2010. Coaxial and Broad Energy Germanium (BEGe) Detectors have been operated in liquid argon (LAr) in GERDA Phase I. In the framework of the second GERDA experimental phase, both the contacting technique, the connection to and the location of the front end readout devices are novel compared to those previously adopted, and several tests have been performed. In this work, starting from considerations on the energy scale stability of the GERDA Phase I calibrations and physics data sets, an optimized pulse filtering method has been developed and applied to the Phase II pilot tests data sets, and to few GERDA Phase I data sets. In this contribution the detector performances in term of energy resolution and time stability are here presented. The improvement of the energy resolution, compared to standard Gaussian shaping adopted for Phase I data analysis, is discussed and related to the optimized noise filtering capability. The result is an energy resolution better than 0.1% at 2.6 MeV for the BEGe detectors operated in the Phase II pilot tests and an improvement of the energy resolution in LAr of about 8% achieved on the GERDA Phase I calibration runs, compared to previous analysis algorithms.
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew
2005-04-01
Contrast enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before/after the administration of contrast agents, may provide physicians critical physiologic and morphologic information of breast lesions to determine the malignancy of lesions. This paper proposes to combine the kinetic analysis (KA) of contrast agent uptake/washout process and the dual-energy (DE) contrast enhancement together to formulate a hybrid contrast enhanced breast-imaging framework. The quantitative characteristics of materials and imaging components in the x-ray imaging chain, including x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-noise-ration (CNR) of iodinated lesions and mean absorbed glandular dose were estimated mathematically. The x-ray techniques optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast density, and detectable contrast agent concentration levels in terms of detection efficiency (CNR2/dose). A phantom study was performed on a modified Selenia full field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.
Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin
2018-02-22
The miniaturization of spectrometer can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation by utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.
Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin
2018-01-01
The miniaturization of spectrometer can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation by utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer. PMID:29470406
Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models
NASA Astrophysics Data System (ADS)
Arroabarren, Ixone; Carlosena, Alfonso
2004-12-01
The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.
Quantitative filter forensics for indoor particle sampling.
Haaland, D; Siegel, J A
2017-03-01
Filter forensics is a promising indoor air investigation technique involving the analysis of dust which has collected on filters in central forced-air heating, ventilation, and air conditioning (HVAC) or portable systems to determine the presence of indoor particle-bound contaminants. In this study, we summarize past filter forensics research to explore what it reveals about the sampling technique and the indoor environment. There are 60 investigations in the literature that have used this sampling technique for a variety of biotic and abiotic contaminants. Many studies identified differences between contaminant concentrations in different buildings using this technique. Based on this literature review, we identified a lack of quantification as a gap in the past literature. Accordingly, we propose an approach to quantitatively link contaminants extracted from HVAC filter dust to time-averaged integrated air concentrations. This quantitative filter forensics approach has great potential to measure indoor air concentrations of a wide variety of particle-bound contaminants. Future studies directly comparing quantitative filter forensics to alternative sampling techniques are required to fully assess this approach, but analysis of past research suggests the enormous possibility of this approach. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Classifying EEG for Brain-Computer Interface: Learning Optimal Filters for Dynamical System Features
Song, Le; Epps, Julien
2007-01-01
Classification of multichannel EEG recordings during motor imagination has been exploited successfully for brain-computer interfaces (BCI). In this paper, we consider EEG signals as the outputs of a networked dynamical system (the cortex), and exploit synchronization features from the dynamical system for classification. Herein, we also propose a new framework for learning optimal filters automatically from the data, by employing a Fisher ratio criterion. Experimental evaluations comparing the proposed dynamical system features with the CSP and the AR features reveal their competitive performance during classification. Results also show the benefits of employing the spatial and the temporal filters optimized using the proposed learning approach. PMID:18364986
Guenter Tulip Filter Retrieval Experience: Predictors of Successful Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turba, Ulku Cenk, E-mail: uct5d@virginia.edu; Arslan, Bulent, E-mail: ba6e@virginia.edu; Meuse, Michael, E-mail: mm5tz@virginia.edu
We report our experience with Guenter Tulip filter placement indications, retrievals, and procedural problems, with emphasis on alternative retrieval techniques. We have identified 92 consecutive patients in whom a Guenter Tulip filter was placed and filter removal attempted. We recorded patient demographic information, filter placement and retrieval indications, procedures, standard and nonstandard filter retrieval techniques, complications, and clinical outcomes. The mean time to retrieval for those who experienced filter strut penetration was statistically significant [F(1,90) = 8.55, p = 0.004]. Filter strut(s) IVC penetration and successful retrieval were found to be statistically significant (p = 0.043). The filter hook-IVC relationshipmore » correlated with successful retrieval. A modified guidewire loop technique was applied in 8 of 10 cases where the hook appeared to penetrate the IVC wall and could not be engaged with a loop snare catheter, providing additional technical success in 6 of 8 (75%). Therefore, the total filter retrieval success increased from 88 to 95%. In conclusion, the Guenter Tulip filter has high successful retrieval rates with low rates of complication. Additional maneuvers such as a guidewire loop method can be used to improve retrieval success rates when the filter hook is endothelialized.« less
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk
2010-01-01
It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than −40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of −20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe ‘ripples’ when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to the change in targets or the media resulted from motion or different acoustic properties even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 °C to 71 °C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging. PMID:20224162
Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk
2010-04-07
It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to the change in targets or the media resulted from motion or different acoustic properties even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm x 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 degrees C to 71 degrees C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging.
Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P
2018-04-01
Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach produced sometimes unstable HRFs. The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.
Rapid enumeration of viable bacteria by image analysis
NASA Technical Reports Server (NTRS)
Singh, A.; Pyle, B. H.; McFeters, G. A.
1989-01-01
A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
Saito, Masatoshi
2010-08-01
This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating to the acquisition of electron density information, which is essential for treatment planning in radiotherapy. For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, "Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method," Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.
Design and experimentally measure a high performance metamaterial filter
NASA Astrophysics Data System (ADS)
Xu, Ya-wen; Xu, Jing-cheng
2018-03-01
Metamaterial filter is a kind of expecting optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance matching condition between the metamaterial filter and the free space leads to the transmission band. Measured results show that the proposed metamaterial filter achieves high performance transmission on TM and TE polarization directions. Moreover, the high transmission rate is also can be obtained when the incident angle reaches to 45°. Further measured results show that the transmission band can be expanded through optimizing structural parameters. The central frequency of the transmission band is also can be adjusted through optimizing structural parameters. The physical mechanism behind the central frequency shifted is solved through establishing an equivalent resonant circuit model.
Wavelet Transform Based Filter to Remove the Notches from Signal Under Harmonic Polluted Environment
NASA Astrophysics Data System (ADS)
Das, Sukanta; Ranjan, Vikash
2017-12-01
The work proposes to annihilate the notches present in the synchronizing signal required for converter operation appearing due to switching of semiconductor devices connected to the system in the harmonic polluted environment. The disturbances in the signal are suppressed by wavelet based novel filtering technique. In the proposed technique, the notches in the signal are determined and eliminated by the wavelet based multi-rate filter using `Daubechies4' (db4) as mother wavelet. The computational complexity of the adapted technique is very less as compared to any other conventional notch filtering techniques. The proposed technique is developed in MATLAB/Simulink and finally validated with dSPACE-1103 interface. The recovered signal, thus obtained, is almost free of the notches.
NASA Astrophysics Data System (ADS)
Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze
2014-11-01
In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentration to observation. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of different strengths of different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for the North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
Cruz-Monteagudo, Maykel; Borges, Fernanda; Cordeiro, M Natália D S; Cagide Fajin, J Luis; Morell, Carlos; Ruiz, Reinaldo Molina; Cañizares-Carmenate, Yudith; Dominguez, Elena Rosa
2008-01-01
Up to now, very few applications of multiobjective optimization (MOOP) techniques to quantitative structure-activity relationship (QSAR) studies have been reported in the literature. However, none of them report the optimization of objectives related directly to the final pharmaceutical profile of a drug. In this paper, a MOOP method based on Derringer's desirability function that allows conducting global QSAR studies, simultaneously considering the potency, bioavailability, and safety of a set of drug candidates, is introduced. The results of the desirability-based MOOP (the levels of the predictor variables concurrently producing the best possible compromise between the properties determining an optimal drug candidate) are used for the implementation of a ranking method that is also based on the application of desirability functions. This method allows ranking drug candidates with unknown pharmaceutical properties from combinatorial libraries according to the degree of similarity with the previously determined optimal candidate. Application of this method will make it possible to filter the most promising drug candidates of a library (the best-ranked candidates), which should have the best pharmaceutical profile (the best compromise between potency, safety and bioavailability). In addition, a validation method of the ranking process, as well as a quantitative measure of the quality of a ranking, the ranking quality index (Psi), is proposed. The usefulness of the desirability-based methods of MOOP and ranking is demonstrated by its application to a library of 95 fluoroquinolones, reporting their gram-negative antibacterial activity and mammalian cell cytotoxicity. Finally, the combined use of the desirability-based methods of MOOP and ranking proposed here seems to be a valuable tool for rational drug discovery and development.
Iterative metal artifact reduction: evaluation and optimization of technique.
Subhas, Naveen; Primak, Andrew N; Obuchowski, Nancy A; Gupta, Amit; Polster, Joshua M; Krauss, Andreas; Iannotti, Joseph P
2014-12-01
Iterative metal artifact reduction (IMAR) is a sinogram inpainting technique that incorporates high-frequency data from standard weighted filtered back projection (WFBP) reconstructions to reduce metal artifact on computed tomography (CT). This study was designed to compare the image quality of IMAR and WFBP in total shoulder arthroplasties (TSA); determine the optimal amount of WFBP high-frequency data needed for IMAR; and compare image quality of the standard 3D technique with that of a faster 2D technique. Eight patients with nine TSA underwent CT with standardized parameters: 140 kVp, 300 mAs, 0.6 mm collimation and slice thickness, and B30 kernel. WFBP, three 3D IMAR algorithms with different amounts of WFBP high-frequency data (IMARlo, lowest; IMARmod, moderate; IMARhi, highest), and one 2D IMAR algorithm were reconstructed. Differences in attenuation near hardware and away from hardware were measured and compared using repeated measures ANOVA. Five readers independently graded image quality; scores were compared using Friedman's test. Attenuation differences were smaller with all 3D IMAR techniques than with WFBP (p < 0.0063). With increasing high-frequency data, the attenuation difference increased slightly (differences not statistically significant). All readers ranked IMARmod and IMARhi more favorably than WFBP (p < 0.05), with IMARmod ranked highest for most structures. The attenuation difference was slightly higher with 2D than with 3D IMAR, with no significant reader preference for 3D over 2D. IMAR significantly decreases metal artifact compared to WFBP both objectively and subjectively in TSA. The incorporation of a moderate amount of WFBP high-frequency data and use of a 2D reconstruction technique optimize image quality and allow for relatively short reconstruction times.
Mazzà, Claudia; Donati, Marco; McCamley, John; Picerno, Pietro; Cappozzo, Aurelio
2012-01-01
The aim of this study was the fine tuning of a Kalman filter with the intent to provide optimal estimates of lower trunk orientation in the frontal and sagittal planes during treadmill walking at different speeds using measured linear acceleration and angular velocity components represented in a local system of reference. Data were simultaneously collected using both an inertial measurement unit (IMU) and a stereophotogrammetric system from three healthy subjects walking on a treadmill at natural, slow and fast speeds. These data were used to estimate the parameters of the Kalman filter that minimized the difference between the trunk orientations provided by the filter and those obtained through stereophotogrammetry. The optimized parameters were then used to process the data collected from a further 15 healthy subjects of both genders and different anthropometry performing the same walking tasks with the aim of determining the robustness of the filter set up. The filter proved to be very robust. The root mean square values of the differences between the angles estimated through the IMU and through stereophotogrammetry were lower than 1.0° and the correlation coefficients between the corresponding curves were greater than 0.91. The proposed filter design can be used to reliably estimate trunk lateral and frontal bending during walking from inertial sensor data. Further studies are needed to determine the filter parameters that are most suitable for other motor tasks. Copyright © 2011. Published by Elsevier B.V.
A Comparison of Retrievability: Celect versus Option Filter.
Ryu, Robert K; Desai, Kush; Karp, Jennifer; Gupta, Ramona; Evans, Alan Emerson; Rajeswaran, Shankar; Salem, Riad; Lewandowski, Robert J
2015-06-01
To compare the retrievability of 2 potentially retrievable inferior vena cava filter devices. A retrospective, institutional review board-approved study of Celect (Cook, Inc, Bloomington, Indiana) and Option (Rex Medical, Conshohocken, Pennsylvania) filters was conducted over a 33-month period at a single institution. Fluoroscopy time, significant filter tilt, use of adjunctive retrieval technique, and strut perforation in the inferior vena cava were recorded on retrieval. Fisher exact test and Mann-Whitney-Wilcoxon test were used for comparison. There were 99 Celect and 86 Option filters deployed. After an average of 2.09 months (range, 0.3-7.6 mo) and 1.94 months (range, 0.47-9.13 mo), respectively, 59% (n = 58) of patients with Celect filters and 74.7% (n = 65) of patients with Option filters presented for filter retrieval. Retrieval failure rates were 3.4% for Celect filters versus 7.7% for Option filters (P = .45). Median fluoroscopy retrieval times were 4.25 minutes for Celect filters versus 6 minutes for Option filters (P = .006). Adjunctive retrieval techniques were used in 5.4% of Celect filter retrievals versus 18.3% of Option filter retrievals (P = .045). The incidence of significant tilting was 8.9% for Celect filters versus 16.7% for Option filters (P = .27). The incidence of strut perforation was 43% for Celect filters versus 0% for Option filters (P < .0001). Retrieval rates for the Celect and Option filters were not significantly different. However, retrieval of the Option filter required a significantly increased amount of fluoroscopy time compared with the Celect filter, and there was a significantly greater usage of adjunctive retrieval techniques for the Option filter. The Celect filter had a significantly higher rate of strut perforation. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.
Recursive Implementations of the Consider Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; DSouza, Chris
2012-01-01
One method to account for parameters errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
Stavropoulos, S William; Ge, Benjamin H; Mondschein, Jeffrey I; Shlansky-Goldberg, Richard D; Sudheendra, Deepak; Trerotola, Scott O
2015-06-01
To evaluate the use of endobronchial forceps to retrieve tip-embedded inferior vena cava (IVC) filters. This institutional review board-approved, HIPAA-compliant retrospective study included 114 patients who presented with tip-embedded IVC filters for removal from January 2005 to April 2014. The included patients consisted of 77 women and 37 men with a mean age of 43 years (range, 18-79 years). Filters were identified as tip embedded by using rotational venography. Rigid bronchoscopy forceps were used to dissect the tip or hook of the filter from the wall of the IVC. The filter was then removed through the sheath by using the endobronchial forceps. Statistical analysis entailed calculating percentages, ranges, and means. The endobronchial forceps technique was used to successfully retrieve 109 of 114 (96%) tip-embedded IVC filters on an intention-to-treat basis. Five failures occurred in four patients in whom the technique was attempted but failed and one patient in whom retrieval was not attempted. Filters were in place for a mean of 465 days (range, 31-2976 days). The filters in this study included 10 Recovery, 33 G2, eight G2X, 11 Eclipse, one OptEase, six Option, 13 Günther Tulip, one ALN, and 31 Celect filters. Three minor complications and one major complication occurred, with no permanent sequelae. The endobronchial forceps technique can be safely used to remove tip-embedded IVC filters. © RSNA, 2014.
Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit.
Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping
2006-05-29
Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.
Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit
NASA Astrophysics Data System (ADS)
Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping
2006-05-01
Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.
Analytically solvable chaotic oscillator based on a first-order filter.
Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N
2016-02-01
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.
Analytically solvable chaotic oscillator based on a first-order filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.
2016-02-15
A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform formore » any stable infinite-impulse response filter is chaotic.« less
An exact algorithm for optimal MAE stack filter design.
Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior
2007-02-01
We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
Investigating the Use of the Intel Xeon Phi for Event Reconstruction
NASA Astrophysics Data System (ADS)
Sherman, Keegan; Gilfoyle, Gerard
2014-09-01
The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. Work supported by the University of Richmond and the US Department of Energy.
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-02-01
Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in loss of sensitivity. We argue that our method provides an increase of between a few and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data
Luo, Gang; Frey, Lewis J.
2017-01-01
Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by various analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse. Thus, it would be beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation. We show that our optimized execution method of pivoting using these techniques significantly outperforms the current basic execution method of pivoting. Our techniques can be used to build a data extraction tool to simplify the specification of and improve the efficiency of extracting data from the EAV tables in electronic medical records and clinical study data management systems. PMID:25608318
NASA Astrophysics Data System (ADS)
Huh, Jangyong; Ji, Yunseo; Lee, Rena
2018-05-01
An X-ray control algorithm to modulate the X-ray intensity distribution over the FOV (field of view) has been developed by using numerical analysis and MCNP5, a particle transport simulation code on the basis of the Monte Carlo method. X-rays, which are widely used in medical diagnostic imaging, should be controlled in order to maximize the performance of the X-ray imaging system. However, transporting X-rays, like a liquid or a gas is conveyed through a physical form such as pipes, is not possible. In the present study, an X-ray control algorithm and technique to uniformize the Xray intensity projected on the image sensor were developed using a flattening filter and a collimator in order to alleviate the anisotropy of the distribution of X-rays due to intrinsic features of the X-ray generator. The proposed method, which is combined with MCNP5 modeling and numerical analysis, aimed to optimize a flattening filter and a collimator for a uniform distribution of X-rays. Their size and shape were estimated from the method. The simulation and the experimental results both showed that the method yielded an intensity distribution over an X-ray field of 6×4 cm2 at SID (source to image-receptor distance) of 5 cm with a uniformity of more than 90% when the flattening filter and the collimator were mounted on the system. The proposed algorithm and technique are not only confined to flattening filter development but can also be applied for other X-ray related research and development efforts.
NASA Astrophysics Data System (ADS)
Gong, W.; Meyer, F. J.
2013-12-01
It is well known that spatio-temporal the tropospheric phase signatures complicate the interpretation and detection of smaller magnitude deformation signals or unstudied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data is scarce and irregularly sampled. Limited SAR data coverage is true for many areas affected by geophysical deformation. This is either due to their low priority in mission programming, unfavorable ground coverage condition, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types will attempt to maximize the linear correlation between a-priori and the extracted atmospheric phase information. Topography-related phase components, orbit errors and the master atmospheric delays are first removed in a pre-processing step before the atmospheric filters are applied. The first adaptive filter type is using a filter kernel of Gaussian shape and is adaptively adjusting the width (defined in days) of this filter until the correlation of extracted and modeled atmospheric signal power is maximized. If atmospheric properties vary along the time series, this approach will lead to filter setting that are adapted to best reproduce atmospheric conditions at a certain observation epoch. Despite the superior performance of this first filter design, its Gaussian shape imposes non-physical relative weights onto acquisitions that ignore the known atmospheric noise in the data. Hence, in our second approach we are using atmospheric a-priori information to adaptively define the full shape of the atmospheric filter. For this process, we use a so-called normalized convolution (NC) approach that is often used in image reconstruction. Several NC designs will be presented in this paper and studied for relative performance. A cross-validation of all developed algorithms was done using both synthetic and real data. This validation showed designed filters are outperforming conventional filter methods that particularly useful for regions with limited data coverage or lack of a deformation field prior.
NASA Technical Reports Server (NTRS)
Freedman, A. P.; Steppe, J. A.
1995-01-01
The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the Earth rotation data sets available to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares use of various data sets within KEOF.
Denali, Tulip, and Option Inferior Vena Cava Filter Retrieval: A Single Center Experience.
Ramaswamy, Raja S; Jun, Emily; van Beek, Darren; Mani, Naganathan; Salter, Amber; Kim, Seung K; Akinwande, Olaguoke
2018-04-01
To compare the technical success of filter retrieval in Denali, Tulip, and Option inferior vena cava filters. A retrospective analysis of Denali, Gunther Tulip, and Option IVC filters was conducted. Retrieval failure rates, fluoroscopy time, sedation time, use of advanced retrieval techniques, and filter-related complications that led to retrieval failure were recorded. There were 107 Denali, 43 Option, and 39 Tulip filters deployed and removed with average dwell times of 93.5, 86.0, and 131 days, respectively. Retrieval failure rates were 0.9% for Denali, 11.6% for Option, and 5.1% for Tulip filters (Denali vs. Option p = 0.018; Denali vs. Tulip p = 0.159; Tulip vs. Option p = 0.045). Median fluoroscopy time for filter retrieval was 3.2 min for the Denali filter, 6.75 min for the Option filter, and 4.95 min for the Tulip filter (Denali vs. Option p < 0.01; Denali vs. Tulip p < 0.01; Tulip vs. Option p = 0.67). Advanced retrieval techniques were used in 0.9% of Denali filters, 21.1% in Option filters, and 10.8% in Tulip filters (Denali vs. Option p < 0.01; Denali vs. Tulip p < 0.01; Tulip vs. Option p < 0.01). Filter retrieval failure rates were significantly higher for the Option filter when compared to both the Denali and Tulip filters. Retrieval of the Denali filter required significantly less amount of fluoroscopy time and use of advanced retrieval techniques when compared to both the Option and Tulip filters. The findings of this study indicate easier retrieval of the Denali and Tulip IVC filters when compared to the Option filter.
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images at pre- and post-surgery operations are necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing such as denoising, multiscale image analysis, edge detection and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the current paper introduces two strategies: utilizing the efficient explicit method due to its advantages with presenting impressive software technique to effectively solve the anisotropic diffusion filter which is mathematically unstable, proposing an automatic stopping criterion, that takes into consideration just input image, as opposed to other stopping criteria, besides the quality of denoised image, easiness and time. Various medical images are examined to confirm the claim.
Nute, Jessica L; Jacobsen, Megan C; Chandler, Adam; Cody, Dianna D; Schellingerhout, Dawid
2017-01-01
The aim of this study was to develop a diagnostic framework for distinguishing calcific from hemorrhagic cerebral lesions using dual-energy computed tomography (DECT) in an anthropomorphic phantom system. An anthropomorphic phantom was designed to mimic the CT imaging characteristics of the human head. Cylindrical lesion models containing either calcium or iron, mimicking calcification or hemorrhage, respectively, were developed to exhibit matching, and therefore indistinguishable, single-energy CT (SECT) attenuation values from 40 to 100 HU. These lesion models were fabricated at 0.5, 1, and 1.5 cm in diameter and positioned in simulated cerebrum and skull base locations within the anthropomorphic phantom. All lesion sizes were modeled in the cerebrum, while only 1.5-cm lesions were modeled in the skull base. Images were acquired using a GE 750HD CT scanner and an expansive dual-energy protocol that covered variations in dose (36.7-132.6 mGy CTDIvol, n = 12), image thickness (0.625-5 mm, n = 4), and reconstruction filter (soft, standard, detail, n = 3) for a total of 144 unique technique combinations. Images representing each technique combination were reconstructed into water and calcium material density images, as well as a monoenergetic image chosen to mimic the attenuation of a 120-kVp SECT scan. A true single-energy routine brain protocol was also included for verification of lesion SECT attenuation. Points representing the 3 dual-energy reconstructions were plotted into a 3-dimensional space (water [milligram/milliliter], calcium [milligram/milliliter], monoenergetic Hounsfield unit as x, y, and z axes, respectively), and the distribution of points analyzed using 2 approaches: support vector machines and a simple geometric bisector (GB). Each analysis yielded a plane of optimal differentiation between the calcification and hemorrhage lesion model distributions. By comparing the predicted lesion composition to the known lesion composition, we identified the optimal combination of CTDIvol, image thickness, and reconstruction filter to maximize differentiation between the lesion model types. To validate these results, a new set of hemorrhage and calcification lesion models were created, scanned in a blinded fashion, and prospectively classified using the planes of differentiation derived from support vector machine and GB methods. Accuracy of differentiation improved with increasing dose (CTDIvol) and image thickness. Reconstruction filter had no effect on the accuracy of differentiation. Using an optimized protocol consisting of the maximum CTDIvol of 132.6 mGy, 5-mm-thick images, and a standard filter, hemorrhagic and calcific lesion models with equal SECT attenuation (Hounsfield unit) were differentiated with over 90% accuracy down to 70 HU for skull base lesions of 1.5 cm, and down to 100 HU, 60 HU, and 60 HU for cerebrum lesions of 0.5, 1.0, and 1.5 cm, respectively. The analytic method that yielded the best results was a simple GB plane through the 3-dimensional DECT space. In the validation study, 96% of unknown lesions were correctly classified across all lesion sizes and locations investigated. We define the optimal scan parameters and expected limitations for the accurate classification of hemorrhagic versus calcific cerebral lesions in an anthropomorphic phantom with DECT. Although our proposed DECT protocol represents an increase in dose compared with routine brain CT, this method is intended as a specialized evaluation of potential brain hemorrhage and is thus counterbalanced by increased diagnostic benefit. This work provides justification for the application of this technique in human clinical trials.
Speeding Up the Bilateral Filter: A Joint Acceleration Way.
Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng
2016-06-01
Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve the constant-time BF whose complexity is irrelevant to the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and shiftability property. Although each of the above techniques suffers from accuracy and efficiency problems, previous algorithm designers were used to take only one of them to assemble fast implementations due to the hardness of combining them together. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting edge implementation that solves these problems. Jointly employing five techniques: kernel truncation, best N-term approximation as well as previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong point to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing the efficiency at running time.
Chen, Wentao; Zhang, Weidong
2009-10-01
In an optical disk drive servo system, to attenuate the external periodic disturbances induced by inevitable disk eccentricity, repetitive control has been used successfully. The performance of a repetitive controller greatly depends on the bandwidth of the low-pass filter included in the repetitive controller. However, owing to the plant uncertainty and system stability, it is difficult to maximize the bandwidth of the low-pass filter. In this paper, we propose an optimality based repetitive controller design method for the track-following servo system with norm-bounded uncertainties. By embedding a lead compensator in the repetitive controller, both the system gain at periodic signal's harmonics and the bandwidth of the low-pass filter are greatly increased. The optimal values of the repetitive controller's parameters are obtained by solving two optimization problems. Simulation and experimental results are provided to illustrate the effectiveness of the proposed method.
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Precision of proportion estimation with binary compressed Raman spectrum.
Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric
2018-01-01
The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramer-Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer-Rao bound). Evolutions of the Cramer-Rao bound are analyzed when the measuring times are optimized or when the considered proportion for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this considered proportion are also analyzed for the binary filter synthesis.
Rajab, Maher I
2011-11-01
Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of coetaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
NASA Astrophysics Data System (ADS)
Poppeliers, C.; Preston, L. A.
2017-12-01
Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group- and phase-velocity, a series of narrow-band filters are applied to surface wave seismograms. Frequency dependent arrival times of surface waves can then be identified from the resulting suite of narrow band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique has no method to estimate the uncertainty of the measured surface wave velocities, and subsequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized in regard to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group- and phase-velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of resolved Earth structure. As proof-of-concept, we apply our technique to four seismic ambient noise correlograms that were collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group- and phase-velocities, as well the uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities that are obtained from inverting dispersion curves estimated from a conventional Gaussian filter bank.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequentmore » reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhune-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.« less
Ultrasonic tracking of shear waves using a particle filter
Ingle, Atul N.; Ma, Chi; Varghese, Tomy
2015-01-01
Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques. PMID:26520761
Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology
NASA Astrophysics Data System (ADS)
Morgan, T. W.; Thurgood, R. L.
1984-05-01
This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.
Symmetric Phase Only Filtering for Improved DPIV Data Processing
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2006-01-01
The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the crosscorrelation operation. Phase only filtering is a well known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction are not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal to noise results than traditional PIV processing. The SPOF based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data."
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.
NASA Technical Reports Server (NTRS)
Garren, J. F., Jr.; Niessen, F. R.; Abbott, T. S.; Yenni, K. R.
1977-01-01
A modified complementary filtering technique for estimating aircraft roll rate was developed and flown in a research helicopter to determine whether higher gains could be achieved. Use of this technique did, in fact, permit a substantial increase in system frequency bandwidth because, in comparison with first-order filtering, it reduced both noise amplification and control limit-cycle tendencies.
Sorption and desorption of arsenic to ferrihydrite in a sand filter.
Jessen, Soren; Larsen, Flemming; Koch, Christian Bender; Arvin, Erik
2005-10-15
Elevated arsenic concentrations in drinking water occur in many places around the world. Arsenic is deleterious to humans, and consequently, As water treatment techniques are sought. To optimize arsenic removal, sorption and desorption processes were studied at a drinking water treatment plant with aeration and sand filtration of ferrous iron rich groundwater at Elmevej Water Works, Fensmark, Denmark. Filter sand and pore water were sampled along depth profiles in the filters. The sand was coated with a 100-300 microm thick layer of porous Si-Ca-As-contaning iron oxide (As/Fe = 0.17) with locally some manganese oxide. The iron oxide was identified as a Si-stabilized abiotically formed two-line ferrihydrite with a magnetic hyperfine field of 45.8 T at 5 K. The raw water has an As concentration of 25 microg/L, predominantly as As(II). As the water passes through the filters, As(III) is oxidized to As(V) and the total concentrations drop asymptotically to a approximately 15 microg/L equilibrium concentration. Mn is released to the pore water, indicating the existence of reactive manganese oxides within the oxide coating, which probably play a role for the rapid As(III) oxidation. The As removal in the sand filters appears controlled by sorption equilibrium onto the ferrihydrite. By addition of ferrous chloride (3.65 mg of Fe(II)/L) to the water stream between two serially connected filters, a 3 microg/L As concentration is created in the water that infiltrates into the second sand filter. However, as water flow is reestablished through the second filter, As desorbs from the ferrihydrite and increases until the 15 microg/L equilibrium concentration. Sequential chemical extractions and geometrical estimates of the fraction of surface-associated As suggest that up to 40% of the total As can be remobilized in response to changes in the water chemistry in the sand filter.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Brown, Alex
2009-10-01
We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett.101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second order statistics and homomorphic filtering techniques have been used in the adaptive decomposition of overlapping, or otherwise, and noise contaminated ECG complexes of mothers and fetals obtained by a transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats measured from a reference point located on the mother complex after transformation to cepstra domains are first obtained and this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subuent transformation to the time domain results in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that needed rectification in this paper. These are (1) errors resulting from the phase unwrapping algorithms and leading to fetal complex perturbation, (2) the unavoidable conversion of noise statistics from Gaussianess to non-Gaussianess due to the highly non-linear nature of homomorphic transform does warrant stringent noise cancellation routines, (3) due to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of `zooming' in on weak fetal complexes requires more processing time). The use of third-order based high resolution differential cepstrum technique results in recovery of the delay of the order of 120 milliseconds.
Design of a composite filter realizable on practical spatial light modulators
NASA Technical Reports Server (NTRS)
Rajan, P. K.; Ramakrishnan, Ramachandran
1994-01-01
Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's exhibit arbitrarily constrained operating characteristics. The pattern recognition filters designed with the assumption that the SLM's have ideal operating characteristic may not behave as expected when implemented on the DMD or LCTV SLM's. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then using this algorithm a new approach for the design of a SLM constrained distortion invariant filter in the presence of input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM's operating characteristics and the correlation responses are compared. The distortion tolerance and the false class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutionsmore » and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying a better data source and allocating a higher priority for them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.« less
Gatti, Davide; Galzerano, Gianluca; Laporta, Paolo; Longhi, Stefano; Janner, Davide; Guglierame, Andrea; Belmonte, Michele
2008-07-01
Optimal demodulation of differential phase-shift keying signals at 10 Gbit/s is experimentally demonstrated using a specially designed structured fiber Bragg grating composed by Fabry-Perot coupled cavities. Bit-error-rate measurements show that, as compared with a conventional Gaussian-shaped filter, our demodulator gives approximately 2.8 dB performance improvement.
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination
NASA Technical Reports Server (NTRS)
Downie, John D.
1992-01-01
Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-16
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution
NASA Astrophysics Data System (ADS)
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-01
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
Fixed-frequency and Frequency-agile (au, HTS) Microstrip Bandstop Filters for L-band Applications
NASA Technical Reports Server (NTRS)
Saenz, Eileen M.; Subramanyam, Guru; VanKeuls, Fred W.; Chen, Chonglin; Miranda, Felix A.
2001-01-01
In this work, we report on the performance of a highly selective, compact 1.83 x 2.08 cm(exp 2) (approx. 0.72 x 0.82 in(exp 2) microstrip line bandstop filter of YBa2CU3O(7-delta) (YBCO) on LaAlO3 (LAO) substrate. The filter is designed for a center frequency of 1.623 GHz for a bandwidth at 3 dB from reference baseline of less than 5.15 MHz, and a bandstop rejection of 30 dB or better. The design and optimization of the filter was performed using Zeland's IE3D circuit simulator. The optimized design was used to fabricate gold (Au) and High-Temperature Superconductor (HTS) versions of the filter. We have also studied an electronically tunable version of the same filter. Tunability of the bandstop characteristics is achieved by the integration of a thin film conductor (Au or HTS) and the nonlinear dielectric ferroelectric SrTiO3 in a conductor/ferroelectric/dielectric modified microstrip configuration. The performance of these filters and comparison with the simulated data will be presented.
Forecasting Geomagnetic Activity Using Kalman Filters
NASA Astrophysics Data System (ADS)
Veeramani, T.; Sharma, A.
2006-05-01
The coupling of energy from the solar wind to the magnetosphere leads to the geomagnetic activity in the form of storms and substorms and are characterized by indices such as AL, Dst and Kp. The geomagnetic activity has been predicted near-real time using local linear filter models of the system dynamics wherein the time series of the input solar wind and the output magnetospheric response were used to reconstruct the phase space of the system by a time-delay embedding technique. Recently, the radiation belt dynamics have been studied using a adaptive linear state space model [Rigler et al. 2004]. This was achieved by assuming a linear autoregressive equation for the underlying process and an adaptive identification of the model parameters using a Kalman filter approach. We use such a model for predicting the geomagnetic activity. In the case of substorms, the Bargatze et al [1985] data set yields persistence like behaviour when a time resolution of 2.5 minutes was used to test the model for the prediction of the AL index. Unlike the local linear filters, which are driven by the solar wind input without feedback from the observations, the Kalman filter makes use of the observations as and when available to optimally update the model parameters. The update procedure requires the prediction intervals to be long enough so that the forecasts can be used in practice. The time resolution of the data suitable for such forecasting is studied by taking averages over different durations.
NASA Astrophysics Data System (ADS)
Bania, Piotr; Baranowski, Jerzy
2018-02-01
Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and in consequence control. Popular approaches either do not address properly the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observation, that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived by a solution of Fokker-Planck equation. We then propose the approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results for the variant with control. Operation is illustrated with numerical experiments and compared with classical discrete-continuous Kalman filter. Results of comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high order of approximation, state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.
Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2015-11-30
Common spatial pattern (CSP) has been most popularly applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) application. Successful application of CSP depends on the filter band selection to a large degree. However, the most proper band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is implemented on the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP help improve the classification performance of MI. The optimized spatial patterns by SFBCSP give overall better MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
Adaptive filtering in biological signal processing.
Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A
1990-01-01
The high dependence of conventional optimal filtering methods on the a priori knowledge of the signal and noise statistics render them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
Monorail system for percutaneous repositioning of the Greenfield vena caval filter.
Guthaner, D F; Wyatt, J O; Mehigan, J T; Wright, A M; Breen, J F; Wexler, L
1990-09-01
The authors describe a technique for removing or repositioning a malpositioned Greenfield inferior vena caval filter. A "monorail" system was used, in which a wire was passed from the femoral vein through the apical hole in the filter and out the internal jugular vein; the wire was held taut from above and below and thus facilitated repositioning or removal of the filter. The technique was used successfully in two cases.
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Szczykutowicz, T; Bayouth, J
Purpose: To compare the ability of two dual-energy CT techniques, a novel split-filter single-source technique of superior temporal resolution against an established sequential-scan technique, to remove iodine contrast from images with minimal impact on CT number accuracy. Methods: A phantom containing 8 tissue substitute materials and vials of varying iodine concentrations (1.7–20.1 mg I /mL) was imaged using a Siemens Edge CT scanner. Dual-energy virtual non-contrast (VNC) images were generated using the novel split-filter technique, in which a 120kVp spectrum is filtered by tin and gold to create high- and low-energy spectra with < 1 second temporal separation between themore » acquisition of low- and high-energy data. Additionally, VNC images were generated with the sequential-scan technique (80 and 140kVp) for comparison. CT number accuracy was evaluated for all materials at 15, 25, and 35mGy CTDIvol. Results: The spectral separation was greater for the sequential-scan technique than the split-filter technique with dual-energy ratios of 2.18 and 1.26, respectively. Both techniques successfully removed iodine contrast, resulting in mean CT numbers within 60HU of 0HU (split-filter) and 40HU of 0HU (sequential-scan) for all iodine concentrations. Additionally, for iodine vials of varying diameter (2–20 mm) with the same concentration (9.9 mg I /mL), the system accurately detected iodine for all sizes investigated. Both dual-energy techniques resulted in reduced CT numbers for bone materials (by >400HU for the densest bone). Increasing the imaging dose did not improve the CT number accuracy for bone in VNC images. Conclusion: VNC images from the split-filter technique successfully removed iodine contrast. These results demonstrate a potential for improving dose calculation accuracy and reducing patient imaging dose, while achieving superior temporal resolution in comparison sequential scans. For both techniques, inaccuracies in CT numbers for bone materials necessitate consideration for radiation therapy treatment planning.« less
Video-signal improvement using comb filtering techniques.
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Stuber, F. M.; Panneton, R. J.
1973-01-01
Significant improvement in the signal-to-noise performance of television signals has been obtained through the application of comb filtering techniques. This improvement is achieved by removing the inherent redundancy in the television signal through linear prediction and by utilizing the unique noise-rejection characteristics of the receiver comb filter. Theoretical and experimental results describe the signal-to-noise ratio and picture-quality improvement obtained through the use of baseband comb filters and the implementation of a comb network as the loop filter in a phase-lock-loop demodulator. Attention is given to the fact that noise becomes correlated when processed by the receiver comb filter.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018
Designing manufacturable filters for a 16-band plenoptic camera using differential evolution
NASA Astrophysics Data System (ADS)
Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert
2017-05-01
A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows for an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced at-topped filters. Knowing the operating theater or the likely targets of interest it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 at-topped non-overlapping filter set which comes with the plenoptic camera and full hyperspectral resolution data which was previously acquired.
Automatic x-ray image contrast enhancement based on parameter auto-optimization.
Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan
2017-11-01
Insufficient image contrast associated with radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilo voltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, the block size, and the clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praise worthy) by physicians and physicists. The average scores for the images processed by the proposed method, the CLAHE, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentage of the processed images received a score of 5 were 48, 29, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in the commercial clinical systems. When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more accurate treatment setup and facilitating the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Optimality problem of network topology in stocks market analysis
NASA Astrophysics Data System (ADS)
Djauhari, Maman Abdurachman; Gan, Siew Lee
2015-02-01
Since its introduction fifteen years ago, minimal spanning tree has become an indispensible tool in econophysics. It is to filter the important economic information contained in a complex system of financial markets' commodities. Here we show that, in general, that tool is not optimal in terms of topological properties. Consequently, the economic interpretation of the filtered information might be misleading. To overcome that non-optimality problem, a set of criteria and a selection procedure of an optimal minimal spanning tree will be developed. By using New York Stock Exchange data, the advantages of the proposed method will be illustrated in terms of the power-law of degree distribution.
Höckel, David; Koch, Lars; Martin, Eugen; Benson, Oliver
2009-10-15
We describe a Fabry-Perot-based spectral filter for free-space quantum key distribution (QKD). A multipass etalon filter was built, and its performance was studied. The whole filter setup was carefully optimized to add less than 2 dB attenuation to a signal beam but block stray light by 21 dB. Simulations show that such a filter might be sufficient to allow QKD satellite downlinks during daytime with the current technology.
Heuristic-based scheduling algorithm for high level synthesis
NASA Technical Reports Server (NTRS)
Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye
1992-01-01
A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm to estimate the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near optimal schedules. The schedule time of this algorithm is almost independent of the length of mobilities of operators as can be seen from the benchmark example (fifth order digital elliptical wave filter) presented when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.
Optimal use of electrophysiological indicators of muscular effort and fatigue
NASA Technical Reports Server (NTRS)
Updike, O. L.
1981-01-01
Electromyograms (EMG) from working muscles convey information on effort and fatigue. Their application, e.g., to assess the demands of vehicle control tasks, is complicated by the cooperative action of sets of muscles, by both intrinsic and imposed filtering, and by numerous other sources of variation. Fourier analyses of these noise like signals offer one approach to interpretation; downward spectral shifts accompany fatigue. Techniques are being sought (in both time and frequency domains) for further condensing the wideband EMG signals, while retaining essential information, into a concise 'state vector' usable in comparing control system designs.
Focus-based filtering + clustering technique for power-law networks with small world phenomenon
NASA Astrophysics Data System (ADS)
Boutin, François; Thièvre, Jérôme; Hascoët, Mountaz
2006-01-01
Realistic interaction networks usually present two main properties: a power-law degree distribution and a small world behavior. Few nodes are linked to many nodes and adjacent nodes are likely to share common neighbors. Moreover, graph structure usually presents a dense core that is difficult to explore with classical filtering and clustering techniques. In this paper, we propose a new filtering technique accounting for a user-focus. This technique extracts a tree-like graph with also power-law degree distribution and small world behavior. Resulting structure is easily drawn with classical force-directed drawing algorithms. It is also quickly clustered and displayed into a multi-level silhouette tree (MuSi-Tree) from any user-focus. We built a new graph filtering + clustering + drawing API and report a case study.
NASA Astrophysics Data System (ADS)
Lawless, Phil A.; Rodes, Charles E.; Ensor, David S.
A multiwavelength optical absorption technique has been developed for Teflon filters used for personal exposure sampling with sufficient sensitivity to allow apportionments of environmental tobacco smoke and soot (black) carbon to be made. Measurements on blank filters show that the filter material itself contributes relatively little to the total absorbance and filters from the same lot have similar characteristics; this makes retrospective analysis of filters quite feasible. Using an integrating sphere radiometer and multiple wavelengths to provide specificity, the determination of tobacco smoke and carbon with reasonable accuracy is possible on filters not characterized before exposure. This technique provides a low cost, non-destructive exposure assessment alternative to both standard thermo-gravimetric elemental carbon evaluations on quartz filters and cotinine analyses from urine or saliva samples. The method allows the same sample filter to be used for assessment of mass, carbon, and tobacco smoke without affecting the deposit.
Comparison of filtering methods for extracellular gastric slow wave recordings.
Paskaranandavadivel, Niranchan; O'Grady, Gregory; Du, Peng; Cheng, Leo K
2013-01-01
Extracellular recordings are used to define gastric slow wave propagation. Signal filtering is a key step in the analysis and interpretation of extracellular slow wave data; however, there is controversy and uncertainty regarding the appropriate filtering settings. This study investigated the effect of various standard filters on the morphology and measurement of extracellular gastric slow waves. Experimental extracellular gastric slow waves were recorded from the serosal surface of the stomach from pigs and humans. Four digital filters: finite impulse response filter (0.05-1 Hz); Savitzky-Golay filter (0-1.98 Hz); Bessel filter (2-100 Hz); and Butterworth filter (5-100 Hz); were applied on extracellular gastric slow wave signals to compare the changes temporally (morphology of the signal) and spectrally (signals in the frequency domain). The extracellular slow wave activity is represented in the frequency domain by a dominant frequency and its associated harmonics in diminishing power. Optimal filters apply cutoff frequencies consistent with the dominant slow wave frequency (3-5 cpm) and main harmonics (up to ≈ 2 Hz). Applying filters with cutoff frequencies above or below the dominant and harmonic frequencies was found to distort or eliminate slow wave signal content. Investigators must be cognizant of these optimal filtering practices when detecting, analyzing, and interpreting extracellular slow wave recordings. The use of frequency domain analysis is important for identifying the dominant and harmonics of the signal of interest. Capturing the dominant frequency and major harmonics of slow wave is crucial for accurate representation of slow wave activity in the time domain. Standardized filter settings should be determined. © 2012 Blackwell Publishing Ltd.
[Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].
Guo, Lin; Kang, Lili; Wang, Dandan
2013-02-01
The phase information of magnetic resonance (MR) phase image can be used in many MR imaging techniques, but phase wrapping of the images often results in inaccurate phase information and phase unwrapping is essential for MR imaging techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein Brunch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, dipole- remover preprocessor, and the Prim algorithm of the minimum spanning tree were introduced to optimize the residues essential for the Goldstein Brunch-cut algorithm. Experimental results showed that the residues, branch-cuts and continuous unwrapped phase surface were efficiently reduced and the quality of MR phase images was obviously improved with the proposed method.
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
AMICO: optimized detection of galaxy clusters in photometric surveys
NASA Astrophysics Data System (ADS)
Bellagamba, Fabio; Roncarelli, Mauro; Maturi, Matteo; Moscardini, Lauro
2018-02-01
We present Adaptive Matched Identifier of Clustered Objects (AMICO), a new algorithm for the detection of galaxy clusters in photometric surveys. AMICO is based on the Optimal Filtering technique, which allows to maximize the signal-to-noise ratio (S/N) of the clusters. In this work, we focus on the new iterative approach to the extraction of cluster candidates from the map produced by the filter. In particular, we provide a definition of membership probability for the galaxies close to any cluster candidate, which allows us to remove its imprint from the map, allowing the detection of smaller structures. As demonstrated in our tests, this method allows the deblending of close-by and aligned structures in more than 50 per cent of the cases for objects at radial distance equal to 0.5 × R200 or redshift distance equal to 2 × σz, being σz the typical uncertainty of photometric redshifts. Running AMICO on mocks derived from N-body simulations and semi-analytical modelling of the galaxy evolution, we obtain a consistent mass-amplitude relation through the redshift range of 0.3 < z < 1, with a logarithmic slope of ∼0.55 and a logarithmic scatter of ∼0.14. The fraction of false detections is steeply decreasing with S/N and negligible at S/N > 5.
Supervoxels for graph cuts-based deformable image registration using guided image filtering
NASA Astrophysics Data System (ADS)
Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.
2017-11-01
We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state of the art methods in continuous and discrete image registration, achieving target registration error of 1.16 mm on average per landmark.
Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis
NASA Technical Reports Server (NTRS)
Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.;
2015-01-01
The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.
Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A
2017-10-04
In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark.
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for H.264/AVC encoder based on massively parallel architecture. We implemented the proposed framework by CUDA on NVIDIA's GPU. Not only the compute intensive components of the H.264 encoder are parallelized but also the control intensive components are realized effectively, such as CAVLC and deblocking filter. In addition, we proposed serial optimization methods, including the multiresolution multiwindow for motion estimation, multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and direction-priority deblocking filter. More than 96% of workload of H.264 encoder is offloaded to GPU. Experimental results show that the parallel implementation outperforms the serial program by 20 times of speedup ratio and satisfies the requirement of the real-time HD encoding of 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB, when keeping the same bitrate. Through the analysis to the kernels, we found that speedup ratios of the compute intensive algorithms are proportional with the computation power of the GPU. However, the performance of the control intensive parts (CAVLC) is much related to the memory bandwidth, which gives an insight for new architecture design. PMID:24757432
Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering
Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.
2017-01-01
In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark. PMID:29225433
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
Filtering Meteoroid Flights Using Multiple Unscented Kalman Filters
NASA Astrophysics Data System (ADS)
Sansom, E. K.; Bland, P. A.; Rutten, M. G.; Paxman, J.; Towner, M. C.
2016-11-01
Estimator algorithms are immensely versatile and powerful tools that can be applied to any problem where a dynamic system can be modeled by a set of equations and where observations are available. A well designed estimator enables system states to be optimally predicted and errors to be rigorously quantified. Unscented Kalman filters (UKFs) and interactive multiple models can be found in methods from satellite tracking to self-driving cars. The luminous trajectory of the Bunburra Rockhole fireball was observed by the Desert Fireball Network in mid-2007. The recorded data set is used in this paper to examine the application of these two techniques as a viable approach to characterizing fireball dynamics. The nonlinear, single-body system of equations, used to model meteoroid entry through the atmosphere, is challenged by gross fragmentation events that may occur. The incorporation of the UKF within an interactive multiple model smoother provides a likely solution for when fragmentation events may occur as well as providing a statistical analysis of the state uncertainties. In addition to these benefits, another advantage of this approach is its automatability for use within an image processing pipeline to facilitate large fireball data analyses and meteorite recoveries.
Electron mean-free-path filtering in Dirac material for improved thermoelectric performance.
Liu, Te-Huan; Zhou, Jiawei; Li, Mingda; Ding, Zhiwei; Song, Qichen; Liao, Bolin; Fu, Liang; Chen, Gang
2018-01-30
Recent advancements in thermoelectric materials have largely benefited from various approaches, including band engineering and defect optimization, among which the nanostructuring technique presents a promising way to improve the thermoelectric figure of merit ( zT ) by means of reducing the characteristic length of the nanostructure, which relies on the belief that phonons' mean free paths (MFPs) are typically much longer than electrons'. Pushing the nanostructure sizes down to the length scale dictated by electron MFPs, however, has hitherto been overlooked as it inevitably sacrifices electrical conduction. Here we report through ab initio simulations that Dirac material can overcome this limitation. The monotonically decreasing trend of the electron MFP allows filtering of long-MFP electrons that are detrimental to the Seebeck coefficient, leading to a dramatically enhanced power factor. Using SnTe as a material platform, we uncover this MFP filtering effect as arising from its unique nonparabolic Dirac band dispersion. Room-temperature zT can be enhanced by nearly a factor of 3 if one designs nanostructures with grain sizes of ∼10 nm. Our work broadens the scope of the nanostructuring approach for improving the thermoelectric performance, especially for materials with topologically nontrivial electronic dynamics.
Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam
2016-06-01
The Non-local means denoising filter has been established as gold standard for image denoising problem in general and particularly in medical imaging due to its efficiency. However, its computation time limited its applications in real world application, especially in medical imaging. In this paper, a distributed version on parallel hybrid architecture is proposed to solve the computation time problem and a new method to compute the filters' coefficients is also proposed, where we focused on the implementation and the enhancement of filters' parameters via taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the brain-web database for different levels of noise. Performances and the sensitivity were quantified in terms of speedup, peak signal to noise ratio, execution time, the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques, recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors are the most annoying but can be concealed exploiting the signal spatial correlation like in a number of techniques proposed in the past; the second are less annoying but more difficult to address; the latter are often imperceptible. In this paper, we address the problem of concealing the second class or errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that a uniform LP filtering brought to some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during its execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the applications environment which complicates the management decisions process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end- points management applications such as debugging and reactive control tools to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its Instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an Intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) survey and evaluating existing event filtering mechanisms In supporting monitoring LSD systems and (2) devising an integrated scalable high- performance architecture of event filtering that spans several kev application domains, presenting techniques to improve the functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain key characteristics for each technique. In addition, we discuss limitations with existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
Inferring neural activity from BOLD signals through nonlinear optimization.
Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E
2007-11-01
The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.
Intensity transform and Wiener filter in measurement of blood flow in arteriography
NASA Astrophysics Data System (ADS)
Nunes, Polyana F.; Franco, Marcelo L. N.; Filho, João. B. D.; Patrocínio, Ana C.
2015-03-01
Using the arteriography examination, it is possible to check anomalies in blood vessels and diseases such as stroke, stenosis, bleeding and especially in the diagnosis of Encephalic Death in comatose individuals. Encephalic death can be diagnosed only when there is complete interruption of all brain functions, and hence the blood stream. During the examination, there may be some interference on the sensors, such as environmental factors, poor maintenance of equipment, patient movement, among other interference, which can directly affect the noise produced in angiography images. Then, we need to use digital image processing techniques to minimize this noise and improve the pixel count. Therefore, this paper proposes to use median filter and enhancement techniques for transformation of intensity using the sigmoid function together with the Wiener filter so you can get less noisy images. It's been realized two filtering techniques to remove the noise of images, one with the median filter and the other with the Wiener filter along the sigmoid function. For 14 tests quantified, including 7 Encephalic Death and 7 other cases, the technique that achieved a most satisfactory number of pixels quantified, also presenting a lesser amount of noise, is the Wiener filter sigmoid function, and in this case used with 0.03 cuttof.
Chang, Herng-Hua; Chang, Yu-Ning
2017-04-01
Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the deficiency of theoretical basis on the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures and not all the three parameter values are optimal. This article is in an attempt to investigate the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. Subsequently, the selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performances, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting the CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
IVC filter retrieval in adolescents: experience in a tertiary pediatric center.
Guzman, Anthony K; Zahra, Mahmoud; Trerotola, Scott O; Raffini, Leslie J; Itkin, Maxim; Keller, Marc S; Cahill, Anne Marie
2016-04-01
Inferior vena cava (IVC) filters are commonly implanted with the intent to prevent life-threatening pulmonary embolism in at-risk patients with contraindications to anticoagulation. Various studies have reported increases in the rate of venous thromboembolism within the pediatric population. The utility and safety of IVC filters in children has not yet been fully defined. To describe the technique and adjunctive maneuvers of IVC filter removal in children, demonstrate its technical success and identify complications. A retrospective 10-year review was performed of 20 children (13 male, 7 female), mean age: 15.1 years (range: 12-19 years), who underwent IVC filter retrieval. Eleven of 20 (55%) were placed in our institution. Electronic medical records were reviewed for filter characteristics, retrieval technique, technical success and complications. The technical success rate was 100%. Placement indications included: deep venous thrombosis with a contraindication to anticoagulation (10/20, 50%), free-floating thrombus (4/20, 20%), post-trauma pulmonary embolism prophylaxis (3/20, 15%) and pre-thrombolysis pulmonary patient (1/20, 5%). The mean implantation period was 63 days (range: 20-270 days). Standard retrieval was performed in 17/20 patients (85%). Adjunctive techniques were performed in 3/20 patients (15%) and included the double-snare technique, balloon assistance and endobronchial forceps retrieval. Median procedure time was 60 min (range: 45-240 min). Pre-retrieval cavogram demonstrated filter tilt in 5/20 patients (25%) with a mean angle of 17° (range: 8-40). Pre-retrieval CT demonstrated strut wall penetration and tip embedment in one patient each. There were two procedure-related complications: IVC mural dissection noted on venography in one patient and snare catheter fracture requiring retrieval in one patient. There were no early or late complications. In children, IVC filter retrieval can be performed safely but may be challenging, especially in cases of filter tilt or embedding. Adjunctive techniques may increase filter retrieval rates.
Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation
NASA Technical Reports Server (NTRS)
Rakoczy, John M.; Herren, Kenneth A.
2008-01-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation
NASA Technical Reports Server (NTRS)
Rakoczy, John; Herren, Kenneth
2007-01-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
Modeling of direct detection Doppler wind lidar. I. The edge technique.
McKay, J A
1998-09-20
Analytic models, based on a convolution of a Fabry-Perot etalon transfer function with a Gaussian spectral source, are developed for the shot-noise-limited measurement precision of Doppler wind lidars based on the edge filter technique by use of either molecular or aerosol atmospheric backscatter. The Rayleigh backscatter formulation yields a map of theoretical sensitivity versus etalon parameters, permitting design optimization and showing that the optimal system will have a Doppler measurement uncertainty no better than approximately 2.4 times that of a perfect, lossless receiver. An extension of the models to include the effect of limited etalon aperture leads to a condition for the minimum aperture required to match light collection optics. It is shown that, depending on the choice of operating point, the etalon aperture finesse must be 4-15 to avoid degradation of measurement precision. A convenient, closed-form expression for the measurement precision is obtained for spectrally narrow backscatter and is shown to be useful for backscatter that is spectrally broad as well. The models are extended to include extrinsic noise, such as solar background or the Rayleigh background on an aerosol Doppler lidar. A comparison of the model predictions with experiment has not yet been possible, but a comparison with detailed instrument modeling by McGill and Spinhirne shows satisfactory agreement. The models derived here will be more conveniently implemented than McGill and Spinhirne's and more readily permit physical insights to the optimization and limitations of the double-edge technique.
Interference Alignment With Partial CSI Feedback in MIMO Cellular Networks
NASA Astrophysics Data System (ADS)
Rao, Xiongbin; Lau, Vincent K. N.
2014-04-01
Interference alignment (IA) is a linear precoding strategy that can achieve optimal capacity scaling at high SNR in interference networks. However, most existing IA designs require full channel state information (CSI) at the transmitters, which would lead to significant CSI signaling overhead. There are two techniques, namely CSI quantization and CSI feedback filtering, to reduce the CSI feedback overhead. In this paper, we consider IA processing with CSI feedback filtering in MIMO cellular networks. We introduce a novel metric, namely the feedback dimension, to quantify the first order CSI feedback cost associated with the CSI feedback filtering. The CSI feedback filtering poses several important challenges in IA processing. First, there is a hidden partial CSI knowledge constraint in IA precoder design which cannot be handled using conventional IA design methodology. Furthermore, existing results on the feasibility conditions of IA cannot be applied due to the partial CSI knowledge. Finally, it is very challenging to find out how much CSI feedback is actually needed to support IA processing. We shall address the above challenges and propose a new IA feasibility condition under partial CSIT knowledge in MIMO cellular networks. Based on this, we consider the CSI feedback profile design subject to the degrees of freedom requirements, and we derive closed-form trade-off results between the CSI feedback cost and IA performance in MIMO cellular networks.
A simulation study of turbofan engine deterioration estimation using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Lambert, Heather H.
1991-01-01
Deterioration of engine components may cause off-normal engine operation. The result is an unecessary loss of performance, because the fixed schedules are designed to accommodate a wide range of engine health. These fixed control schedules may not be optimal for a deteriorated engine. This problem may be solved by including a measure of deterioration in determining the control variables. These engine deterioration parameters usually cannot be measured directly but can be estimated. A Kalman filter design is presented for estimating two performance parameters that account for engine deterioration: high and low pressure turbine delta efficiencies. The delta efficiency parameters model variations of the high and low pressure turbine efficiencies from nominal values. The filter has a design condition of Mach 0.90, 30,000 ft altitude, and 47 deg power level angle (PLA). It was evaluated using a nonlinear simulation of the F100 engine model derivative (EMD) engine, at the design Mach number and altitude over a PLA range of 43 to 55 deg. It was found that known high pressure turbine delta efficiencies of -2.5 percent and low pressure turbine delta efficiencies of -1.0 percent can be estimated with an accuracy of + or - 0.25 percent efficiency with a Kalman filter. If both the high and low pressure turbine are deteriorated, the delta efficiencies of -2.5 percent to both turbines can be estimated with the same accuracy.
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire
1991-01-01
Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
Visualizing deep neural network by alternately image blurring and deblurring.
Wang, Feng; Liu, Haijun; Cheng, Jian
2018-01-01
Visualization from trained deep neural networks has drawn massive public attention in recent. One of the visualization approaches is to train images maximizing the activation of specific neurons. However, directly maximizing the activation would lead to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two totally inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered by previous methods in the visualizations. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand the neural networks utilizing the knowledge obtained by the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
Waveform design for detection of weapons based on signature exploitation
NASA Astrophysics Data System (ADS)
Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian
2010-04-01
We present waveform design based on signature exploitation techniques for improved detection of weapons in urban sensing applications. A single-antenna monostatic radar system is considered. Under the assumption of exact knowledge of the target orientation and, hence, known impulse response, matched illumination approach is used for optimal target detection. For the case of unknown target orientation, we analyze the target signatures as random processes and perform signal-to-noise-ratio based waveform optimization. Numerical electromagnetic modeling is used to provide the impulse responses of an AK-47 assault rifle for various target aspect angles relative to the radar. Simulation results depict an improvement in the signal-to-noise-ratio at the output of the matched filter receiver for both matched illumination and stochastic waveforms as compared to a chirp waveform of the same duration and energy.
Carmena, Jose M.
2016-01-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2016-04-01
This study investigates the possibilities of local hydrology signal extraction using GRACE data and conventional filtering techniques. The impact of the basin shape has also been studied in order to derive empirical rules for tuning the GRACE filter parameters. GRACE CSR Release 05 monthly solutions were used from April 2002 to August 2015 (161 monthly solutions in total). SLR data were also used to replace the GRACE C2,0 coefficient, and a de-correlation filter with optimal parameters for CSR Release 05 data was applied to attenuate the correlation errors of monthly mass differences. For basins located at higher latitudes, the effect of Glacial Isostatic Adjustment (GIA) was taken into account using the ICE-6G model. The study focuses on three geometric properties, i.e., the area, the convexity and the width in the longitudinal direction, of 100 basins with global distribution. Two experiments have been performed. The first one deals with the determination of the Gaussian smoothing radius that minimizes the gaussianity of GRACE equivalent water height (EWH) over the selected basins. The EWH kurtosis was selected as a metric of gaussianity. The second experiment focuses on the derivation of the Gaussian smoothing radius that minimizes the RMS difference between GRACE data and a hydrology model. The GLDAS 1.0 Noah hydrology model was chosen, which shows good agreement with GRACE data according to previous studies. Early results show that there is an apparent relation between the geometric attributes of the basins examined and the Gaussian radius derived from the two experiments. The kurtosis analysis experiment tends to underestimate the optimal Gaussian radius, which is close to 200-300 km in many cases. Empirical rules for the selection of the Gaussian radius have been also developed for sub-regional scale basins.
A Self-Tuning Kalman Filter for Autonomous Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Truong, Son H.
1998-01-01
Most navigation systems currently operated by NASA are ground-based, and require extensive support to produce accurate results. Recently developed systems that use Kalman Filter and Global Positioning System (GPS) data for orbit determination greatly reduce dependency on ground support, and have potential to provide significant economies for NASA spacecraft navigation. Current techniques of Kalman filtering, however, still rely on manual tuning from analysts, and cannot help in optimizing autonomy without compromising accuracy and performance. This paper presents an approach to produce a high accuracy autonomous navigation system fully integrated with the flight system. The resulting system performs real-time state estimation by using an Extended Kalman Filter (EKF) implemented with high-fidelity state dynamics model, as does the GPS Enhanced Orbit Determination Experiment (GEODE) system developed by the NASA Goddard Space Flight Center. Augmented to the EKF is a sophisticated neural-fuzzy system, which combines the explicit knowledge representation of fuzzy logic with the learning power of neural networks. The fuzzy-neural system performs most of the self-tuning capability and helps the navigation system recover from estimation errors. The core requirement is a method of state estimation that handles uncertainties robustly, capable of identifying estimation problems, flexible enough to make decisions and adjustments to recover from these problems, and compact enough to run on flight hardware. The resulting system can be extended to support geosynchronous spacecraft and high-eccentricity orbits. Mathematical methodology, systems and operations concepts, and implementation of a system prototype are presented in this paper. Results from the use of the prototype to evaluate optimal control algorithms implemented are discussed. Test data and major control issues (e.g., how to define specific roles for fuzzy logic to support the self-learning capability) are also discussed. In addition, architecture of a complete end-to-end candidate flight system that provides navigation with highly autonomous control using data from GPS is presented.
NASA Astrophysics Data System (ADS)
Shang, Zhen; Sui, Yun-Kang
2012-12-01
Based on the independent, continuous and mapping (ICM) method and homogenization method, a research model is constructed to propose and deduce a theorem and corollary from the invariant between the weight filter function and the corresponding stiffness filter function of the form of power function. The efficiency in searching for optimum solution will be raised via the choice of rational filter functions, so the above mentioned results are very important to the further study of structural topology optimization.
NASA Astrophysics Data System (ADS)
Terrien, Ryan C.
M dwarfs are the least massive and most common stars in the Galaxy. Due to their prevalence and long lifetimes, these diminutive stars play an outsize role in several fields of astronomical study. In particular, it is now known that they commonly host planetary systems, and may be the most common hosts of Earth-size, rocky planets in the habitable zone. A comprehensive understanding of M dwarfs is crucial for understanding the origins and conditions of their planetary systems, including their potential habitability. Such an understanding depends on methods for precisely and accurately measuring their properties. These tools have broader applicability as well, underlying the use of M dwarfs as fossils of Galactic evolution, and helping to constrain the structures and interiors of these stars. The measurement of the fundamental parameters of M dwarfs is encumbered by their spectral complexity. Unlike stars of spectral type F, G, or K that are similar to our G type Sun, whose spectra are dominated by continuum emission and atomic features, the cool atmospheres of M dwarfs are dominated by complex molecular absorption. Another challenge for studies of M dwarfs is that these stars are optically faint, emitting much of their radiation in the near-infrared (NIR). The availability and performance of NIR spectrographs have lagged behind those of optical spectrographs due to the challenges of producing low-noise, high-sensitivity NIR detector arrays, which have only recently become available. This thesis discusses two related lines of work that address these challenges, motivated by the development of the Habitable Zone Planet Finder (HPF), a NIR radial velocity (RV) spectrograph under development at Penn State that will search for and confirm planets around nearby M dwarfs. This work includes the development and application of new NIR spectroscopic techniques for characterizing M dwarfs, and the development and optimization of new NIR instrumentation for HPF. The first line of work is centered on a large NIR spectroscopic survey of nearby M dwarfs, undertaken to characterize potential targets for HPF. This survey, and new techniques for measuring M dwarf metallicity, are the subject of Chapter 2. These data will provide crucial information to assess planetary composition, and the stellar metallicities will help us understand the process of planet formation around M dwarfs. These techniques have also enabled strong tests of low-mass stellar models in the benchmark eclipsing binary system CM Draconis, and have helped identify potential directions for improvement in the models, as presented in Chapter 3. The development of new spectroscopic indices for measuring M dwarf luminosity, radius, and potentially alpha-element abundance is discussed in Chapter 4. Finally, Chapter 5 presents a synthesis of these M dwarf characterization techniques and radial velocity (RV) measurements from the SDSS-III APOGEE spectrograph, which we applied to confirm and characterize the first M dwarfs in the nearby Coma Berenices cluster. The second line of work relates to the optimization of HPF. By targeting M dwarfs, HPF will take advantage of the large signal induced by an Earth-mass planet orbiting an M dwarf compared to the same planet orbiting an FGK star. Chapter 6 discusses a number of design trades and parameter optimizations undertaken in order to ensure the best sensitivity to Earth-mass planets. These subtopics include the optimization of the HPF resolution, bandpass, operating temperature, and vacuum phase holographic cross-disperser, as well as prediction of anticipated HPF performance, and the development of an HPF software simulator tool. In carrying out NIR detector tests for HPF, we have also tested an optical filter that selectively blocks long-wavelength thermal background radiation. This type of contamination is a perennial source of noise for NIR instruments, and typically forces these instruments to operate fully cryogenically. The complexity and cost of this approach may be avoided: for instruments operating in the H-band or bluer, the thermal background can be optically filtered, freeing the instrument to operate at warmer temperatures. Chapter 7 details our characterization and application of an interference filter that effectively blocks thermal background when used with a 1.7mum-cutoff HAWAII-2RG NIR detector array. By effectively filtering the thermal background with a single coated optic, this filter offers the potential for simple, cost-effective, warm-pupil NIR astronomical instruments, which can take advantage of the increasing availability of low-noise, high-efficiency NIR detectors.
Validation of search filters for identifying pediatric studies in PubMed.
Leclercq, Edith; Leeflang, Mariska M G; van Dalen, Elvira C; Kremer, Leontien C M
2013-03-01
To identify and validate PubMed search filters for retrieving studies including children and to develop a new pediatric search filter for PubMed. We developed 2 different datasets of studies to evaluate the performance of the identified pediatric search filters, expressed in terms of sensitivity, precision, specificity, accuracy, and number needed to read (NNR). An optimal search filter will have a high sensitivity and high precision with a low NNR. In addition to the PubMed Limits: All Child: 0-18 years filter (in May 2012 renamed to PubMed Filter Child: 0-18 years), 6 search filters for identifying studies including children were identified: 3 developed by Kastner et al, 1 developed by BestBets, one by the Child Health Field, and 1 by the Cochrane Childhood Cancer Group. Three search filters (Cochrane Childhood Cancer Group, Child Health Field, and BestBets) had the highest sensitivity (99.3%, 99.5%, and 99.3%, respectively) but a lower precision (64.5%, 68.4%, and 66.6% respectively) compared with the other search filters. Two Kastner search filters had a high precision (93.0% and 93.7%, respectively) but a low sensitivity (58.5% and 44.8%, respectively). They failed to identify many pediatric studies in our datasets. The search terms responsible for false-positive results in the reference dataset were determined. With these data, we developed a new search filter for identifying studies with children in PubMed with an optimal sensitivity (99.5%) and precision (69.0%). Search filters to identify studies including children either have a low sensitivity or a low precision with a high NNR. A new pediatric search filter with a high sensitivity and a low NNR has been developed. Copyright © 2013 Mosby, Inc. All rights reserved.
Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon
2018-01-01
We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process, rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only on the gap-filling iteration but also on the mask generation, to identify the object-dedicated low frequency area in the DCT-domain that is to be preserved. We redefine the low frequency preserving region of the filter mask at every gap-filling iteration, and the region verges on the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results that compare to those of the manually optimized DCT2 algorithm without perfect or full information of the imaging object.