Sample records for optimized filter field

  1. Generalized filtering of laser fields in optimal control theory: application to symmetry filtering of quantum gate operations

    NASA Astrophysics Data System (ADS)

    Schröder, Markus; Brown, Alex

    2009-10-01

    We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett. 101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.

  2. Optimization of plasma parameters with magnetic filter field and pressure to maximize H− ion density in a negative hydrogen ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Won-Hwi; Dang, Jeong-Jeung; Kim, June Young

    2016-02-15

    Transverse magnetic filter field as well as operating pressure is considered to be an important control knob to enhance negative hydrogen ion production via plasma parameter optimization in volume-produced negative hydrogen ion sources. Stronger filter field to reduce electron temperature sufficiently in the extraction region is favorable, but generally known to be limited by electron density drop near the extraction region. In this study, unexpected electron density increase instead of density drop is observed in front of the extraction region when the applied transverse filter field increases monotonically toward the extraction aperture. Measurements of plasma parameters with a movable Langmuir probe indicate that the increased electron density may be caused by low energy electron accumulation in the filter region decreasing perpendicular diffusion coefficients across the increasing filter field. Negative hydrogen ion populations are estimated from the measured profiles of electron temperatures and densities and confirmed to be consistent with laser photo-detachment measurements of the H− populations for various filter field strengths and pressures. Enhanced H− population near the extraction region due to the increased low energy electrons in the filter region may be utilized to increase negative hydrogen beam currents by moving the extraction position accordingly. This new finding can be used to design efficient H− sources with an optimal filtering system by maximizing high energy electron filtering while keeping low energy electrons available in the extraction region.

  3. Learning-based 3D surface optimization from medical image reconstruction

    NASA Astrophysics Data System (ADS)

    Wei, Mingqiang; Wang, Jun; Guo, Xianglin; Wu, Huisi; Xie, Haoran; Wang, Fu Lee; Qin, Jing

    2018-04-01

    Mesh optimization has been studied from the graphical point of view: It often focuses on 3D surfaces obtained by optical and laser scanners. This is despite the fact that isosurfaced meshes of medical image reconstruction suffer from both staircases and noise: Isotropic filters lead to shape distortion, while anisotropic ones maintain pseudo-features. We present a data-driven method for automatically removing these medical artifacts while not introducing additional ones. We consider mesh optimization as a combination of vertex filtering and facet filtering in two stages: Offline training and runtime optimization. Specifically, we first detect staircases based on the scanning direction of CT/MRI scanners, and design a staircase-sensitive Laplacian filter (vertex-based) to remove them; and then design a unilateral filtered facet normal descriptor (uFND) for measuring the geometry features around each facet of a given mesh, and learn the regression functions from a set of medical meshes and their high-resolution reference counterparts for mapping the uFNDs to the facet normals of the reference meshes (facet-based). At runtime, we first apply the staircase-sensitive Laplacian filter to an input MC (Marching Cubes) mesh, then filter the mesh facet normal field using the learned regression functions, and finally deform it to match the new normal field to obtain a compact approximation of the high-resolution reference model. Tests show that our algorithm achieves higher quality results than previous approaches regarding surface smoothness and surface accuracy.

  4. Charged particle tracking without magnetic field: Optimal measurement of track momentum by a Bayesian analysis of the multiple measurements of deflections due to multiple scattering

    NASA Astrophysics Data System (ADS)

    Frosini, Mikael; Bernard, Denis

    2017-09-01

    We revisit the precision of the measurement of track parameters (position, angle) with optimal methods in the presence of detector resolution, multiple scattering and zero magnetic field. We then obtain an optimal estimator of the track momentum by a Bayesian analysis of the filtering innovations of a series of Kalman filters applied to the track. This work could pave the way to the development of autonomous high-performance gas time-projection chambers (TPC) or silicon wafer γ-ray space telescopes and be a powerful guide in the optimization of the design of the multi-kilo-ton liquid argon TPCs that are under development for neutrino studies.
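
    The central idea of this record, treating each Kalman filter's innovations as evidence for a candidate momentum, can be illustrated with a short sketch. This is not the authors' code: the plane spacing, radiation length, resolutions and the simplified Highland scattering formula below are all illustrative assumptions.

    ```python
    # Sketch: estimate track momentum from a bank of Kalman filters, one per
    # candidate momentum, by comparing the Gaussian likelihoods of their
    # filtering innovations. All geometry/noise numbers are made up.
    import numpy as np

    rng = np.random.default_rng(1)
    n_planes, dx = 30, 1.0          # detector planes, spacing (arbitrary units)
    x0 = 20.0                       # radiation length of one gap, same units
    sigma_det = 0.05                # detector position resolution
    p_true = 100.0                  # true momentum (MeV/c), to be recovered

    def theta_ms(p):                # multiple-scattering angle per gap (simplified Highland)
        return (13.6 / p) * np.sqrt(dx / x0)

    # simulate one track: state = (position y, slope t)
    ys, t, y = [], 0.0, 0.0
    for _ in range(n_planes):
        y += t * dx
        t += rng.normal(0.0, theta_ms(p_true))
        ys.append(y + rng.normal(0.0, sigma_det))
    ys = np.array(ys)

    F = np.array([[1.0, dx], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])

    def innovation_loglik(p):
        """Run a KF assuming momentum p; return summed log-likelihood of innovations."""
        Q = np.array([[0.0, 0.0], [0.0, theta_ms(p) ** 2]])
        x = np.zeros(2)
        P = np.diag([1.0, 1.0])     # loose prior on position and slope
        ll = 0.0
        for z in ys:
            x = F @ x
            P = F @ P @ F.T + Q
            nu = z - (H @ x)[0]                     # innovation
            S = (H @ P @ H.T)[0, 0] + sigma_det**2  # innovation variance
            ll += -0.5 * (nu**2 / S + np.log(2 * np.pi * S))
            K = (P @ H.T / S).ravel()
            x = x + K * nu
            P = (np.eye(2) - np.outer(K, H)) @ P
        return ll

    candidates = np.linspace(20.0, 300.0, 60)
    logpost = np.array([innovation_loglik(p) for p in candidates])  # flat prior
    logpost -= logpost.max()
    post = np.exp(logpost); post /= post.sum()
    print("posterior-mean momentum:", (candidates * post).sum())
    ```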

  5. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
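
    A minimal sketch of the information-filter combination step described above, using synthetic stand-in "field models" rather than real survey or MAGSAT data; the coefficient values and noise levels are illustrative assumptions.

    ```python
    # Sketch: fusing several independent model estimates with an information
    # filter (a Kalman filter written in information form).
    import numpy as np

    rng = np.random.default_rng(0)
    x_true = np.array([30000.0, -2000.0, 5000.0])   # hypothetical field coefficients (nT)

    estimates, infos = [], []
    for _ in range(5):                               # five independent epoch models
        P = np.diag(rng.uniform(50.0, 200.0, 3))     # per-model error covariance
        x_hat = rng.multivariate_normal(x_true, P)   # noisy model estimate
        infos.append(np.linalg.inv(P))               # information matrix = P^-1
        estimates.append(x_hat)

    # Information-filter combination:
    # Lambda_total = sum(Lambda_i);  x = Lambda_total^-1 * sum(Lambda_i @ x_i)
    Lambda_total = sum(infos)
    x_fused = np.linalg.solve(Lambda_total, sum(L @ x for L, x in zip(infos, estimates)))
    P_fused = np.linalg.inv(Lambda_total)

    print("fused estimate:", x_fused)
    print("fused std dev :", np.sqrt(np.diag(P_fused)))
    ```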

  6. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.

  7. Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.

    PubMed

    Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing

    2009-08-21

    Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is essentially susceptible to noise in the measured displacement data. For traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification is accompanied in the force reconstruction and small tractions cannot be recovered from the displacement field with low signal-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of the Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived in terms of the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and experimental data associated with the adhesion of single cardiac myocyte to elastic substrate. The results indicate that the proposed method can highly enhance SNR of the recovered forces to reveal tiny tractions in cell-substrate interaction.
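
    The MMSE regularization idea can be sketched with a generic Wiener deconvolution in 2D Fourier space. This is not the paper's four-parameter FTTC filter: a simple blur kernel stands in for the Boussinesq Green's tensor so the example stays self-contained, and the signal power spectrum is taken from the known ground truth purely for illustration.

    ```python
    # Sketch: MMSE (Wiener) filtering of a noisy inverse problem in 2D Fourier space.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 128
    f_true = np.zeros((n, n)); f_true[40:50, 60:70] = 1.0    # "traction" patch

    # forward model: convolution with a smooth kernel + additive noise
    xx, yy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    H = np.exp(-(xx**2 + yy**2) / (2 * 0.02**2))             # transfer function
    u = np.fft.ifft2(H * np.fft.fft2(f_true)).real
    u_noisy = u + rng.normal(0.0, 0.05, u.shape)

    # Wiener (MMSE) deconvolution: F_hat = conj(H) U / (|H|^2 + N/S)
    U = np.fft.fft2(u_noisy)
    noise_power = 0.05**2 * n * n                            # E|N(k)|^2 for white noise
    signal_power = np.abs(np.fft.fft2(f_true))**2 + 1e-12    # in practice: a prior model
    F_hat = np.conj(H) * U / (np.abs(H)**2 + noise_power / signal_power)
    f_rec = np.fft.ifft2(F_hat).real

    print("naive inverse error :", np.abs(np.fft.ifft2(U / (H + 1e-12)).real - f_true).mean())
    print("Wiener filter error :", np.abs(f_rec - f_true).mean())
    ```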

  8. Progress in navigation filter estimate fusion and its application to spacecraft rendezvous

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1994-01-01

    A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
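
    A minimal sketch of the basic fusion step, assuming the two filters' errors are uncorrelated (the paper additionally treats the cross-covariance and the data-transmission reductions); the states and covariances below are made up.

    ```python
    # Sketch: combine two filter estimates of the same state so that the trace
    # of the fused covariance is minimized (uncorrelated-error case).
    import numpy as np

    def fuse(x1, P1, x2, P2):
        """Information-weighted combination of two estimates of the same state."""
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(P1i + P2i)                 # fused covariance
        x = P @ (P1i @ x1 + P2i @ x2)                # fused estimate
        return x, P

    # two hypothetical navigation filters tracking the same 2-state vector
    x1, P1 = np.array([10.2, 0.9]), np.diag([4.0, 0.5])
    x2, P2 = np.array([ 9.7, 1.1]), np.diag([1.0, 2.0])
    xf, Pf = fuse(x1, P1, x2, P2)

    print("fused state      :", xf)
    print("trace P1, P2, Pf :", np.trace(P1), np.trace(P2), np.trace(Pf))
    ```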

  9. Heat source reconstruction from noisy temperature fields using a gradient anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Beitone, C.; Balandraud, X.; Delpueyo, D.; Grédiac, M.

    2017-01-01

    This paper presents a post-processing technique for noisy temperature maps based on a gradient anisotropic diffusion (GAD) filter in the context of heat source reconstruction. The aim is to reconstruct heat source maps from temperature maps measured using infrared (IR) thermography. Synthetic temperature fields corrupted by added noise are first considered. The GAD filter, which relies on a diffusion process, is optimized to retrieve as well as possible a heat source concentration in a two-dimensional plate. The influence of the dimensions and the intensity of the heat source concentration are discussed. The results obtained are also compared with two other types of filters: averaging filter and Gaussian derivative filter. The second part of this study presents an application for experimental temperature maps measured with an IR camera. The results demonstrate the relevancy of the GAD filter in extracting heat sources from noisy temperature fields.
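
    A minimal sketch of a gradient anisotropic diffusion filter in the usual Perona-Malik form applied to a synthetic noisy temperature map; the conductance function, kappa, time step and iteration count are illustrative choices, not the paper's calibration.

    ```python
    # Sketch: gradient anisotropic diffusion smoothing of a noisy 2D field.
    import numpy as np

    def gad_filter(img, n_iter=50, kappa=0.5, dt=0.15):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # differences toward the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            # edge-stopping conductance: small where gradients are large
            c = lambda g: np.exp(-(g / kappa) ** 2)
            u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
        return u

    rng = np.random.default_rng(3)
    T = np.zeros((64, 64)); T[28:36, 28:36] = 1.0            # synthetic heat-source signature
    T_noisy = T + rng.normal(0.0, 0.2, T.shape)
    T_filt = gad_filter(T_noisy)
    print("RMS error before/after:", np.sqrt(((T_noisy - T)**2).mean()),
          np.sqrt(((T_filt - T)**2).mean()))
    ```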

  10. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.

  11. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.

  12. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method

    PubMed Central

    Korvink, Jan G.

    2016-01-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable. PMID:27279766

  13. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.

  14. Filter Function for Wavefront Sensing Over a Field of View

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.

  15. Robotic fish tracking method based on suboptimal interval Kalman filter

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields, and robotic fish AUVs have become a popular application in intelligent education as well as civil and military domains. In nonlinear tracking analysis of robotic fish, the interval Kalman filter algorithm contains all possible filter results, but the resulting interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm based on a suboptimal interval Kalman filter. The suboptimal interval Kalman filter scheme replaces the interval inverse matrix with its worst-case inverse; it approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory estimated by the suboptimal interval Kalman filter algorithm is better than those obtained with the interval Kalman filter method and the standard filter method.

  16. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  17. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  18. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

    An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

  19. Validation of search filters for identifying pediatric studies in PubMed.

    PubMed

    Leclercq, Edith; Leeflang, Mariska M G; van Dalen, Elvira C; Kremer, Leontien C M

    2013-03-01

    To identify and validate PubMed search filters for retrieving studies including children and to develop a new pediatric search filter for PubMed. We developed 2 different datasets of studies to evaluate the performance of the identified pediatric search filters, expressed in terms of sensitivity, precision, specificity, accuracy, and number needed to read (NNR). An optimal search filter will have a high sensitivity and high precision with a low NNR. In addition to the PubMed Limits: All Child: 0-18 years filter (in May 2012 renamed to PubMed Filter Child: 0-18 years), 6 search filters for identifying studies including children were identified: 3 developed by Kastner et al, 1 developed by BestBets, 1 by the Child Health Field, and 1 by the Cochrane Childhood Cancer Group. Three search filters (Cochrane Childhood Cancer Group, Child Health Field, and BestBets) had the highest sensitivity (99.3%, 99.5%, and 99.3%, respectively) but a lower precision (64.5%, 68.4%, and 66.6%, respectively) compared with the other search filters. Two Kastner search filters had a high precision (93.0% and 93.7%, respectively) but a low sensitivity (58.5% and 44.8%, respectively); they failed to identify many pediatric studies in our datasets. The search terms responsible for false-positive results in the reference dataset were determined. With these data, we developed a new search filter for identifying studies with children in PubMed with an optimal sensitivity (99.5%) and precision (69.0%). Search filters to identify studies including children either have a low sensitivity or a low precision with a high NNR. A new pediatric search filter with a high sensitivity and a low NNR has been developed. Copyright © 2013 Mosby, Inc. All rights reserved.
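
    The reported measures follow directly from confusion-matrix counts; a minimal sketch with made-up counts (not the study's data):

    ```python
    # Sketch: search-filter performance measures from TP/FP/TN/FN counts.
    def filter_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)          # recall: fraction of pediatric studies retrieved
        precision   = tp / (tp + fp)
        specificity = tn / (tn + fp)
        accuracy    = (tp + tn) / (tp + fp + tn + fn)
        nnr         = 1.0 / precision         # number needed to read per relevant record
        return dict(sensitivity=sensitivity, precision=precision,
                    specificity=specificity, accuracy=accuracy, nnr=nnr)

    print(filter_metrics(tp=995, fp=447, tn=1553, fn=5))   # illustrative counts only
    ```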

  20. Reflective Filters Design for Self-Filtering Narrowband Ultraviolet Imaging Experiment Wide-Field Surveys (NUVIEWS) Project

    NASA Technical Reports Server (NTRS)

    Park, Jung- Ho; Kim, Jongmin; Zukic, Muamer; Torr, Douglas G.

    1994-01-01

    We report the design of multilayer reflective filters for the self-filtering cameras of the NUVIEWS project. Wide angle self-filtering cameras were designed to image the C IV (154.9 nm) line emission, and H2 Lyman band fluorescence (centered at 161 nm) over a 20 deg x 30 deg field of view. A key element of the filter design includes the development of π-multilayers optimized to provide maximum reflectance at 154.9 nm and 161 nm for the respective cameras without significant spectral sensitivity to the large cone angle of the incident radiation. We applied self-filtering concepts to design NUVIEWS telescope filters that are composed of three reflective mirrors and one folding mirror. The filters, with narrow bandwidths of 6 and 8 nm at 154.9 and 161 nm, respectively, have net throughputs of more than 50% with average blocking of out-of-band wavelengths better than 3 × 10⁻⁴%.

  1. Linear Quantum Systems: Non-Classical States and Robust Stability

    DTIC Science & Technology

    2016-06-29

    quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for systems with non-classical input states, including single photon states, (ii) determination of how linear... history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control

  2. Spatio-Temporal Field Estimation Using Kriged Kalman Filter (KKF) with Sparsity-Enforcing Sensor Placement.

    PubMed

    Roy, Venkat; Simonetto, Andrea; Leus, Geert

    2018-06-01

    We propose a sensor placement method for spatio-temporal field estimation based on a kriged Kalman filter (KKF) using a network of static or mobile sensors. The developed framework dynamically designs the optimal constellation to place the sensors. We combine the estimation error (for the stationary as well as non-stationary component of the field) minimization problem with a sparsity-enforcing penalty to design the optimal sensor constellation in an economic manner. The developed sensor placement method can be directly used for a general class of covariance matrices (ill-conditioned or well-conditioned) modelling the spatial variability of the stationary component of the field, which acts as a correlated observation noise, while estimating the non-stationary component of the field. Finally, a KKF estimator is used to estimate the field using the measurements from the selected sensing locations. Numerical results are provided to exhibit the feasibility of the proposed dynamic sensor placement followed by the KKF estimation method.

  3. Analysis of multidimensional difference-of-Gaussians filters in terms of directly observable parameters.

    PubMed

    Cope, Davis; Blakeslee, Barbara; McCourt, Mark E

    2013-05-01

    The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
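
    A minimal sketch of a DOG filter and the directly observable quantities discussed above (zero-crossing radius, optimal frequency, optimal gain), using unit-volume circular Gaussians and illustrative parameter values rather than fitted receptive-field data:

    ```python
    # Sketch: DOG kernel, its radial frequency response, and observable parameters.
    import numpy as np
    from scipy.optimize import brentq

    sig_c, sig_s, balance = 1.0, 2.5, 0.8          # center sigma, surround sigma, balance

    def dog_space(r):       # unit-volume 2D Gaussians, center minus weighted surround
        gc = np.exp(-r**2 / (2 * sig_c**2)) / (2 * np.pi * sig_c**2)
        gs = np.exp(-r**2 / (2 * sig_s**2)) / (2 * np.pi * sig_s**2)
        return gc - balance * gs

    def dog_freq(w):        # radial frequency response of the same filter
        return np.exp(-(sig_c * w)**2 / 2) - balance * np.exp(-(sig_s * w)**2 / 2)

    # directly observable: radius where the kernel changes sign
    r0 = brentq(dog_space, 1e-6, 5 * sig_s)

    # optimal frequency and optimal gain (bandpass if the peak sits away from w = 0)
    w = np.linspace(0.0, 5.0 / sig_c, 2000)
    resp = dog_freq(w)
    w_opt = w[np.argmax(resp)]
    gain = resp.max() / dog_freq(0.0)

    print(f"zero-crossing radius = {r0:.3f}")
    print(f"optimal frequency    = {w_opt:.3f}, optimal gain = {gain:.3f}")
    ```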

  4. How to find and type red/brown dwarf stars in near-infrared imaging space observatories

    NASA Astrophysics Data System (ADS)

    Holwerda, Benne Willem; Ryan, Russell; Bridge, Joanna; Pirzkal, Nor; Kenworthy, Matthew; Andersen, Morten; Wilkins, Stephen; Trenti, Michele; Meshkat, Tiffany; Bernard, Stephanie; Smit, Renske

    2018-01-01

    Here we evaluate the near-infrared colors of brown dwarfs as observed with four major infrared imaging space observatories: the Hubble Space Telescope (HST), the James Webb Space Telescope (JWST), the EUCLID mission, and the WFIRST telescope. We use the splat ISPEX spectroscopic library to map out the colors of the M, L, and T-type brown dwarfs. We identify which color-color combination is optimal for identifying broad type and which single color is optimal to then identify the subtype (e.g., T0-9). We evaluate each observatory separately as well as the narrow-field (HST and JWST) and wide-field (EUCLID and WFIRST) combinations. HST filters used thus far for high-redshift searches (e.g. CANDELS and BoRG) are close to optimal within the available filter combinations. A clear improvement over HST is one of two broad/medium filter combinations on JWST: pairing F140M with either F150W or F162M discriminates well between brown dwarf subtypes. The improvement of the JWST filter set over the HST one is so marked that any combination of HST and JWST filters does not improve the classification. The EUCLID filter set alone performs poorly in terms of typing brown dwarfs and WFIRST performs only marginally better, despite a wider selection of filters. A combined EUCLID and WFIRST observation, using WFIRST's W146 and F062 and EUCLID's Y-band, allows for a much better discrimination between broad brown dwarf categories. In this respect, WFIRST acts as a targeted follow-up observatory for the all-sky EUCLID survey. However, subsequent subtyping with the combination of EUCLID and WFIRST observations remains uncertain due to the lack of medium or narrow-band filters in this wavelength range. We argue that a medium band added to the WFIRST filter selection would greatly improve its ability to preselect against brown dwarfs in high-latitude surveys.

  5. Fault tolerant filtering and fault detection for quantum systems driven by fields in single photon states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Qing, E-mail: qing.gao.chance@gmail.com; Dong, Daoyi, E-mail: daoyidong@gmail.com; Petersen, Ian R., E-mail: i.r.petersen@gmai.com

    The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.

  6. Combined adaptive multiple subtraction based on optimized event tracing and extended wiener filtering

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo

    2017-06-01

    The surface-related multiple elimination (SRME) method is based on feedback formulation and has become one of the most preferred multiple suppression methods used. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may result in conventional adaptive multiple subtraction methods being barely able to effectively suppress multiples in actual production. This paper introduces a combined adaptive multiple attenuation method based on the optimized event tracing technique and extended Wiener filtering. The method firstly uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record to an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can then be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is an ideal method for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage of the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.

  7. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  8. Optimal filter design with progressive genetic algorithm for local damage detection in rolling bearings

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Zimroz, Radoslaw

    2018-03-01

    Harsh industrial conditions present in underground mining cause a lot of difficulties for local damage detection in heavy-duty machinery. For vibration signals one of the most intuitive approaches of obtaining signal with expected properties, such as clearly visible informative features, is prefiltration with appropriately prepared filter. Design of such filter is very broad field of research on its own. In this paper authors propose a novel approach to dedicated optimal filter design using progressive genetic algorithm. Presented method is fully data-driven and requires no prior knowledge of the signal. It has been tested against a set of real and simulated data. Effectiveness of operation has been proven for both healthy and damaged case. Termination criterion for evolution process was developed, and diagnostic decision making feature has been proposed for final result determinance.

  9. Determination of tailored filter sets to create rayfiles including spatial and angular resolved spectral information.

    PubMed

    Rotscholl, Ingo; Trampert, Klaus; Krüger, Udo; Perner, Martin; Schmidt, Franz; Neumann, Cornelius

    2015-11-16

    To simulate and optimize optical designs regarding perceived color and homogeneity in commercial ray tracing software, realistic light source models are needed. Spectral rayfiles provide angularly and spatially varying spectral information. We propose a spectral reconstruction method requiring a minimum of time-consuming goniophotometric near-field measurements with optical filters for the purpose of creating spectral rayfiles. Our discussion focuses on the selection of the ideal optical filter combination for any arbitrary spectrum out of a given filter set by considering measurement uncertainties with Monte Carlo simulations. We minimize the simulation time by a preselection of all filter combinations, which is based on factorial design.

  10. The IMPACT Common Module - A Low Cost, Reconfigurable Building Block for Next Generation Phased Arrays

    DTIC Science & Technology

    2016-03-31

    The SiGe receiver has two stages of programmable RF filtering and one stage of IF filtering. Each filter can be tuned in center frequency and... transmit, with an IF to RF upconversion chain that is split to programmable phase shifters and VGAs at each output port. ...These are optimized to run on medium-grade Field Programmable Gate Arrays (FPGAs), such as the Altera Arria 10, and represent a few of the many

  11. Epi-Fluorescence Microscopy

    PubMed Central

    Webb, Donna J.; Brown, Claire M.

    2012-01-01

    Epi-fluorescence microscopy is available in most life sciences research laboratories, and when optimized can be a central laboratory tool. In this chapter, the epi-fluorescence light path is introduced and the various components are discussed in detail. Recommendations are made for incident lamp light sources, excitation and emission filters, dichroic mirrors, objective lenses, and charge-coupled device (CCD) cameras in order to obtain the most sensitive epi-fluorescence microscope. The even illumination of metal-halide lamps combined with new “hard” coated filters and mirrors, a high resolution monochrome CCD camera, and a high NA objective lens are all recommended for high resolution and high sensitivity fluorescence imaging. Recommendations are also made for multicolor imaging with the use of monochrome cameras, motorized filter turrets, individual filter cubes, and corresponding dyes that are the best choice for sensitive, high resolution multicolor imaging. Images should be collected using Nyquist sampling and should be corrected for background intensity contributions and nonuniform illumination across the field of view. Photostable fluorescent probes and proteins that absorb a lot of light (i.e., high extinction co-efficients) and generate a lot of fluorescence signal (i.e., high quantum yields) are optimal. A neuronal immune-fluorescence labeling protocol is also presented. Finally, in order to maximize the utility of sensitive wide-field microscopes and generate the highest resolution images with high signal-to-noise, advice for combining wide-field epi-fluorescence imaging with restorative image deconvolution is presented. PMID:23026996

  12. Ridge filter design and optimization for the broad-beam three-dimensional irradiation system for heavy-ion radiotherapy.

    PubMed

    Schaffner, B; Kanai, T; Futami, Y; Shimbo, M; Urakabe, E

    2000-04-01

    The broad-beam three-dimensional irradiation system under development at National Institute of Radiological Sciences (NIRS) requires a small ridge filter to spread the initially monoenergetic heavy-ion beam to a small spread-out Bragg peak (SOBP). A large SOBP covering the target volume is then achieved by a superposition of differently weighted and displaced small SOBPs. Two approaches were studied for the definition of a suitable ridge filter and experimental verifications were performed. Both approaches show a good agreement between the calculated and measured dose and lead to a good homogeneity of the biological dose in the target. However, the ridge filter design that produces a Gaussian-shaped spectrum of the particle ranges was found to be more robust to small errors and uncertainties in the beam application. Furthermore, an optimization procedure for two fields was applied to compensate for the missing dose from the fragmentation tail for the case of a simple-geometry target. The optimized biological dose distributions show that a very good homogeneity is achievable in the target.

  13. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffer from a limitation of the corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is however a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to static aberration and vibration filtering. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems such as model errors, aliasing effect reduction, and experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up are also discussed.

  14. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on minibatch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.

  15. Texture classification using autoregressive filtering

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.; Lee, M.

    1984-01-01

    A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second order statistics to discriminate between texture classes represented by arbitrary wide sense stationary random fields is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.

  16. Design issues for directional coupler- and MMI-based optical microring resonator filters on InP

    NASA Astrophysics Data System (ADS)

    Themistos, Christos; Kalli, Kyriacos; Komodromos, Michalis; Rajarajan, Muttukrishnan; Rahman, B. M. A.; Grattan, Kenneth T. V.

    2004-08-01

    The characterization and optimization of optical microring resonator-based optical filters on deeply etched GaInAsP-InP waveguides, using the finite element-based beam propagation approach, is presented here. Design issues for directional coupler- and multimode interference coupler-based devices, such as field evolution, optical power, phase, fabrication tolerance and wavelength dependence have been investigated.

  17. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
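
    The recursive, separable implementation idea can be sketched with a first-order IIR smoother run forward and backward along each axis, followed by a simple gradient. This is not the optimized filter derived in the paper; it only illustrates constant-cost-per-pixel recursive filtering under those simplifying assumptions.

    ```python
    # Sketch: separable recursive (IIR) smoothing plus gradient for edge detection.
    import numpy as np
    from scipy.signal import lfilter

    def iir_smooth(x, alpha, axis):
        """Zero-phase first-order recursive smoothing along one axis."""
        b, a = [alpha], [1.0, -(1.0 - alpha)]
        y = lfilter(b, a, x, axis=axis)                                 # forward pass
        y = np.flip(lfilter(b, a, np.flip(y, axis), axis=axis), axis)   # backward pass
        return y

    rng = np.random.default_rng(4)
    img = np.zeros((64, 64)); img[:, 32:] = 1.0             # vertical step edge
    img += rng.normal(0.0, 0.2, img.shape)

    smoothed = iir_smooth(iir_smooth(img, 0.25, axis=0), 0.25, axis=1)
    gx = np.gradient(smoothed, axis=1)
    gy = np.gradient(smoothed, axis=0)
    edges = np.hypot(gx, gy)
    print("strongest response at column:", np.argmax(edges.mean(axis=0)))  # near 32
    ```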

  18. A Machine-Learning and Filtering Based Data Assimilation Framework for Geologic Carbon Sequestration Monitoring Optimization

    NASA Astrophysics Data System (ADS)

    Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.

    2017-12-01

    Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) What is the optimal frequency to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) for all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms: Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS) are used to develop the computationally efficient reduced-order-models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case named Rock Spring Uplift carbon storage site in Southwestern Wyoming.
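
    A minimal sketch of the reduced-order-model step only, assuming scikit-learn's SVR as the surrogate and a made-up analytic function standing in for the CO2/brine flow simulator; the monitoring-design selection and the filtering-based assimilation loop are omitted.

    ```python
    # Sketch: train an SVR reduced-order model (ROM) on forward-simulation samples.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)

    def full_simulation(perm, pressure):
        """Stand-in for an expensive CO2/brine flow run returning cumulative leak."""
        return 0.3 * perm**2 + 0.1 * perm * pressure + 0.05 * pressure

    # design of experiments: sample uncertain parameters, run the "simulator"
    X = rng.uniform([0.0, 1.0], [2.0, 5.0], size=(200, 2))      # (perm, pressure)
    y = full_simulation(X[:, 0], X[:, 1]) + rng.normal(0, 0.01, 200)

    rom = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.005))
    rom.fit(X, y)

    X_test = rng.uniform([0.0, 1.0], [2.0, 5.0], size=(5, 2))
    print("ROM prediction :", rom.predict(X_test))
    print("full simulation:", full_simulation(X_test[:, 0], X_test[:, 1]))
    ```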

  19. Phage-based biomolecular filter for the capture of bacterial pathogens in liquid streams

    NASA Astrophysics Data System (ADS)

    Du, Songtao; Chen, I.-Hsuan; Horikawa, Shin; Lu, Xu; Liu, Yuzhe; Wikle, Howard C.; Suh, Sang Jin; Chin, Bryan A.

    2017-05-01

    This paper investigates a phage-based biomolecular filter that enables the evaluation of large volumes of liquids for the presence of small quantities of bacterial pathogens. The filter is a planar arrangement of phage-coated, strip-shaped magnetoelastic (ME) biosensors (4 mm × 0.8 mm × 0.03 mm), magnetically coupled to a filter frame structure, through which a liquid of interest flows. This "phage filter" is designed to capture specific bacterial pathogens and allow non-specific debris to pass, eliminating the common clogging issue in conventional bead filters. ANSYS Maxwell was used to simulate the magnetic field pattern required to hold ME biosensors densely and to optimize the frame design. Based on the simulation results, a phage filter structure was constructed, and a proof-of-concept experiment was conducted in which a Salmonella solution of known concentration was passed through the filter and the number of captured Salmonella was quantified by plate counting.

  20. A Narrow-Linewidth Atomic Line Filter for Free Space Quantum Key Distribution under Daytime Atmospheric Conditions

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Woolf, David; Hensley, Joel

    2016-05-01

    Quantum key distribution can provide secure optical data links using the established BB84 protocol, though solar backgrounds severely limit the performance through free space. Several approaches to reduce the solar background include time-gating the photon signal, limiting the field of view through geometrical design of the optical system, and spectral rejection using interference filters. Despite optimization of these parameters, the solar background continues to dominate under daytime atmospheric conditions. We demonstrate an improved spectral filter by replacing the interference filter (Δν ~ 50 GHz) with an atomic line filter (Δν ~ 1 GHz) based on optical rotation of linearly polarized light through a warm Rb vapor. By controlling the magnetic field and the optical depth of the vapor, a spectrally narrow region can be transmitted between crossed polarizers. We find that the transmission is more complex than a single peak and evaluate peak transmission as well as a ratio of peak transmission to average transmission of the local spectrum. We compare filters containing a natural abundance of Rb with those containing isotopically pure 87 Rb and 85 Rb. A filter providing > 95 % transmission and Δν ~ 1.1 GHz is achieved.

  1. A flexible curvilinear electromagnetic filter for direct current cathodic arc source.

    PubMed

    Dai, Hua; Shen, Yao; Li, Liuhe; Li, Xiaoling; Cai, Xun; Chu, Paul K

    2007-09-01

    Widespread applications of direct current (dc) cathodic arc deposition are hampered by macroparticle (MP) contamination, although a cathodic arc offers many unique merits such as high ionization rate, high deposition rate, etc. In this work, a flexible curvilinear electromagnetic filter is described to eliminate MPs from a dc cathodic arc source. The filter which has a relatively large size with a minor radius of about 85 mm is suitable for large cathodes. The filter is open and so the MPs do not rebound inside the filter. The flexible design allows the ions to be transported from the cathode to the sample surface optimally. Our measurements with a saturated ion current probe show that the efficiency of this flexible filter reaches about 2.0% (aluminum cathode) when the filter current is about 250 A. The MP density measured from TiN films deposited using this filter is two to three orders of magnitude less than that from films deposited with a 90 degrees duct magnetic filter and three to four orders of magnitude smaller than those deposited without a filter. Furthermore, our experiments reveal that the potential of the filter coil and the magnetic field on the surface of the cathode are two important factors affecting the efficacy of the filter. Different biasing potentials can enhance the efficiency to up to 12-fold, and a magnetic field at about 4.0 mT can improve it by a factor of 2 compared to 5.4 mT.

  2. The effect of spectral filters on visual search in stroke patients.

    PubMed

    Beasley, Ian G; Davies, Leon N

    2013-01-01

    Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.

  3. Variational optimization analysis of temperature and moisture advection in a severe storm environment

    NASA Technical Reports Server (NTRS)

    Mcfarland, M. J.

    1975-01-01

    Horizontal wind components, potential temperature, and mixing ratio fields associated with a severe storm environment in the south central U.S. were analyzed from synoptic upper air observations with a nonhomogeneous, anisotropic weighting function. Each data field was filtered with variational optimization analysis techniques. Variational optimization analysis was also performed on the vertical motion field and was used to produce advective forecasts of the potential temperature and mixing ratio fields. Results show that the dry intrusion is characterized by warm air, the advection of which produces a well-defined upward motion pattern. A corresponding downward motion pattern comprising a deep vertical circulation in the warm air sector of the low pressure system was detected. The axes of maximum dry and warm advection were also found to align with the axis of the tornado-producing squall line.

  4. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
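
    A minimal sketch of the mapping stage, assuming a toy single-axis field model and scikit-learn's PCA and MLPRegressor in place of the paper's 9-sensor network and ANN; all geometry and noise values are illustrative.

    ```python
    # Sketch: PCA compresses the multi-sensor output, a small neural network
    # then regresses position from the reduced features.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    sensor_pos = np.linspace(-40.0, 40.0, 9)                  # 9 sensors along the travel axis

    def sensor_readings(x):
        """Toy field magnitude seen by each sensor for magnet position x (mm)."""
        return 1.0 / (1.0 + ((sensor_pos - x) / 10.0) ** 2)

    positions = rng.uniform(-30.0, 30.0, 3000)
    readings = np.array([sensor_readings(x) for x in positions])
    readings += rng.normal(0.0, 0.01, readings.shape)         # sensor noise

    model = make_pipeline(PCA(n_components=4),                # pseudo-linear filtering step
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                       random_state=0))
    model.fit(readings, positions)

    x_query = np.array([-12.3, 0.0, 17.5])
    pred = model.predict(np.array([sensor_readings(x) for x in x_query]))
    print("true positions     :", x_query)
    print("predicted positions:", np.round(pred, 2))
    ```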

  5. System optimization of a field-widened Michelson interferometric spectral filter for high spectral resolution lidar

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Miller, Ian; Hostetler, Chris; Cook, Anthony; Hair, Johnathan

    2011-06-01

    High spectral resolution lidars (HSRLs) have recently shown great value in aerosol measurements from aircraft and are being called for in future space-based aerosol remote sensing applications. A quasi-monolithic field-widened, off-axis Michelson interferometer has been developed as the spectral discrimination filter for an HSRL currently under development at NASA Langley Research Center (LaRC). The Michelson filter consists of a cubic beam splitter, a solid arm and an air arm. The input light is injected at 1.5° off-axis to provide two output channels: the standard Michelson output and the reflected complementary signal. Piezo packs connect the air-arm mirror to the main body of the filter, allowing the interferometer to be tuned over a small range. In this paper, analyses of the throughput wavephase, locking error, AR coating, and tilt angle of the interferometer are described. The transmission ratio for monochromatic light at the transmitted wavelength is used as a figure of merit for assessing each of these parameters.

  6. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.

    PubMed

    Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad

    2016-12-01

    Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
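
    The prediction idea can be sketched with a single decision variable whose optimum drifts between environment changes and a constant-velocity Kalman filter tracking it; the drift model, noise levels, and the way the located optimum is fed back are illustrative assumptions, not the algorithm of the paper.

      import numpy as np

      F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity transition
      H = np.array([[1.0, 0.0]])                # we observe the located optimum only
      Q = np.diag([0.01, 0.01])                 # assumed process noise
      R = np.array([[0.05]])                    # assumed observation noise

      x, P = np.array([0.0, 0.0]), np.eye(2)    # state: [optimum, drift velocity]
      rng = np.random.default_rng(1)
      true_opt, true_vel = 0.0, 0.3

      for t in range(20):
          true_opt += true_vel                              # environment changes
          z = true_opt + 0.2 * rng.standard_normal()        # optimum found by the MOEA

          # Predict: this is what seeds/guides the population after a change.
          x = F @ x
          P = F @ P @ F.T + Q

          # Update with the optimum located by the evolutionary search.
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ (np.array([z]) - H @ x)).ravel()
          P = (np.eye(2) - K @ H) @ P
          print(f"t={t:2d}  estimated optimum {(H @ x)[0]:+.2f}  true {true_opt:+.2f}")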

  7. A Wide Field of View Plasma Spectrometer

    DOE PAGES

    Skoug, Ruth M.; Funsten, Herbert O.; Moebius, Eberhard; ...

    2016-07-01

    Here we present a fundamentally new type of space plasma spectrometer, the wide field of view plasma spectrometer, whose field of view is >1.25π ster using fewer resources than traditional methods. The enabling component is analogous to a pinhole camera with an electrostatic energy-angle filter at the image plane. Particle energy-per-charge is selected with a tunable bias voltage applied to the filter plate relative to the pinhole aperture plate. For a given bias voltage, charged particles from different directions are focused by different angles to different locations. Particles with appropriate locations and angles can transit the filter plate and are measured using a microchannel plate detector with a position-sensitive anode. Full energy and angle coverage are obtained using a single high-voltage power supply, resulting in considerable resource savings and allowing measurements at fast timescales. Lastly, we present laboratory prototype measurements and simulations demonstrating the instrument concept and discuss optimizations of the instrument design for application to space measurements.

  8. Inverse design of high-Q wave filters in two-dimensional phononic crystals by topology optimization.

    PubMed

    Dong, Hao-Wen; Wang, Yue-Sheng; Zhang, Chuanzeng

    2017-04-01

    Topology optimization of a waveguide-cavity structure in phononic crystals for designing narrow band filters under the given operating frequencies is presented in this paper. We show that it is possible to obtain an ultra-high-Q filter by only optimizing the cavity topology without introducing any other coupling medium. The optimized cavity with highly symmetric resonance can be utilized as the multi-channel filter, raising filter and T-splitter. In addition, most optimized high-Q filters have the Fano resonances near the resonant frequencies. Furthermore, our filter optimization based on the waveguide and cavity, and our simple illustration of a computational approach to wave control in phononic crystals can be extended and applied to design other acoustic devices or even opto-mechanical devices.

  9. Effect of hyperfine-induced spin mixing on the defect-enabled spin blockade and spin filtering in GaNAs

    NASA Astrophysics Data System (ADS)

    Puttisong, Y.; Wang, X. J.; Buyanova, I. A.; Chen, W. M.

    2013-03-01

    The effect of hyperfine interaction (HFI) on the recently discovered room-temperature defect-enabled spin-filtering effect in GaNAs alloys is investigated both experimentally and theoretically based on a spin Hamiltonian analysis. We provide direct experimental evidence that the HFI between the electron and nuclear spin of the central Ga atom of the spin-filtering defect, namely, the Ga_i interstitials, causes strong mixing of the electron spin states of the defect, thereby degrading the efficiency of the spin-filtering effect. We also show that the HFI-induced spin mixing can be suppressed by an application of a longitudinal magnetic field such that the electronic Zeeman interaction overcomes the HFI, leading to well-defined electron spin states beneficial to the spin-filtering effect. The results provide a guideline for further optimization of the defect-engineered spin-filtering effect.

  10. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but the convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model by including acceleration and velocity errors to make the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  11. Are consistent equal-weight particle filters possible?

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2017-12-01

    Particle filters are fully nonlinear data-assimilation methods that could potentially change the way we do data-assimilation in highly nonlinear high-dimensional geophysical systems. However, the standard particle filter in which the observations come in by changing the relative weights of the particles is degenerate. This means that one particle obtains weight one, and all other particles obtain a very small weight, effectively meaning that the ensemble of particles reduces to that one particle. For over 10 years now scientists have searched for solutions to this problem. One obvious solution seems to be localisation, in which each part of the state only sees a limited number of observations. However, for a realistic localisation radius based on physical arguments, the number of observations is typically too large, and the filter is still degenerate. Another route taken is trying to find proposal densities that lead to more similar particle weights. There is a simple proof, however, that shows that there is an optimum, the so-called optimal proposal density, and that optimum will lead to a degenerate filter. On the other hand, it is easy to come up with a counter example of a particle filter that is not degenerate in high-dimensional systems. Furthermore, several particle filters have been developed recently that claim to have equal or equivalent weights. In this presentation I will show how to construct a particle filter that is never degenerate in high-dimensional systems, and how that is still consistent with the proof that one cannot do better than the optimal proposal density. Furthermore, it will be shown how equal- and equivalent-weights particle filters fit within this framework. This insight will then lead to new ways to generate particle filters that are non-degenerate, opening up the field of nonlinear filtering in high-dimensional systems.
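
    The degeneracy described above can be illustrated with a toy importance-weighting experiment: with independent Gaussian likelihoods in each of d dimensions, the effective sample size of a standard particle filter collapses as d grows. The dimensions, particle count, and noise levels below are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      def effective_sample_size(log_w):
          # ESS = 1 / sum(w_i^2) for normalized weights, computed in log space.
          log_w = log_w - log_w.max()
          w = np.exp(log_w)
          w /= w.sum()
          return 1.0 / np.sum(w ** 2)

      n_particles = 1000
      for d in (1, 10, 100, 1000):
          particles = rng.standard_normal((n_particles, d))      # prior samples
          obs = 0.5 * rng.standard_normal(d)                     # one observation
          log_w = -0.5 * np.sum((obs - particles) ** 2, axis=1)  # unit obs. noise
          print(f"d={d:5d}  ESS = {effective_sample_size(log_w):8.1f} of {n_particles}")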

  12. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.

  13. Underwater single beam circumferentially scanning detection system using range-gated receiver and adaptive filter

    NASA Astrophysics Data System (ADS)

    Tan, Yayun; Zhang, He; Zha, Bingting

    2017-09-01

    Underwater target detection and ranging in seawater are of interest in unmanned underwater vehicles. This study presents an underwater detection system that synchronously scans a collimated laser beam and a narrow field of view to circumferentially detect an underwater target. Hybrid methods of range-gated and variable step-size least mean squares (VSS-LMS) adaptive filter are proposed to suppress water backscattering. The range-gated receiver eliminates the backscattering of near-field water. The VSS-LMS filter extracts the target echo in the remaining backscattering and the constant fraction discriminator timing method is used to improve ranging accuracy. The optimal constant fraction is selected by analysing the jitter noise and slope of the target echo. The prototype of the underwater detection system is constructed and tested in coastal seawater, then the effectiveness of backscattering suppression and high-ranging accuracy is verified through experimental results and analysis discussed in this paper.

  14. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

    A visual approach to environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
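
    The potential-field component underlying the planner can be sketched with the standard attractive/repulsive formulation and hand-picked gains; the pseudo-bacterial evolutionary tuning and the template-matching obstacle detection of the paper are not reproduced here, and the scene below is invented.

      import numpy as np

      def potential_gradient(p, goal, obstacles, k_att=1.0, k_rep=50.0, d0=2.0):
          """Gradient of the attractive plus repulsive potential at position p."""
          grad = k_att * (p - goal)                        # attractive term
          for obs in obstacles:
              diff = p - obs
              d = np.linalg.norm(diff)
              if 1e-9 < d < d0:                            # repulsion only within range d0
                  grad += k_rep * (1.0 / d0 - 1.0 / d) * diff / d ** 3
          return grad

      start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
      obstacles = [np.array([5.0, 5.2]), np.array([7.0, 8.0])]

      p, steps = start.copy(), 0
      for steps in range(1, 1001):
          g = potential_gradient(p, goal, obstacles)
          p = p - 0.05 * g / (np.linalg.norm(g) + 1e-9)    # fixed-length descent step
          if np.linalg.norm(p - goal) < 0.1:
              break

      reached = np.linalg.norm(p - goal) < 0.1
      print(f"{'reached goal' if reached else 'stopped'} at {p.round(2)} after {steps} steps")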

  15. Rod-filter-field optimization of the J-PARC RF-driven H{sup −} ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueno, A., E-mail: akira.ueno@j-parc.jp; Ohkoshi, K.; Ikegami, K.

    2015-04-08

    In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H{sup −} ion beam of 60mA within normalized emittances of 1.5πmm•mrad both horizontally and vertically, a flat top beam duty factor of 1.25% (500μs×25Hz) and a life-time of longer than 1month, the J-PARC cesiated RF-driven H{sup −} ion source was developed by using an internal-antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter-field (RFF) is indispensable and one of the most beam-performance-dominative parameters for the RF-driven H{sup −} ion source with the internal-antenna, the procedure to optimize it is not established. In order to optimize the RFF and establish the procedure, the beam performances of the J-PARC source with various types of rod-filter-magnets (RFMs) were measured. By changing RFM's gap length and gap number inside of the region projecting the antenna inner-diameter along the beam axis, the dependence of the H{sup −} ion beam intensity on the net 2MHz-RF power was optimized. Furthermore, fine-tuning of RFM's cross-section (magnetomotive force) was indispensable for easy operation with the temperature (T{sub PE}) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. The 5% reduction of RFM's cross-section decreased the time-constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for T{sub PE} around 60°C.

  16. Convis: A Toolbox to Fit and Simulate Filter-Based Models of Early Visual Processing

    PubMed Central

    Huth, Jacob; Masquelier, Timothée; Arleo, Angelo

    2018-01-01

    We developed Convis, a Python simulation toolbox for large-scale neural populations, which offers arbitrary receptive fields by 3D convolutions executed on a graphics card. The resulting software proves to be flexible and easily extensible in Python, while building on the PyTorch library (The Pytorch Project, 2017), which was previously used successfully in deep learning applications, for just-in-time optimization and compilation of the model onto CPU or GPU architectures. An alternative implementation based on Theano (Theano Development Team, 2016) is also available, although not fully supported. Through automatic differentiation, any parameter of a specified model can be optimized to approach a desired output, which is a significant improvement over, e.g., Monte Carlo or particle optimizations without gradients. We show that a number of models including even complex non-linearities such as contrast gain control and spiking mechanisms can be implemented easily. We show in this paper that we can in particular recreate the simulation results of a popular retina simulation software, VirtualRetina (Wohrer and Kornprobst, 2009), with the added benefit of providing (1) arbitrary linear filters instead of the product of Gaussian and exponential filters and (2) optimization routines utilizing the gradients of the model. We demonstrate the utility of 3D convolution filters with a simple direction selective filter. Also we show that it is possible to optimize the input for a certain goal, rather than the parameters, which can aid the design of experiments as well as closed-loop online stimulus generation. Yet, Convis is more than a retina simulator. For instance it can also predict the response of V1 orientation selective cells. Convis is open source under the GPL-3.0 license and available from https://github.com/jahuth/convis/ with documentation at https://jahuth.github.io/convis/. PMID:29563867
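
    The core idea, a receptive field expressed as a 3D (time, y, x) convolution on the PyTorch stack that Convis builds on, can be sketched as follows; the kernel shape and stimulus are invented for illustration and the calls below are plain PyTorch, not the Convis API.

      import torch

      # A spatiotemporal receptive field as a single-channel 3D convolution.
      rf = torch.nn.Conv3d(1, 1, kernel_size=(5, 7, 7), padding=(2, 3, 3), bias=False)

      # Hand-built kernel: centre-surround in space, biphasic in time (illustrative).
      t = torch.linspace(-1, 1, 5).view(5, 1, 1)
      y = torch.linspace(-1, 1, 7).view(1, 7, 1)
      x = torch.linspace(-1, 1, 7).view(1, 1, 7)
      r2 = x ** 2 + y ** 2
      spatial = torch.exp(-r2 / 0.1) - 0.5 * torch.exp(-r2 / 0.4)   # difference of Gaussians
      temporal = -t * torch.exp(-t ** 2 / 0.2)                      # biphasic time course
      with torch.no_grad():
          rf.weight.copy_((temporal * spatial).view(1, 1, 5, 7, 7))

      # Apply to a stimulus movie shaped (batch, channel, time, height, width).
      stimulus = torch.rand(1, 1, 100, 32, 32)
      response = rf(stimulus)
      print(response.shape)   # torch.Size([1, 1, 100, 32, 32])

      # Because the filter is an ordinary torch module, any of its parameters can be
      # fitted by gradient descent against a desired response, which is the kind of
      # gradient-based optimization the toolbox exposes.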

  17. Experimental evidence of the spatial coherence moiré and the filtering of classes of radiator pairs.

    PubMed

    Castaneda, Roman; Usuga-Castaneda, Mario; Herrera-Ramírez, Jorge

    2007-08-01

    Evidence of the physical existence of the spatial coherence moiré is obtained by confronting numerical results with experimental results of spatially partially coherent interference. Although it was performed for two particular cases, the results reveal a general behavior of the optical fields in any state of spatial coherence. Moreover, the study of the spatial coherence moiré deals with a new type of filtering, named filtering of classes of radiator pairs, which allows changing the power spectrum at the observation plane by modulating the complex degree of spatial coherence, without altering the power distribution at the aperture plane or introducing conventional spatial filters. This new procedure can optimize some technological applications of current interest, such as beam shaping.

  18. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    Accurate real-time measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of systematically analyzing the sources of geomagnetic-field measurement error, built a complete measurement model, into which the previously unconsidered geomagnetic daily variation field was introduced. This paper proposed an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its ability to remove the dependence on a high-precision measurement instrument. PMID:28445508

  19. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J [Erie, CO]; Kapteyn, Henry C [Boulder, CO]

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  20. Adaptive Estimation of Multiple Fading Factors for GPS/INS Integrated Navigation Systems.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2017-06-01

    The Kalman filter has been widely applied in the field of dynamic navigation and positioning. However, its performance will be degraded in the presence of significant model errors and uncertain interferences. In the literature, the fading filter was proposed to control the influence of model errors, and the H-infinity filter can be adopted to address the uncertainties by minimizing the estimation error in the worst case. In this paper, a new multiple fading factor, suitable for the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation system, is proposed based on the optimization of the filter, and a comprehensive filtering algorithm is constructed by integrating the advantages of the H-infinity filter and the proposed multiple fading filter. Measurement data of the GPS/INS integrated navigation system are collected under actual conditions. Stability and robustness of the proposed filtering algorithm are tested with various experiments and comparative analyses are performed with the measurement data. Results demonstrate that both filter divergence and the influence of outliers are restrained effectively with the proposed filtering algorithm, and the precision of the filtering results is improved simultaneously.
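
    The fading idea can be sketched with a scalar filter in which the predicted covariance is inflated whenever the innovations are larger than the filter expects; the simple innovation-ratio rule, the scalar model, and the noise levels below are illustrative assumptions, not the multiple-fading-factor or H-infinity scheme of the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      q, r = 0.01, 0.25              # assumed process / measurement noise variances
      x_est, p_est = 0.0, 1.0
      true_x = 0.0

      for k in range(50):
          true_x += 0.3 if k > 25 else 0.0        # unmodelled drift (model error)
          z = true_x + np.sqrt(r) * rng.standard_normal()

          # Standard prediction for a constant-position model.
          x_pred, p_pred = x_est, p_est + q

          # Fading factor: inflate the predicted covariance when the innovation
          # is larger than its expected variance.
          innovation = z - x_pred
          lam = max(1.0, innovation ** 2 / (p_pred + r))
          p_pred *= lam

          # Update.
          K = p_pred / (p_pred + r)
          x_est = x_pred + K * innovation
          p_est = (1.0 - K) * p_pred

      print(f"final estimate {x_est:.2f}, true value {true_x:.2f}")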

  1. Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation

    PubMed Central

    Budde, Matthew D.; Skinner, Nathan P.; Muftuler, L. Tugan; Schmit, Brian D.; Kurpad, Shekar N.

    2017-01-01

    Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI and subsequently implement the protocol in the healthy human spinal cord. First, two complementary DDE approaches were evaluated using either an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that in the spinal cord, SDE provides similar contrast with an improved signal-to-noise ratio. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using the commercially-available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial volume effects were obtained in a clinically feasible imaging time with a straightforward analysis and variability comparable to axial diffusivity derived from DTI. Overall, the results and optimizations describe a protocol that mitigates several difficulties with DTI of the spinal cord. Detection of acute axonal damage in the injured or diseased spinal cord will benefit from the optimized filter-probe diffusion MRI protocol outlined here. PMID:29311786

  2. Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective

    PubMed Central

    Qian, Xiaoning; Dougherty, Edward R.

    2017-01-01

    The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268

  3. Quasi Eighth-Mode Substrate Integrated Waveguide (SIW) Fractal Resonator Filter Utilizing Gap Coupling Compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Rao, Jia-Yu; Tai, Wen-Si; Wang, Ting; Liu, Fa-Lin

    2016-09-01

    In this paper, a quasi eighth-mode substrate integrated waveguide resonator (QESIWR) with a defected fractal structure (DFS) is first proposed. Compared with the eighth-mode substrate integrated waveguide resonator (ESIWR), this resonator has a lower resonant frequency (f0), an acceptable unloaded quality factor (Qu) and an almost unchanged electric field distribution. In order to validate the properties of the QESIWR, a cascaded quadruplet QESIWR filter is designed and optimized. By using cross coupling and gap coupling compensation, this filter has two transmission zeros (TZs) on each side of the passband. Meanwhile, in comparison with conventional designs, its size is reduced by over 90%. The measured results agree well with the simulated ones.

  4. Optical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S; Seppala, L; Gilmore, K

    2008-07-16

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of the telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m and 0.69 m and six interchangeable, broad-band filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.

  5. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. 
MEDOF was developed in 1992-1993.

  6. Optimized method for atmospheric signal reduction in irregular sampled InSAR time series assisted by external atmospheric information

    NASA Astrophysics Data System (ADS)

    Gong, W.; Meyer, F. J.

    2013-12-01

    It is well known that spatio-temporally varying tropospheric phase signatures complicate the interpretation and detection of small-magnitude deformation signals and poorly studied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data is scarce and irregularly sampled. Limited SAR data coverage is common for many areas affected by geophysical deformation, due either to their low priority in mission programming, unfavorable ground coverage conditions, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types attempt to maximize the linear correlation between the a-priori and the extracted atmospheric phase information. Topography-related phase components, orbit errors and the master atmospheric delay are first removed in a pre-processing step before the atmospheric filters are applied. The first adaptive filter type uses a Gaussian kernel and adaptively adjusts its width (defined in days) until the correlation of extracted and modeled atmospheric signal power is maximized. If atmospheric properties vary along the time series, this approach leads to filter settings that are adapted to best reproduce atmospheric conditions at a certain observation epoch. Despite the superior performance of this first filter design, its Gaussian shape imposes non-physical relative weights onto acquisitions, weights that ignore the known atmospheric noise in the data. Hence, in our second approach we use the atmospheric a-priori information to adaptively define the full shape of the atmospheric filter. For this process, we use a so-called normalized convolution (NC) approach that is often used in image reconstruction. Several NC designs are presented in this paper and studied for relative performance. A cross-validation of all developed algorithms was done using both synthetic and real data. This validation showed that the designed filters outperform conventional filtering methods and are particularly useful for regions with limited data coverage or without a prior deformation model.
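
    The normalized-convolution idea behind the second filter type can be sketched on a 1-D irregularly sampled series with per-epoch confidence weights and a Gaussian kernel; in the paper the kernel width and weights are tuned against the modeled atmospheric signal, whereas the widths, weights, and data below are synthetic.

      import numpy as np

      rng = np.random.default_rng(4)

      # Irregularly sampled acquisition epochs (days) and noisy phase values.
      t = np.sort(rng.uniform(0, 365, 30))
      deformation = 0.02 * t                           # slow linear deformation (synthetic)
      atmosphere = 1.5 * rng.standard_normal(30)       # temporally uncorrelated atmosphere
      phase = deformation + atmosphere
      weights = 1.0 / (1.0 + np.abs(atmosphere))       # a-priori confidence per epoch (synthetic)

      def normalized_convolution(t, values, weights, sigma_days):
          """Weighted Gaussian smoothing on an irregular time axis."""
          smoothed = np.empty_like(values)
          for i, ti in enumerate(t):
              kernel = np.exp(-0.5 * ((t - ti) / sigma_days) ** 2)
              w = kernel * weights
              smoothed[i] = np.sum(w * values) / np.sum(w)
          return smoothed

      for sigma in (15.0, 45.0, 90.0):
          est_def = normalized_convolution(t, phase, weights, sigma)
          est_atm = phase - est_def
          # In the paper, sigma and the weights are chosen by maximizing the correlation
          # between est_atm and the weather-model prediction; here we just report errors.
          rms = np.sqrt(np.mean((est_def - deformation) ** 2))
          print(f"sigma = {sigma:5.1f} days   RMS deformation error = {rms:.2f}")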

  7. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ensslin, Torsten A.; Frommert, Mona

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.

  8. An application of PSO algorithm for multi-criteria geometry optimization of printed low-pass filters based on conductive periodic structures

    NASA Astrophysics Data System (ADS)

    Steckiewicz, Adam; Butrylo, Boguslaw

    2017-08-01

    In this paper we discuss the results of a multi-criteria optimization scheme as well as numerical calculations of periodic conductive structures with selected geometries. Thin printed structures embedded on a flexible dielectric substrate may be applied as simple, cheap, passive low-pass filters with an adjustable cutoff frequency in the low (up to 1 MHz) radio frequency range. The analysis of the electromagnetic phenomena in the presented structures was realized on the basis of a three-dimensional numerical model of three proposed geometries of periodic elements. The finite element method (FEM) was used to obtain a solution of the time-harmonic electromagnetic field. Equivalent lumped electrical parameters of the printed cells obtained in this manner determine the shape of the amplitude transmission characteristic of the low-pass filter. The nonlinear influence of the printed cell geometry on the equivalent parameters of the cell's electric model makes it difficult to find the desired optimal solution directly. Therefore, the problem of estimating the optimal cell geometry, with regard to approximating the prescribed amplitude transmission characteristic with an adjusted cutoff frequency, was solved with the particle swarm optimization (PSO) algorithm. A dynamically adjusted inertia factor was also introduced into the algorithm to improve convergence to the global extremum of the multimodal objective function. Numerical results as well as PSO simulation results were characterized in terms of approximation accuracy of the predefined amplitude characteristics in the pass-band, stop-band and at the cutoff frequency. Three geometries of varying degrees of complexity were considered and their use in signal processing systems was evaluated.
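
    A compact PSO sketch with a linearly decreasing inertia factor is given below; it fits two stand-in 'geometry' parameters so that a first-order low-pass response matches a target cutoff. The objective, parameter ranges, and coefficients are assumptions for illustration; the FEM-derived cell model of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)

      f = np.logspace(3, 7, 200)                    # 1 kHz .. 10 MHz evaluation grid
      f_c_target = 5.0e5                            # desired cutoff frequency (assumed)
      target = 1.0 / np.sqrt(1.0 + (f / f_c_target) ** 2)

      def amplitude_response(params):
          # Stand-in for the FEM-derived cell model: params -> equivalent R and C.
          r_eq, c_eq = np.abs(params)
          f_c = 1.0 / (2.0 * np.pi * r_eq * c_eq + 1e-30)
          return 1.0 / np.sqrt(1.0 + (f / f_c) ** 2)

      def cost(params):
          return np.mean((amplitude_response(params) - target) ** 2)

      n_particles, n_iter = 30, 200
      pos = rng.uniform([1.0, 1e-12], [1e4, 1e-6], size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      for it in range(n_iter):
          w = 0.9 - 0.5 * it / n_iter               # dynamically decreasing inertia factor
          r1, r2 = rng.random((2, n_particles, 1))
          vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
          pos = pos + vel
          costs = np.array([cost(p) for p in pos])
          better = costs < pbest_cost
          pbest[better], pbest_cost[better] = pos[better], costs[better]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("best equivalent R, C:", gbest, " cost:", cost(gbest))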

  9. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
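
    The adaptive-step search over the three trade-off parameters can be sketched as a numerical gradient ascent on a composite performance metric; the placeholder metric below stands in for the correlation-plane measurements (peak height, peak-to-sidelobe ratio) that the real generator computes, so the numbers are purely illustrative.

      import numpy as np

      def performance(alpha, beta, gamma):
          # Placeholder composite metric; in practice this would be measured from
          # correlations of training images against the OT-MACH filter built with
          # these parameters.
          return -((alpha - 0.3) ** 2 + (beta - 0.6) ** 2 + (gamma - 0.1) ** 2)

      params = np.array([0.5, 0.5, 0.5])       # initial alpha, beta, gamma
      step = 0.2

      for it in range(100):
          grad = np.zeros(3)                   # numerical gradient of the metric
          for i in range(3):
              d = np.zeros(3)
              d[i] = 1e-4
              grad[i] = (performance(*(params + d)) - performance(*(params - d))) / 2e-4

          trial = params + step * grad         # ascend the metric
          if performance(*trial) > performance(*params):
              params, step = trial, step * 1.1     # adaptive step: grow on success
          else:
              step *= 0.5                          # shrink on failure

      print("alpha, beta, gamma ~", params.round(3))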

  10. An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking

    NASA Astrophysics Data System (ADS)

    Raihan A. V, Dilshad; Chakravorty, Suman

    2018-03-01

    Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three dimensional orbital motion under the effects of J_2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates outperforming standard UKF and particle filter (PF) implementations.

  11. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is an algorithm-specific parameter-less algorithm. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. Unknown filter parameters are considered as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO is more accurate in estimating the filter parameters than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is used where accuracy is more essential than convergence speed.
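
    A condensed TLBO sketch for this kind of parameter identification is shown below: a second-order IIR model is matched to a known 'unknown plant' by minimizing the output error over a white-noise input. The plant coefficients, model order, and population settings are illustrative, and unstable candidates are simply penalized.

      import numpy as np
      from scipy.signal import lfilter

      rng = np.random.default_rng(6)

      # "Unknown" plant: y[n] = 0.5 x[n] + 0.3 x[n-1] + 0.4 y[n-1] - 0.2 y[n-2]
      plant_b, plant_a = [0.5, 0.3], [1.0, -0.4, 0.2]
      x = rng.standard_normal(500)
      y_plant = lfilter(plant_b, plant_a, x)

      def cost(theta):                      # theta = [b0, b1, a1, a2]
          a = np.concatenate(([1.0], theta[2:]))
          if np.any(np.abs(np.roots(a)) >= 1.0):
              return 1e9                    # penalize unstable candidates
          y_model = lfilter(theta[:2], a, x)
          return np.mean((y_plant - y_model) ** 2)

      pop = rng.uniform(-1, 1, size=(20, 4))
      for it in range(200):
          costs = np.array([cost(p) for p in pop])
          teacher, mean = pop[np.argmin(costs)], pop.mean(axis=0)

          # Teacher phase: move learners toward the teacher, away from the class mean.
          tf = rng.integers(1, 3, size=(20, 1))             # teaching factor 1 or 2
          cand = pop + rng.random((20, 4)) * (teacher - tf * mean)
          better = np.array([cost(c) for c in cand]) < costs
          pop[better] = cand[better]

          # Learner phase: learn from a randomly paired learner.
          costs = np.array([cost(p) for p in pop])
          partner = pop[rng.permutation(20)]
          partner_costs = np.array([cost(p) for p in partner])
          direction = np.where((costs < partner_costs)[:, None], pop - partner, partner - pop)
          cand = pop + rng.random((20, 4)) * direction
          better = np.array([cost(c) for c in cand]) < costs
          pop[better] = cand[better]

      best = pop[np.argmin([cost(p) for p in pop])]
      print("estimated [b0, b1, a1, a2]:", best.round(3), " true:", [0.5, 0.3, -0.4, 0.2])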

  12. Optimization of In-Cylinder Pressure Filter for Engine Research

    DTIC Science & Technology

    2017-06-01

    ARL-TR-8034, June 2017, US Army Research Laboratory: Optimization of In-Cylinder Pressure Filter for Engine Research, by Kenneth S Kim, Michael T Szedlmayer, Kurt M Kruger, and Chol-Bum M...

  13. Robust guaranteed-cost adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.

    2017-05-01

    Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.

  14. Systematic Biological Filter Design with a Desired I/O Filtering Response Based on Promoter-RBS Libraries.

    PubMed

    Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen

    2015-01-01

    In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on the well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method of robust biological filters is summarized into three steps. Firstly, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via nonlinear parameter estimation method. Then, the topology of synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of a biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response.

  15. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

    Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for the ranges of field sizes to consider the variations of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization and first order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm{sup 2} field to 1.21 MeV for a 40 × 40 cm{sup 2} field, and from 2.33 MeV at 3 × 3 cm{sup 2} to 2.18 MeV at 40 × 40 cm{sup 2} for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences on the optimized PDDs were observed to be 0.41% (3 × 3 cm{sup 2}) down to 0.21% (40 × 40 cm{sup 2}) for the 6 MV FFF beam, and 0.35% (3 × 3 cm{sup 2}) down to 0.29% (40 × 40 cm{sup 2}) for the 10 MV FFF beam. The first order derivatives from the functional form were found to improve the computational speed up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.

  16. Optimal and fast E/B separation with a dual messenger field

    NASA Astrophysics Data System (ADS)

    Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-05-01

    We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.
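
    For orientation, the Wiener filter that the dual messenger scheme applies can be sketched in one dimension with diagonal signal and noise covariances in harmonic space; the spectra, map size, and noise level below are invented, and none of the spin-field, masking, or messenger machinery of the paper is reproduced.

      import numpy as np

      rng = np.random.default_rng(7)
      n_pix = 512

      # Synthetic 1-D "map": correlated signal plus white noise.
      k = np.fft.rfftfreq(n_pix)
      signal_power = 1.0 / (1e-3 + k ** 2)                     # assumed red signal spectrum
      s_k = np.sqrt(signal_power / 2) * (rng.standard_normal(k.size)
                                         + 1j * rng.standard_normal(k.size))
      s = np.fft.irfft(s_k, n=n_pix)
      noise_var = 1.0
      d = s + np.sqrt(noise_var) * rng.standard_normal(n_pix)

      # Wiener filter in harmonic space: s_WF = S (S + N)^-1 d with diagonal S, N.
      d_k = np.fft.rfft(d)
      noise_power = noise_var * n_pix                          # white-noise power per mode
      s_wf = np.fft.irfft(signal_power / (signal_power + noise_power) * d_k, n=n_pix)

      print("residual RMS, raw vs Wiener-filtered:",
            np.std(d - s).round(3), np.std(s_wf - s).round(3))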

  17. Generalized Optimal-State-Constraint Extended Kalman Filter (OSC-EKF)

    DTIC Science & Technology

    2017-02-01

    ARL-TR-7948, February 2017, US Army Research Laboratory: Generalized Optimal-State-Constraint Extended Kalman Filter (OSC-EKF), by James M Maley, Kevin... (Weapons and...)

  18. Fabric filter model sensitivity analysis. Final report Jun 1978-Feb 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, R.; Klemm, H.A.; Battye, W.

    1979-04-01

    The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was based on limited comparisons: validation was carried out without regard to optimization of the data inputs selected by the filter users or manufacturers. The sensitivity tests involved introducing into the model several hypothetical data inputs that reflect the expected ranges in the principal filter system variables. Such factors as air/cloth ratio, cleaning frequency, amount of cleaning, specific resistance coefficient K2, the number of compartments, and inlet concentration were examined in various permutations. A key objective of the tests was to determine the variables that require the greatest accuracy in estimation based on their overall impact on model output. For K2 variations, the system resistance and emission properties showed little change; but the cleaning requirement changed drastically. On the other hand, considerable difference in outlet dust concentration was indicated when the degree of fabric cleaning was varied. To make the findings more useful to persons assessing the probable success of proposed or existing filter systems, much of the data output is presented in graphs or charts.

  19. A Low Cost Structurally Optimized Design for Diverse Filter Types

    PubMed Central

    Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar

    2016-01-01

    A wide range of image processing applications deploys two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment. Thus it calls for optimized solutions. Mostly the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrow-scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters for effectively reducing their computational cost along with the added advantage of versatility for supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectively reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with additional capabilities of realizing all of its Ψ-symmetry based subtypes and also its special asymmetric filter case. The two-fold optimized framework thus reduces filter computational cost by up to 75% compared to the conventional approach, and its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image processing applications, especially in a constrained environment. PMID:27832133

  20. Improvement of LOD in Fluorescence Detection with Spectrally Nonuniform Background by Optimization of Emission Filtering.

    PubMed

    Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N

    2017-10-17

    The limit of detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing the noise of the optical background. Efficiently reducing optical background noise in systems with spectrally nonuniform background requires complex optimization of an emission filter, the main element of spectral filtration. Here, we introduce a filter-optimization method which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) the emission spectrum of the analyte, (iii) the emission spectrum of the optical background, and (iv) the transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable, the transmittance spectrum of the filter, which is optimized numerically by maximizing the SNR. Maximizing the SNR provides an accurate way of filter optimization, while a previously used approach based on maximizing the signal-to-background ratio (SBR) is an approximation that can lead to much poorer LOD, specifically in the detection of fluorescently labeled biomolecules.
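
    A toy numerical version of this optimization is sketched below: for a family of ideal long-pass filters, the SNR is computed from assumed analyte and background spectra plus shot, dark, and flicker noise, and the cutoff that maximizes the SNR (rather than the SBR) is selected. All spectra, noise levels, and the restriction to simple long-pass filters are illustrative assumptions.

      import numpy as np

      wl = np.linspace(500.0, 700.0, 401)              # wavelength grid, nm
      dwl = wl[1] - wl[0]

      # Assumed emission spectra (photons per second per nm reaching the detector).
      analyte = 200.0 * np.exp(-0.5 * ((wl - 560.0) / 15.0) ** 2)
      background = 150.0 * np.exp(-0.5 * ((wl - 540.0) / 40.0) ** 2) + 20.0

      dark_var = 400.0       # dark-noise variance (counts^2), assumed
      t_int = 1.0            # integration time, s
      flicker = 0.01         # flicker (proportional) noise coefficient, assumed

      best = None
      for cutoff in np.arange(505.0, 695.0, 5.0):
          transmit = wl >= cutoff                       # ideal long-pass transmittance
          s = analyte[transmit].sum() * dwl * t_int     # signal counts
          b = background[transmit].sum() * dwl * t_int  # background counts
          noise = np.sqrt(s + b + dark_var + (flicker * b) ** 2)   # shot + dark + flicker
          snr, sbr = s / noise, s / max(b, 1e-9)
          if best is None or snr > best[1]:
              best = (cutoff, snr, sbr)

      print(f"best long-pass cutoff {best[0]:.0f} nm, SNR {best[1]:.1f}, SBR {best[2]:.2f}")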

  1. Nonlinear Estimation With Sparse Temporal Measurements

    DTIC Science & Technology

    2016-09-01

    Kalman filter, the extended Kalman filter (EKF) and unscented Kalman filter (UKF) are commonly used in practical application. The Kalman filter is an...optimal estimator for linear systems; the EKF and UKF are sub-optimal approximations of the Kalman filter. The EKF uses a first-order Taylor series...propagated covariance is compared for similarity with a Monte Carlo propagation. The similarity of the covariance matrices is shown to predict filter
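
    For reference, the standard linear Kalman filter cycle that the EKF and UKF approximate for nonlinear systems can be written in a few lines; this is the textbook form, not code from the report.
```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One textbook linear Kalman filter cycle: predict with the state
    model (F, Q), then update with measurement z via the model (H, R)."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```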

  2. Optimizing techniques to capture and extract environmental DNA for detection and quantification of fish.

    PubMed

    Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W

    2016-01-01

    Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies. © 2015 John Wiley & Sons Ltd.

  3. Assessment of noise in non-tectonic displacement derived from GRACE time-variable gravity field

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Shen, Yunzhong

    2017-04-01

    Many studies have focused on estimating the noise in GNSS monitoring time series, but the noise of GNSS time series should be re-estimated after correction for non-tectonic displacement. Knowing the noise in the non-tectonic displacement can help to better identify the sources of the re-estimated noise; however, there is a lack of knowledge of the noise in the non-tectonic displacement. The objective of this work is to assess the noise in the non-tectonic displacement. GRACE time-variable gravity is used to reflect the global mass variation. The GRACE Stokes coefficients of the gravity field are used to calculate the non-tectonic surface displacement at any point on the surface. The Atmosphere and Ocean De-aliasing (AOD1B) model is added back to the GRACE solutions because the complete mass variation is required. The monthly GRACE solutions from CSR, JPL, GFZ and Tongji spanning January 2003 to September 2015 are compared. The degree-1 coefficients derived by Swenson et al. (2008) are added, and the C20 terms are replaced with those obtained from Satellite Laser Ranging. The P4M6 decorrelation and a Fan filter with a radius of 300 km are adopted to reduce the stripe errors. Optimal noise models for the 1054 stations in ITRF2014 are presented. It is found that white noise takes up only a small proportion: less than 18% in the horizontal components and less than 13% in the vertical. The dominant models in the up and north components are ARMA and flicker noise, while in the east component power-law noise is significant. The local distributions of the optimal noise models among the different products are quite similar, which shows that there is little dependence on the different processing strategies adopted. In addition, the reasons for the different distributions of the optimal noise models are investigated. Different filtering methods, such as Gaussian and Han filters, are also applied to examine whether the noise depends on the filter. Keywords: optimal noise model; non-tectonic displacement; GRACE; local distribution; filters

  4. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
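
    The two subjective-enhancement operations mentioned, contrast stretching and high-pass filtering, can be sketched generically as below; this is an illustrative implementation under assumed percentile limits and kernel size, not the Viking processing pipeline.
```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Linear contrast stretch between two percentiles of the histogram."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def highpass_sharpen(img, kernel_size=5, amount=1.0):
    """High-boost sharpening: add back the high-pass component
    (image minus its box-blurred copy)."""
    k = np.ones((kernel_size, kernel_size)) / kernel_size ** 2
    pad = kernel_size // 2
    padded = np.pad(img, pad, mode="reflect")
    blurred = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + kernel_size, j:j + kernel_size] * k)
    return img + amount * (img - blurred)

# Example on a random test image.
img = np.random.rand(64, 64)
enhanced = highpass_sharpen(contrast_stretch(img))
```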

  5. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    PubMed

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method, which estimates height from the fringe pattern frequency, and the algorithm which estimates height from the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), aiming to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically; therefore, the position of the filter pass-band is determined. The width of the filter window is optimized in simulation to balance the elimination of noise against the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential of improving immunity to environmental noise by adapting the filter to the signal, through an adaptive filter design, once the signal SNR can be estimated accurately.
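
    A minimal sketch of the frequency-domain filtering step is given below: the fringe signal is transformed, a window of assumed width is centred on the (theoretically known) fringe frequency, and the filtered complex signal is returned for phase or phase-slope estimation. The window shape and parameters are illustrative, not the optimized values from the paper.
```python
import numpy as np

def filter_fringe_signal(signal, fs, f_center, half_width):
    """Band-pass filter a fringe signal around its central frequency with a
    Hann-shaped window in the Fourier domain; the returned complex signal's
    unwrapped phase can be used for height estimation."""
    n = len(signal)
    spec = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    window = np.zeros(n)
    inside = np.abs(freqs - f_center) <= half_width
    window[inside] = 0.5 * (1 + np.cos(np.pi * (freqs[inside] - f_center) / half_width))
    return np.fft.ifft(spec * window)

# Example: a 2 kHz fringe sampled at 20 kHz (assumed numbers).
fs = 20000.0
t = np.arange(0, 0.01, 1.0 / fs)
sig = np.cos(2 * np.pi * 2000.0 * t + 0.3) + 0.1 * np.random.randn(t.size)
analytic = filter_fringe_signal(sig, fs, f_center=2000.0, half_width=400.0)
phase = np.unwrap(np.angle(analytic))
```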

  6. Enhancement of flow measurements using fluid-dynamic constraints

    NASA Astrophysics Data System (ADS)

    Egger, H.; Seitz, T.; Tropea, C.

    2017-09-01

    Novel experimental modalities acquire spatially resolved velocity measurements for steady-state and transient flows which are of interest for engineering and biological applications. One of the drawbacks of such high-resolution velocity data is their susceptibility to measurement errors. In this paper, we propose a novel filtering strategy that allows enhancement of the noisy measurements to obtain reconstructions of smooth, divergence-free velocity and corresponding pressure fields which together approximately comply with a prescribed flow model. The main step in our approach consists of the appropriate use of the velocity measurements in the design of a linearized flow model which can be shown to be well-posed and consistent with the true velocity and pressure fields up to measurement and modeling errors. The reconstruction procedure is then formulated as an optimal control problem for this linearized flow model. The resulting filter has analyzable smoothing and approximation properties. We briefly discuss the discretization of the approach by finite element methods and comment on the efficient solution by iterative methods. The capability of the proposed filter to significantly reduce data noise is demonstrated by numerical tests, including the application to experimental data. In addition, we compare with other methods such as smoothing and solenoidal filtering.

  7. Efficient tight focusing of laser beams optimally matched to their thin-film linear-to-radial polarization conversion: Method, implementation, and field near focus

    NASA Astrophysics Data System (ADS)

    Sedukhin, Andrey G.; Poleshchuk, Alexander G.

    2018-01-01

    A method is proposed for efficient, rotationally symmetric, tight mirror focusing of laser beams that is optimally matched to their thin-film linear-to-radial polarization conversion by a constant near-Brewster angle of incidence of the beams onto a polarizing element. Two optical systems and their modifications are considered that are based on this method and on the use of Toraldo filters. If focusing components of these systems operate in media with refractive indices equal to that of the focal region, they take the form of an axicon and an annular reflector generated by the revolution of an inclined parabola around the optical axis. Vectorial formulas for calculating the diffracted field near the focus of these systems are derived. Also presented are the results of designing a thin-film obliquely illuminated polarizer and a numerical simulation of deep UV laser beams generated by one of the systems and focused in an immersion liquid. The transverse and axial sizes of a needle longitudinally polarized field generated by the system with a simplest phase Toraldo filter were found to be 0.39 λ and 10.5 λ, with λ being the wavelength in the immersion liquid.

  8. On-Orbit Multi-Field Wavefront Control with a Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David

    2008-01-01

    A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to JWST primary mirror (PM) segments and secondary mirror (SM) simultaneously with a carefully selected ordering. Through computer simulations, the multi-field WFC procedure shows that it can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam Field of View, and the WFC procedure is also computationally stable as the Monte-Carlo simulations indicate. With the incorporation of a Kalman Filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, and the result is that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after Kalman state estimate and correction, the stability of fine-phasing control is much more certain. Kalman Filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics to generate a Maximum-Likelihood optimal state estimator. The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.

  9. Wiener filtering of the COBE Differential Microwave Radiometer data

    NASA Technical Reports Server (NTRS)

    Bunn, Emory F.; Fisher, Karl B.; Hoffman, Yehuda; Lahav, Ofer; Silk, Joseph; Zaroubi, Saleem

    1994-01-01

    We derive an optimal linear filter to suppress the noise from the Cosmic Background Explorer (COBE) satellite Differential Microwave Radiometer (DMR) sky maps for a given power spectrum. We then apply the filter to the first-year DMR data, after removing pixels within 20 deg of the Galactic plane from the data. We are able to identify particular hot and cold spots in the filtered maps at a level 2 to 3 times the noise level. We use the formalism of constrained realizations of Gaussian random fields to assess the uncertainty in the filtered sky maps. In addition to improving the signal-to-noise ratio of the map as a whole, these techniques allow us to recover some information about the cosmic microwave background anisotropy in the missing Galactic plane region. From these maps we are able to determine which hot and cold spots in the data are statistically significant, and which may have been produced by noise. In addition, the filtered maps can be used for comparison with other experiments on similar angular scales.
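
    The core of such a filter is the mode-by-mode weighting S/(S+N); the one-dimensional Fourier-space sketch below shows only that weighting and omits the spherical-harmonic machinery, Galactic-plane cut and constrained realizations used for the DMR maps.
```python
import numpy as np

def wiener_filter_fourier(data, signal_power, noise_power):
    """Wiener filter in Fourier space: multiply each mode by S / (S + N),
    the expected signal fraction of that mode's variance."""
    modes = np.fft.rfft(data)
    gain = signal_power / (signal_power + noise_power)
    return np.fft.irfft(gain * modes, n=len(data))

# Example with assumed per-mode signal and noise power.
x = np.random.randn(512)
n_modes = len(np.fft.rfft(x))
S = np.full(n_modes, 2.0)   # assumed signal power per mode
N = np.full(n_modes, 1.0)   # assumed noise power per mode
filtered = wiener_filter_fourier(x, S, N)
```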

  10. Modelling hydrology of a single bioretention system with HYDRUS-1D.

    PubMed

    Meng, Yingying; Wang, Huixiao; Chen, Jiangang; Zhang, Shuhan

    2014-01-01

    A study was carried out on the effectiveness of bioretention systems to abate stormwater using computer simulation. The hydrologic performance was simulated for two bioretention cells using HYDRUS-1D, and the simulation results were verified by field data of nearly four years. Using the validated model, the optimization of the design parameters of rainfall return period, filter media depth and type, and surface area was discussed, and the annual hydrologic performance of bioretention systems was further analyzed under the optimized parameters. The study reveals that bioretention systems with underdrains and impervious boundaries do have some detention capability, while their total water retention capability is extremely limited. Better detention capability is noted for smaller rainfall events, deeper filter media, and design storms with a return period smaller than 2 years, and a cost-effective filter media depth is recommended in bioretention design. Better hydrologic effectiveness is achieved with a higher hydraulic conductivity and a higher ratio of the bioretention surface area to the catchment area; filter media whose conductivity is between that of loamy sand and sandy loam, and a surface area of 10% of the catchment area, are recommended. In the long-term simulation, both infiltration volume and evapotranspiration are critical for the total rainfall treatment in bioretention systems.

  11. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    NASA Astrophysics Data System (ADS)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. The traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with complex variations in iris imaging, including illumination, aging, deformation, and device variations. An adaptive Gabor filter selection strategy and a deep learning architecture are therefore presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters may produce more distinctive Gabor coefficients and that our iris deep representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
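
    The quantities that a swarm optimizer would tune in such a scheme are the Gabor kernel parameters; a plain numpy construction of one real-valued kernel is sketched below, with parameter names and values chosen for illustration only.
```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2D Gabor kernel; wavelength, theta (orientation),
    sigma and gamma are the kind of parameters a swarm optimizer could tune."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# Example: a 31x31 kernel tuned to diagonal texture (assumed parameters).
kernel = gabor_kernel(31, wavelength=8.0, theta=np.pi / 4, sigma=4.0)
```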

  12. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter has been demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grain materials, provided that the filter is properly designed and optimized. In the literature, in order to accurately approximate the defect echoes, the design utilized the real excitation signals, which made it time-consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals, and the control parameters are chosen and optimized based on the real scenario of the array transducer, the transmitter-receiver system response, and the test sample; as a result, the filter response is optimized and depends on the material characteristics. Experiments on industrial samples are conducted, and the results confirm the great benefits of the method.
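
    The baseline operation being optimized is ordinary matched filtering, sketched below with a simulated template; the paper's contribution lies in how the template and control parameters are derived from the transducer, system response and test sample, which is not reproduced here.
```python
import numpy as np
from scipy.signal import correlate

def matched_filter(trace, template):
    """Correlate an A-scan with a (simulated) echo template; correlation
    with a known echo shape maximizes output SNR in white noise."""
    template = template - template.mean()
    template = template / np.linalg.norm(template)
    return correlate(trace, template, mode="same")

# Example: detect a known echo shape buried in noise (toy data).
t = np.linspace(0, 1e-6, 200)
template = np.sin(2 * np.pi * 5e6 * t) * np.hanning(t.size)
trace = 0.5 * np.random.randn(2000)
trace[900:1100] += template
output = matched_filter(trace, template)
```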

  13. Online analysis of five organic ultraviolet filters in environmental water samples using magnetism-enhanced monolith-based in-tube solid phase microextraction coupled with high-performance liquid chromatography.

    PubMed

    Mei, Meng; Huang, Xiaojia

    2017-11-24

    Due to their endocrine-disrupting properties, organic UV filters pose a considerable risk for humans and other organisms. Therefore, the development of accurate and effective analytical methods is needed for the determination of UV filters in environmental waters. In this work, a fast, sensitive and environmentally friendly method combining magnetism-enhanced monolith-based in-tube solid phase microextraction with high-performance liquid chromatography with diode array detection (ME-MB-IT/SPME-HPLC-DAD) was developed for the online analysis of five organic UV filters in environmental water samples. To extract UV filters effectively, an ionic liquid-based monolithic capillary column doped with magnetic nanoparticles was prepared by in-situ polymerization and used as the extraction medium of the online ME-MB-IT/SPME-HPLC-DAD system. Several extraction conditions, including the intensity of the magnetic field, the sampling and desorption flow rates, the volumes of sample and desorption solvent, and the pH value and ionic strength of the sample matrix, were optimized thoroughly. Under the optimized conditions, the extraction efficiencies for the five organic UV filters were in the range of 44.0-100%. The limits of detection (S/N=3) and limits of quantification (S/N=10) were 0.04-0.26 μg/L and 0.12-0.87 μg/L, respectively. The precisions, indicated by relative standard deviations (RSDs), were less than 10% for both intra- and inter-day variability. Finally, the developed method was successfully applied to the determination of UV filters in three environmental water samples and satisfactory results were obtained. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Development of a variable structure-based fault detection and diagnosis strategy applied to an electromechanical system

    NASA Astrophysics Data System (ADS)

    Gadsden, S. Andrew; Kirubarajan, T.

    2017-05-01

    Signal processing techniques are prevalent in a wide range of fields: control, target tracking, telecommunications, robotics, fault detection and diagnosis, and even stock market analysis, to name a few. Although first introduced in the 1950s, the most popular method used for signal processing and state estimation remains the Kalman filter (KF). The KF offers an optimal solution to the estimation problem under strict assumptions. Since then, a number of other estimation strategies and filters have been introduced to overcome robustness issues, such as the smooth variable structure filter (SVSF). In this paper, properties of the SVSF are explored in an effort to detect and diagnose faults in an electromechanical system. The results are compared with the KF method, and future work is discussed.

  15. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  17. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence.

    PubMed

    Li, Sui-Xian

    2018-05-07

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set as the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis for the first filter, a generally uniform distribution of the peaks of the filters, and substantial overlaps of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the system. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
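
    A compact sketch of the traditional greedy MLI selection that the paper starts from (and then improves by also trying other first filters) could look like the following; the filter transmittance matrix and the number of filters to select are placeholders.
```python
import numpy as np

def select_filters_mli(transmittances, k):
    """Greedy maximum-linear-independence selection: start from the filter
    with the largest L2 norm, then repeatedly add the filter with the largest
    residual after projection onto the span of those already chosen.
    `transmittances` has shape (n_filters, n_wavelengths)."""
    T = np.asarray(transmittances, dtype=float)
    chosen = [int(np.argmax(np.linalg.norm(T, axis=1)))]
    for _ in range(k - 1):
        Q, _ = np.linalg.qr(T[chosen].T)       # orthonormal basis of chosen set
        residual = T - (T @ Q) @ Q.T           # components outside that span
        scores = np.linalg.norm(residual, axis=1)
        scores[chosen] = -1.0                  # never re-pick a chosen filter
        chosen.append(int(np.argmax(scores)))
    return chosen

# Example: pick 3 filters out of 8 random transmittance curves (toy data).
curves = np.random.rand(8, 61)
print(select_filters_mli(curves, 3))
```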

  18. Prototype color field sequential television lens assembly

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The design, development, and evaluation of a prototype modular lens assembly with a self-contained field sequential color wheel is presented. The design of a color wheel of maximum efficiency, the selection of spectral filters, and the design of a quiet, efficient wheel drive system are included. Design tradeoffs considered for each aspect of the modular assembly are discussed. Emphasis is placed on achieving a design which can be attached directly to an unmodified camera, thus permitting use of the assembly in evaluating various candidate camera and sensor designs. A technique is described which permits maintaining high optical efficiency with an unmodified camera. A motor synchronization system is developed which requires only the vertical synchronization signal as a reference frequency input. Equations and tradeoff curves are developed to permit optimizing the filter wheel aperture shapes for a variety of different design conditions.

  19. An optimal filter for short photoplethysmogram signals

    PubMed Central

    Liang, Yongbo; Elgendi, Mohamed; Chen, Zhencheng; Ward, Rabab

    2018-01-01

    A photoplethysmogram (PPG) contains a wealth of cardiovascular system information, and with the development of wearable technology, it has become a basic technique for evaluating cardiovascular health and detecting diseases. However, due to the varying environments in which wearable devices are used and, consequently, their varying susceptibility to noise interference, effective processing of PPG signals is challenging. Thus, the aim of this study was to determine the optimal filter and filter order to be used for PPG signal processing to make the systolic and diastolic waves more salient in the filtered PPG signal, using the skewness quality index. Nine types of filters with 10 different orders were used to filter 219 short (2.1 s) PPG signals. The signals were divided into three categories by PPG experts according to their noise levels: excellent, acceptable, or unfit. Results show that the Chebyshev II filter can improve PPG signal quality more effectively than other types of filters and that the optimal order for the Chebyshev II filter is the 4th order. PMID:29714722
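
    A sketch of the recommended filter type, a 4th-order Chebyshev II filter applied with zero phase, is shown below using scipy; the pass-band edges and stop-band attenuation are assumed values for illustration, not the exact settings from the study.
```python
import numpy as np
from scipy.signal import cheby2, filtfilt

def filter_ppg(ppg, fs, order=4, stop_atten_db=20.0, band=(0.5, 10.0)):
    """4th-order Chebyshev II band-pass, applied forwards and backwards
    (zero phase) to a short PPG segment. Band edges and stop-band
    attenuation here are illustrative assumptions."""
    b, a = cheby2(order, stop_atten_db, band, btype="bandpass", fs=fs)
    return filtfilt(b, a, ppg)

# Example: clean a simulated 2.1 s PPG-like signal sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 2.1, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
noisy = ppg + 0.2 * np.random.randn(t.size)
clean = filter_ppg(noisy, fs)
```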

  20. Optimal Appearance Model for Visual Tracking

    PubMed Central

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639

  1. Spectral optimized asymmetric segmented phase-only correlation filter.

    PubMed

    Leonard, I; Alfalou, A; Brosseau, C

    2012-05-10

    We suggest a new type of optimized composite filter, i.e., the asymmetric segmented phase-only filter (ASPOF), for improving the effectiveness of a VanderLugt correlator (VLC) when used for face identification. Basically, it consists of merging several reference images after application of a specific spectral optimization method. After segmentation of the spectral filter plane into several areas, each area is assigned to a single winner reference according to a new optimized criterion. The point of the paper is to show that this method offers a significant performance improvement over standard composite filters for face identification. We first briefly revisit composite filters (adapted, phase-only, inverse, compromise optimal, segmented, minimum average correlation energy, optimal trade-off maximum average correlation, and amplitude-modulated phase-only (AMPOF) filters), which are the tools of choice for face recognition based on correlation techniques, and compare their performances with those of the ASPOF. We illustrate some of the drawbacks of current filters for several binary and grayscale image identification tasks. Next, we describe the optimization steps and introduce the ASPOF, which can overcome these technical issues to improve the quality and the reliability of the correlation-based decision. We derive performance measures, i.e., PCE values and receiver operating characteristic curves, to confirm the consistency of the results. We numerically find that this filter increases the recognition rate and decreases the false alarm rate. The results show that the discrimination of the ASPOF is comparable to that of the AMPOF, but the ASPOF is more robust than the optimal trade-off maximum average correlation height filter against rotation and various types of noise sources. Our method has several features that make it amenable to experimental implementation using a VLC.

  2. Broadband spatial optical filtering with a volume Bragg grating and a blazed grating pair

    NASA Astrophysics Data System (ADS)

    Chen, Guanjin; Sun, Xiaojie; Yuan, Xiao; Zhang, Guiju

    2017-10-01

    A broadband spatial optical filtering system composed of a volume Bragg grating (VBG) and a blazed grating pair is presented in this paper. The diffraction efficiency and filtering properties are calculated and simulated using Fourier diffraction analysis and coupled wave theory. A blazed grating pair and VBG structures are designed and optimized in our simulation. The diffraction efficiency of the filtering system is more than 77.2% over the wavelength range from 953 nm to 1153 nm, and reaches 84.1% at the center wavelength. The beam quality is described by the near-field modulation (M) and the contrast ratio (C). The M values of the filtered beams are 1.44, 1.49 and 1.55, and the C values are 10.1%, 10.2% and 10.5%, respectively, and the beam intensity distribution is greatly improved. The cut-off frequencies of the three filtering systems are 1.57 mm⁻¹, 2.06 mm⁻¹ and 2.38 mm⁻¹, respectively, from the power spectral density (PSD) curves. It is clear that the cut-off frequency of the filtering system is closely related to the angular selectivity of the VBG, and that the value of the cut-off frequency is determined by the VBG's half width at first zero (HWFZ) and center wavelength.

  3. Grayscale Optical Correlator Workbench

    NASA Technical Reports Server (NTRS)

    Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin

    2006-01-01

    Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.

  4. Design Method of Digital Optimal Control Scheme and Multiple Paralleled Bridge Type Current Amplifier for Generating Gradient Magnetic Fields in MRI Systems

    NASA Astrophysics Data System (ADS)

    Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo

    This paper deals with a digital control scheme for a multiple-paralleled high-frequency switching current amplifier with four-quadrant choppers for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track highly precise current patterns in the gradient coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other, and the switching gate pulse patterns are designed to be optimal without being influenced by the large filter current ripple amplitude. The optimal control implementation and linear control theory in GC current amplifiers have an affinity to each other, with excellent characteristics. The digital control system can be realized easily through digital control implementation on DSPs or microprocessors. Multiple parallel-operating microprocessors realize a two-or-more paralleled GC current-pattern tracking amplifier with an optimal control design, and excellent results are obtained for improving the image quality of MRI systems.

  5. Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

    NASA Astrophysics Data System (ADS)

    Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

    In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothing filters and introduces a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
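
    What the swarm algorithms search over is essentially a sequence of simple smoothing primitives scored by an edge-enhancement objective; the sketch below shows such a fitness function with a toy objective and an assumed set of primitives, leaving the ABC/PSO/ACO search itself out.
```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter, uniform_filter

# Candidate smoothing primitives; a swarm algorithm (ABC/PSO/ACO) would
# search over sequences of these to build the hybrid filter.
PRIMITIVES = {
    "median3": lambda im: median_filter(im, size=3),
    "gauss1": lambda im: gaussian_filter(im, sigma=1.0),
    "box3": lambda im: uniform_filter(im, size=3),
}

def edge_fitness(image, sequence):
    """Fitness of one candidate hybrid filter: apply the smoothing sequence,
    then score the mean gradient magnitude of the unsharp-masked result.
    This is a toy objective standing in for the paper's enhancement measure."""
    smoothed = image.astype(float)
    for name in sequence:
        smoothed = PRIMITIVES[name](smoothed)
    enhanced = image + (image - smoothed)     # unsharp-style edge boost
    gy, gx = np.gradient(enhanced)
    return np.mean(np.hypot(gx, gy))

# Example: score one candidate sequence on a random test image.
img = np.random.rand(64, 64)
print(edge_fitness(img, ["gauss1", "median3"]))
```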

  6. Initial Ares I Bending Filter Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark

    2007-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.

  7. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. However, at the same time, the difference between the inter-frame images increases, which increases the difficulty of image matching, decreases the matching accuracy, and may ultimately lead to failure in solving the depth field. One common practice is to use a tracking-and-matching method to improve the matching accuracy between images, but this approach is prone to matching drift between images with a large interval, resulting in cumulative error in image matching, so the accuracy of the solved depth field remains very low. In this paper, we propose a depth field fusion algorithm based on the optimal length of the baseline. Firstly, we analyze the quantitative relationship between the accuracy of the depth field calculation and the length of the baseline between frames, and find the optimal baseline length through extensive experiments; secondly, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from these experiments plays a guiding role in the calculation of the depth field in front-view monocular vision.

  8. Microwave monolithic filter and phase shifter using magnetic nanostructures

    NASA Astrophysics Data System (ADS)

    Aslam, Shehreen; Khanna, Manoj; Veenugopal, Veerakumar; Kuanr, Bijoy K.

    2018-05-01

    Monolithic Microwave Integrated Circuits (MMICs) have had a major impact on the development of microwave communication technology. Transition-metal-based ferromagnetic nanowired (FMNW) substrates are of special interest for fabricating these MMIC devices. Their saturation magnetization is considerably higher than that of ferrites, which makes them suitable for high-frequency (>10-40 GHz) operation at zero or a small applied magnetic field. CoFeB nanowires in anodic alumina templates were synthesized using a three-electrode electro-deposition system. After electro-deposition, a 1 μm thick Cu layer was sputtered on the top surface of the FMNW substrate and lithography was performed to define microstrip lines. These microstrip transmission lines were tested as band-stop filters and phase shifters based on ferromagnetic resonance (FMR) over a wide applied magnetic field (H) range. It was observed that the attenuation and frequency increase with increasing magnetic field (up to 5.3 kOe). For the phase shifter, the influence of the magnetic material was studied in two frequency regions: (i) below FMR and (ii) above FMR. These two frequency regions are suitable for many practical device applications because the insertion loss there is much lower than in the resonance frequency region. In the high-frequency region (at 35 GHz), the optimal differential phase shift increased significantly to ~250 deg/cm, and in the low-frequency region (at 24 GHz), the optimal differential phase shift is ~175 deg/cm at the highest field (H) value.

  9. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are the selection of the error states of the filter and the tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  10. Application of 3D triangulations of airborne laser scanning data to estimate boreal forest leaf area index

    NASA Astrophysics Data System (ADS)

    Majasalmi, Titta; Korhonen, Lauri; Korpela, Ilkka; Vauhkonen, Jari

    2017-07-01

    We propose 3D triangulations of airborne laser scanning (ALS) point clouds as a new approach to derive 3D canopy structures and to estimate forest canopy effective LAI (LAIe). Computational geometry and topological connectivity were employed to filter the triangulations to yield a quasi-optimal relationship with the field-measured LAIe. The optimal filtering parameters were predicted based on ALS height metrics, emulating the production of maps of LAIe and canopy volume for large areas. The LAIe from triangulations was validated with field-measured LAIe and compared with a reference LAIe calculated from ALS data using a logarithmic model based on Beer's law. Canopy transmittance was estimated using the All Echo Cover Index (ACI), and the mean projection of unit foliage area (β) was obtained using no-intercept regression with field-measured LAIe. We investigated the influence of species and season on the triangulated LAIe and demonstrated the relationship between triangulated LAIe and canopy volume. Our data are from 115 forest plots located in the southern boreal forest area of Finland, and for each plot three different ALS datasets were available for the triangulations. The triangulation approach was found to be applicable for both leaf-on and leaf-off datasets after initial calibration. Results showed that the root mean square error (RMSE) between LAIe from triangulations and field-measured values was smallest using the highest pulse density data (RMSE = 0.63, coefficient of determination (R2) = 0.53). Yet, the LAIe calculated using the ACI index agreed better with the field-measured LAIe (RMSE = 0.53 and R2 = 0.70). The best models to predict the optimal alpha value contained the ACI index, which indicates that within-crown transmittance is accounted for by the triangulation approach. The cover indices may be recommended for retrieving LAIe only, but for applications which require more sophisticated information on canopy shape and volume, such as radiative transfer models, the triangulation approach may be preferred.

  11. Iterative dip-steering median filter

    NASA Astrophysics Data System (ADS)

    Huo, Shoudong; Zhu, Weihong; Shi, Taikun

    2017-09-01

    Seismic data are always contaminated with high noise components, which present processing challenges, especially for signal preservation and its true amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies a median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in descending sequence, until all signals have been retained. The method is tested on both synthetic and field data gathers and also compared with the commonly used f-k least-squares de-noising and f-x deconvolution.
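
    One pass of the dip-steering idea can be sketched as follows: traces in a small window are shifted to flatten an identified dominant dip, a lateral median is taken, and the result stands in for the signal retained along that dip. Edge handling and the iteration over remaining dips are omitted; all parameters are illustrative.
```python
import numpy as np

def dip_steered_median(section, dip, half_traces=2):
    """Median filter along one dominant dip of a 2D seismic section
    (time samples x traces). `dip` is the event slope in samples per
    trace; neighbouring traces are shifted to flatten that dip before
    taking the lateral median."""
    n_t, n_x = section.shape
    out = np.zeros_like(section, dtype=float)
    for ix in range(n_x):
        samples = []
        for dx in range(-half_traces, half_traces + 1):
            jx = int(np.clip(ix + dx, 0, n_x - 1))
            shift = int(round(dip * dx))
            samples.append(np.roll(section[:, jx], -shift))  # flatten the dip
        out[:, ix] = np.median(np.stack(samples, axis=1), axis=1)
    return out

# Example: a toy section with a dipping event plus noise.
section = 0.2 * np.random.randn(200, 40)
for x in range(40):
    section[50 + 2 * x, x] += 1.0    # event dipping at 2 samples per trace
filtered = dip_steered_median(section, dip=2.0)
```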

  12. Optimal frequency domain textural edge detection filter

    NASA Technical Reports Server (NTRS)

    Townsend, J. K.; Shanmugan, K. S.; Frost, V. S.

    1985-01-01

    An optimal frequency domain textural edge detection filter is developed and its performance evaluated. For the given model and filter bandwidth, the filter maximizes the amount of output image energy placed within a specified resolution interval centered on the textural edge. Filter derivation is based on relating textural edge detection to tonal edge detection via the complex low-pass equivalent representation of narrowband bandpass signals and systems. The filter is specified in terms of the prolate spheroidal wave functions translated in frequency. Performance is evaluated using the asymptotic approximation version of the filter. This evaluation demonstrates satisfactory filter performance for ideal and nonideal textures. In addition, the filter can be adjusted to detect textural edges in noisy images at the expense of edge resolution.

  13. Finding knowledge translation articles in CINAHL.

    PubMed

    Lokker, Cynthia; McKibbon, K Ann; Wilczynski, Nancy L; Haynes, R Brian; Ciliska, Donna; Dobbins, Maureen; Davis, David A; Straus, Sharon E

    2010-01-01

    The process of moving research into practice has a number of names, including knowledge translation (KT). Researchers and decision makers need to be able to readily access the literature on KT for the field to grow and to evaluate the existing evidence. The objective was to develop and validate search filters for finding KT articles in the database Cumulative Index to Nursing and Allied Health (CINAHL). A gold standard database was constructed by hand-searching and classifying articles from 12 journals as KT Content, KT Applications, and KT Theory. Performance was measured as the sensitivity, specificity, precision, and accuracy of the search filters. Optimized search filters had fairly low sensitivity and specificity for KT Content (58.4% and 64.9%, respectively), while sensitivity and specificity increased for retrieving KT Application (67.5% and 70.2%) and KT Theory articles (70.4% and 77.8%). Search filter performance was suboptimal, reflecting the broad base of disciplines and vocabularies used by KT researchers. Such diversity makes retrieval of KT studies in CINAHL difficult.

  14. Modeling of a field-widened Michelson interferometric filter for application in a high spectral resolution lidar

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Hostetler, Chris; Cook, Anthony; Miller, Ian; Hair, Johnathan

    2011-11-01

    High spectral resolution lidars (HSRLs) are increasingly being deployed on aircraft and called for on future space-based missions. The HSRL technique relies on spectral discrimination of the atmospheric backscatter signals to enable independent, unambiguous retrieval of aerosol extinction and backscatter. A compact, monolithic field-widened Michelson interferometer is being developed as the spectral discrimination filter for an HSRL system at NASA Langley Research Center. The interferometer consists of a cubic beam splitter, a solid glass arm, and an air arm. The spacer that connects the air-arm mirror to the main part of the interferometer is designed to optimize thermal compensation such that the maximum interference can be tuned with great precision to the transmitted laser wavelength. In this paper, a comprehensive radiometric model for the field-widened Michelson interferometric spectral filter is presented. The model incorporates the angular distribution and finite cross-sectional area of the light source, the reflectance of all surfaces, absorption losses, and the lack of parallelism between the air arm and the solid arm, among other effects. The model can be used to assess the performance of the interferometer, and thus it is a useful tool to evaluate performance budgets and to set optical specifications for new designs of the same basic interferometer type.

  15. Analysis and design of planar waveguide elements for use in filters and sensors

    NASA Astrophysics Data System (ADS)

    Chen, Guangzhou

    In this dissertation we present both theoretical analysis and practical design considerations for planar optical waveguide devices. The analysis takes into account both transverse dimensions of the waveguides and is based on supermode theory combined with the resonance method for the determination of the propagation constants and field profiles of the supermodes. An improved accuracy has been achieved by including corrections due to the fields in the corner regions of the waveguides using perturbation theory. We analyze in detail two particular devices, an optical filter/combiner and an optical sensor. An optical wavelength filter/combiner is a common element in an integrated optical circuit. A new "bend free" filter/combiner is proposed and analyzed. The new wavelength filter consists of only straight parallel channels, which considerably simplify both the analysis and fabrication of the device. We show in detail how the operation of the device depends upon each of the design parameters. The intrinsic power loss in the proposed filter/combiner is minimized. The optical sensor is another important device and the sensitivity of measurement is an important issue in its design. Two operating mechanisms used in prior optical sensors are evanescent wave sensing or surface plasmon excitation. In this dissertation, we present a sensor with a directional coupler structure in which a measurand to be detected is interfaced with one side of the cladding. The analysis shows that it is possible to make a high resolution device by adjusting the design parameters. The dimensions and materials used in an optimized design are presented.

  16. Parallel filtering in global gyrokinetic simulations

    NASA Astrophysics Data System (ADS)

    Jolliet, S.; McMillan, B. F.; Villard, L.; Vernay, T.; Angelino, P.; Tran, T. M.; Brunner, S.; Bottino, A.; Idomura, Y.

    2012-02-01

    In this work, a Fourier solver [B.F. McMillan, S. Jolliet, A. Bottino, P. Angelino, T.M. Tran, L. Villard, Comp. Phys. Commun. 181 (2010) 715] is implemented in the global Eulerian gyrokinetic code GT5D [Y. Idomura, H. Urano, N. Aiba, S. Tokuda, Nucl. Fusion 49 (2009) 065029] and in the global Particle-In-Cell code ORB5 [S. Jolliet, A. Bottino, P. Angelino, R. Hatzky, T.M. Tran, B.F. McMillan, O. Sauter, K. Appert, Y. Idomura, L. Villard, Comp. Phys. Commun. 177 (2007) 409] in order to reduce the memory of the matrix associated with the field equation. This scheme is verified with linear and nonlinear simulations of turbulence. It is demonstrated that the straight-field-line angle is the coordinate that optimizes the Fourier solver, that both linear and nonlinear turbulent states are unaffected by the parallel filtering, and that the k∥ spectrum is independent of plasma size at fixed normalized poloidal wave number.

  17. Sci-Thur AM: YIS – 07: Optimizing dual-energy x-ray parameters using a single filter for both high and low-energy images to enhance soft-tissue imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Wesley; Sattarivand, Mike

    Objective: To optimize the dual-energy parameters of the ExacTrac stereoscopic x-ray imaging system for lung SBRT patients. Methods: Simulated spectra and a lung phantom were used to optimize the filter material, thickness, kVps, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify materials in the atomic number (Z) range [3–83] based on a metric defined to separate the high- and low-energy spectra. Both energies used the same filter due to the time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor-mimicking material was imaged with filter thicknesses in the range [0–1] mm and kVp in the range [60–140]. A cost function based on the contrast-to-noise ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured with the dual-energy technique using tin filtering. Results: Tin was the material of choice, providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm of tin and the [140, 80] kVp pair. Dual-energy images of the Rando phantom had noticeable bone elimination when compared to no filtration. Dose was lower with tin filtering compared to no filtration. Conclusions: Dual-energy soft-tissue imaging is feasible using the ExacTrac stereoscopic imaging system utilizing a single tin filter for both high and low energies and optimized acquisition parameters.

  18. Optimal Search Strategy for the Definition of a DNAPL Source

    DTIC Science & Technology

    2009-08-01

    Flow field results for the stochastic model (colored contours) and a potentiometric map created by a hydrogeologist using well water level measurements (black contours)...Source search algorithm...and C. D. Tankersley, "Forecasting piezometric head levels in the Floridian aquifer: A Kalman filtering approach", Water Resources Research, 29(11

  19. Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation

    NASA Astrophysics Data System (ADS)

    Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee

    2018-05-01

    In this article, a framework is presented for the joint optimization of the analog transmit and receive filter with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter $B$ to the rate of the analog-to-digital converter $f_s$ to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm $B \leq f_s$. To this end, at the receiver, we allow for a higher pre-filter bandwidth $B > f_s$ and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filter are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte-Carlo simulations of a likelihood-based estimator.

  20. Development of an optimal filter substrate for the identification of small microplastic particles in food by micro-Raman spectroscopy.

    PubMed

    Oßmann, Barbara E; Sarau, George; Schmitt, Sebastian W; Holtmannspötter, Heinrich; Christiansen, Silke H; Dicke, Wilhelm

    2017-06-01

    When analysing microplastics in food, due to toxicological reasons it is important to achieve clear identification of particles down to a size of at least 1 μm. One reliable, optical analytical technique allowing this is micro-Raman spectroscopy. After isolation of particles via filtration, analysis is typically performed directly on the filter surface. In order to obtain high qualitative Raman spectra, the material of the membrane filters should not show any interference in terms of background and Raman signals during spectrum acquisition. To facilitate the usage of automatic particle detection, membrane filters should also show specific optical properties. In this work, beside eight different, commercially available membrane filters, three newly designed metal-coated polycarbonate membrane filters were tested to fulfil these requirements. We found that aluminium-coated polycarbonate membrane filters had ideal characteristics as a substrate for micro-Raman spectroscopy. Its spectrum shows no or minimal interference with particle spectra, depending on the laser wavelength. Furthermore, automatic particle detection can be applied when analysing the filter surface under dark-field illumination. With this new membrane filter, analytics free of interference of microplastics down to a size of 1 μm becomes possible. Thus, an important size class of these contaminants can now be visualized and spectrally identified. Graphical abstract A newly developed aluminium coated polycarbonate membrane filter enables automatic particle detection and generation of high qualitative Raman spectra allowing identification of small microplastics.

  1. Total skin electron therapy in the lying‐on‐the‐floor position using a customized flattening filter to accommodate frail patients

    PubMed Central

    Antolak, John A.

    2013-01-01

    A total skin electron (TSE) floor technique is presented for treating patients who are unable to safely stand for extended durations. A customized flattening filter is used to eliminate the need for field junctioning, improve field uniformity, and reduce setup time. The flattening filter is constructed from copper and polycarbonate, fits into the linac's accessory slot, and is optimized to extend the useful height and width of the beam such that no field junctions are needed during treatment. A TSE floor with flattening filter (TSE FF) treatment course consisted of six patient positions: three supine and three prone. For all treatment fields, electron beam energy was 6 MeV; collimator settings were an x of 30 cm, y of 40 cm, and θcoll of 0°; and a 0.4 cm thick polycarbonate spoiler was positioned in front of the patient. Percent depth dose (PDD) and photon contamination for the TSE FF technique were compared with our standard technique, which is similar to the Stanford technique. Beam profiles were measured using radiochromic film, and dose uniformity was verified using an anthropomorphic radiological phantom. The TSE FF technique met field uniformity requirements specified by the American Association of Physicists in Medicine Task Group 30. TSE FF R80 ranges from 4 to 4.8 mm. TSE FF photon contamination was ~ 3%. Anthropomorphic radiological phantom verification demonstrated that dose to the entire skin surface was expected to be within about ±15% of the prescription dose, except for the perineum, scalp vertex, top of shoulder, and soles of the feet. The TSE floor technique presented herein eliminates field junctioning, is suitable for patients who cannot safely stand during treatment, and provides comparable quality and uniformity to the Stanford technique. PACS number: 87 PMID:24036864

  2. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509

  3. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
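The adaptive fading step in this methodology inflates the predicted covariance when the measurement innovation becomes statistically inconsistent, as judged by its Mahalanobis distance. A minimal sketch of one common way to compute such a fading factor is shown below; the chi-square threshold and the exact inflation rule are assumptions and may differ from the authors' filter.

```python
import numpy as np

def fading_factor(innovation, S, chi2_threshold):
    """Compute a fading factor lambda >= 1 from the innovation and its
    predicted covariance S. If the squared Mahalanobis distance exceeds
    the chi-square threshold, the prior covariance is inflated."""
    d2 = float(innovation.T @ np.linalg.solve(S, innovation))  # squared Mahalanobis distance
    return max(1.0, d2 / chi2_threshold)

# Usage sketch inside a (U)KF update, with P_pred, y, S computed earlier:
# lam = fading_factor(y, S, chi2_threshold=7.81)  # e.g. 95% quantile for a 3-dof measurement
# P_pred = lam * P_pred                           # fade old information before the update
```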

  4. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believed that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
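Since the abstract describes speckle pattern generation as a filtered Poisson process (randomly positioned speckles, each rendered by a smoothing kernel), the following toy generator illustrates that construction; the Gaussian speckle shape, radius, and density used here are placeholder choices, not the optimized values derived in the paper.

```python
import numpy as np

def make_speckle_pattern(size=256, n_speckles=800, radius=3.0, seed=0):
    """Synthetic speckle pattern as a filtered Poisson process:
    random speckle centers, each rendered as a Gaussian blob of given radius."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    centers = rng.uniform(0, size, size=(n_speckles, 2))
    for cy, cx in centers:
        # add one Gaussian speckle centered at (cy, cx)
        img += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * radius ** 2))
    return img / img.max()
```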

  5. Robust Controller for Turbulent and Convective Boundary Layers

    DTIC Science & Technology

    2006-08-01

    [Record excerpt contains search-snippet fragments only: the Kalman filter and optimal regulator equations corresponding to the state-space equations (2.20) involve separate steady-state algebraic Riccati equations; the Kalman filter is used as a state observer rather than as an estimator since no noises are modeled; for robustness, in the design the Kalman filter input matrix G has been set equal to the control input (fragment truncated).]

  6. Adaptive torque estimation of robot joint with harmonic drive transmission

    NASA Astrophysics Data System (ADS)

    Shi, Zhiguo; Li, Yuankai; Liu, Guangjun

    2017-11-01

    Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.

  7. Susceptibility-weighted imaging using inter-echo-variance channel combination for improved contrast at 7 tesla.

    PubMed

    Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria

    2017-04-01

    To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines-channel combination followed by (i) Homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed and processing time of pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. Optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. 2 J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Estimation Filter for Alignment of the Spitzer Space Telescope

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2007-01-01

    A document presents a summary of an onboard estimation algorithm now being used to calibrate the alignment of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility). The algorithm, denoted the S2P calibration filter, recursively generates estimates of the alignment angles between a telescope reference frame and a star-tracker reference frame. At several discrete times during the day, the filter accepts, as input, attitude estimates from the star tracker and observations taken by the Pointing Control Reference Sensor (a sensor in the field of view of the telescope). The output of the filter is a calibrated quaternion that represents the best current mean-square estimate of the alignment angles between the telescope and the star tracker. The S2P calibration filter incorporates a Kalman filter that tracks six states - two for each of three orthogonal coordinate axes. Although, in principle, one state per axis is sufficient, the use of two states per axis makes it possible to model both short- and long-term behaviors. Specifically, the filter properly models transient learning, characteristic times and bounds of thermomechanical drift, and long-term steady-state statistics, whether calibration measurements are taken frequently or infrequently. These properties ensure that the S2P filter performance is optimal over a broad range of flight conditions, and can be confidently run autonomously over several years of in-flight operation without human intervention.

  9. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
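MEE training replaces the mean-squared error with an entropy measure of the error sample, usually estimated with a kernel (Parzen) density. A short sketch of Renyi's quadratic information potential of the errors, which MEE adaptation maximizes (equivalently, minimizing the quadratic Renyi entropy), is given below under a Gaussian-kernel assumption; the kernel width sigma is a free parameter.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Renyi's quadratic information potential of an error sample,
    estimated with a Gaussian kernel of width sqrt(2)*sigma."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]                    # pairwise error differences
    kernel = np.exp(-diff ** 2 / (4 * sigma ** 2))    # Gaussian kernel values
    return kernel.mean() / np.sqrt(4 * np.pi * sigma ** 2)
```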

  10. Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.

    PubMed

    Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon

    2017-02-24

    The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error of 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers.
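For reference, the classic single-frequency Hatch recursion that the paper builds on smooths the noisy pseudo-range with carrier-phase deltas. A minimal sketch is shown below; it omits the divergence-free SBAS ionosphere correction and the optimal smoothing constant that are the paper's contributions, and the window length is a placeholder.

```python
def hatch_filter(pseudorange, carrier_phase, window=100):
    """Classic single-frequency Hatch filter: smooth code with carrier deltas.
    pseudorange and carrier_phase are per-epoch measurements in meters."""
    smoothed = [pseudorange[0]]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)  # effective smoothing constant (capped window)
        predicted = smoothed[-1] + (carrier_phase[k] - carrier_phase[k - 1])
        smoothed.append(pseudorange[k] / n + predicted * (n - 1) / n)
    return smoothed
```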

  11. Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement

    PubMed Central

    Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon

    2017-01-01

    The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error of 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers. PMID:28245584

  12. Optical Correlation of Images With Signal-Dependent Noise Using Constrained-Modulation Filter Devices

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.

  13. Fine-tuning to minimize emittances of J-PARC RF-driven H{sup −} ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueno, A., E-mail: akira.ueno@j-parc.jp; Ohkoshi, K.; Ikegami, K.

    2016-02-15

    The Japan Proton Accelerator Research Complex (J-PARC) cesiated RF-driven H{sup −} ion source has been successfully operated for about one year. With a beam of world-leading brightness, the J-PARC design beam power of 1 MW was successfully demonstrated. In order to minimize the transverse emittances, the rod-filter-field (RFF) was optimized by changing the triple-gap-lengths of each of the five pairing rod-filter-magnet pieces. The emittance degradation appears to be caused more by impurity gases than by the RFF. A smaller beam-hole diameter of the extraction electrode produced greater-than-expected improvements in not only the emittances but also the peak beam intensity.

  14. New estimation architecture for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Covino, Joseph M.; Griffiths, Barry E.

    1991-07-01

    This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.

  15. PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads

    PubMed Central

    Cheng, Wen-Chang

    2012-01-01

    In this paper we propose a robust lane detection and tracking method by combining particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the foraging behavior of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete lane detection and tracking more accurately and effectively than existing methods. PMID:23235453

  16. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, therefore providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
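The scheduling idea described above can be pictured as two small steps: estimate motion intensity from the spectrum of a sensor window, then switch the noise variances fed to the Kalman filter. The sketch below illustrates this; the cutoff frequency, intensity thresholds, and variance values are hypothetical, not the tuned values from the study.

```python
import numpy as np

def motion_intensity(accel_window, fs):
    """Fraction of signal power above a low-frequency cutoff (hypothetical 0.5 Hz),
    used as a crude proxy for motion intensity."""
    spectrum = np.abs(np.fft.rfft(accel_window - accel_window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(accel_window), d=1.0 / fs)
    return spectrum[freqs > 0.5].sum() / (spectrum.sum() + 1e-12)

def schedule_noise(intensity):
    """Map intensity to (process, observation) noise variances for the filter.
    The numbers are placeholders, not values from the paper."""
    if intensity < 0.2:      # quasi-static
        return 1e-5, 1e-2
    elif intensity < 0.6:    # moderate motion
        return 1e-4, 1e-1
    return 1e-3, 1.0         # intense motion
```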

  17. Microfabrication of three-dimensional filters for liposome extrusion

    NASA Astrophysics Data System (ADS)

    Baldacchini, Tommaso; Nuñez, Vicente; LaFratta, Christopher N.; Grech, Joseph S.; Vullev, Valentine I.; Zadoyan, Ruben

    2015-03-01

    Liposomes play a relevant role in the biomedical field of drug delivery. The ability of these lipid vesicles to encapsulate and transport a variety of bioactive molecules has fostered their use in several therapeutic applications, from cancer treatments to the administration of drugs with antiviral activities. Size and uniformity are key parameters to take into consideration when preparing liposomes; these factors greatly influence their effectiveness in both in vitro and in vivo experiments. A popular technique employed to achieve the optimal liposome dimension (around 100 nm in diameter) and uniform size distribution is repetitive extrusion through a polycarbonate filter. We investigated two femtosecond laser direct writing techniques for the fabrication of three-dimensional filters within a microfluidics chip for liposomes extrusion. The miniaturization of the extrusion process in a microfluidic system is the first step toward a complete solution for lab-on-a-chip preparation of liposomes from vesicles self-assembly to optical characterization.

  18. Measurement of subcellular texture by optical Gabor-like filtering with a digital micromirror device

    PubMed Central

    Pasternack, Robert M.; Qian, Zhen; Zheng, Jing-Yi; Metaxas, Dimitris N.; White, Eileen; Boustany, Nada N.

    2010-01-01

    We demonstrate an optical Fourier processing method to quantify object texture arising from subcellular feature orientation within unstained living cells. Using a digital micromirror device as a Fourier spatial filter, we measured cellular responses to two-dimensional optical Gabor-like filters optimized to sense orientation of nonspherical particles, such as mitochondria, with a width around 0.45 μm. Our method showed significantly rounder structures within apoptosis-defective cells lacking the proapoptotic mitochondrial effectors Bax and Bak, when compared with Bax/Bak expressing cells functional for apoptosis, consistent with reported differences in mitochondrial shape in these cells. By decoupling spatial frequency resolution from image resolution, this method enables rapid analysis of nonspherical submicrometer scatterers in an under-sampled large field of view and yields spatially localized morphometric parameters that improve the quantitative assessment of biological function. PMID:18830354

  19. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
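The core quantity being optimized is a ratio of quadratic forms in the spatial filter. As a simplified illustration (not the paper's unified feature-space objective), the filter maximizing a Fisher-like ratio of two class covariance matrices can be obtained from a generalized eigenproblem:

```python
import numpy as np

def max_ratio_spatial_filter(C1, C2):
    """Return the spatial filter w maximizing (w' C1 w) / (w' C2 w),
    i.e. the leading generalized eigenvector of the pair (C1, C2).
    C1, C2 are class-wise EEG covariance matrices (assumed SPD)."""
    L = np.linalg.cholesky(C2)           # C2 = L L^T
    Linv = np.linalg.inv(L)
    M = Linv @ C1 @ Linv.T               # whitened problem
    _, eigvecs = np.linalg.eigh(M)       # ascending eigenvalues
    w = Linv.T @ eigvecs[:, -1]          # back-transform the leading eigenvector
    return w / np.linalg.norm(w)
```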

  20. Regularized Filters for L1-Norm-Based Common Spatial Patterns.

    PubMed

    Wang, Haixian; Li, Xiaomeng

    2016-02-01

    The l1 -norm-based common spatial patterns (CSP-L1) approach is a recently developed technique for optimizing spatial filters in the field of electroencephalogram (EEG)-based brain computer interfaces. The l1 -norm-based expression of dispersion in CSP-L1 alleviates the negative impact of outliers. In this paper, we further improve the robustness of CSP-L1 by taking into account noise which does not necessarily have as large a deviation as with outliers. The noise modelling is formulated by using the waveform length of the EEG time course. With the noise modelling, we then regularize the objective function of CSP-L1, in which the l1-norm is used in two folds: one is the dispersion and the other is the waveform length. An iterative algorithm is designed to resolve the optimization problem of the regularized objective function. A toy illustration and the experiments of classification on real EEG data sets show the effectiveness of the proposed method.

  1. GaN nanostructure design for optimal dislocation filtering

    NASA Astrophysics Data System (ADS)

    Liang, Zhiwen; Colby, Robert; Wildeson, Isaac H.; Ewoldt, David A.; Sands, Timothy D.; Stach, Eric A.; García, R. Edwin

    2010-10-01

    The effect of image forces in GaN pyramidal nanorod structures is investigated to develop dislocation-free light emitting diodes (LEDs). A model based on the eigenstrain method and nonlocal stress is developed to demonstrate that the pyramidal nanorod efficiently ejects dislocations out of the structure. Two possible regimes of filtering behavior are found: (1) cap-dominated and (2) base-dominated. The cap-dominated regime is shown to be the more effective filtering mechanism. Optimal ranges of fabrication parameters that favor a dislocation-free LED are predicted and corroborated by resorting to available experimental evidence. The filtering probability is summarized as a function of practical processing parameters: the nanorod radius and height. The results suggest an optimal nanorod geometry with a radius of ˜50b (26 nm) and a height of ˜125b (65 nm), in which b is the magnitude of the Burgers vector for the GaN system studied. A filtering probability of greater than 95% is predicted for the optimal geometry.

  2. Weighted finite impulse response filter for chromatic dispersion equalization in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui

    2018-01-01

    Time-domain CD equalization using a finite impulse response (FIR) filter is now a common approach in coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single-channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key in weighted FIR filters is the selection and optimization of the weighting functions. In order to present the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The optimization of the square-root raised cosine FIR and the Gaussian FIR is performed in terms of the bit error rate (BER) of QPSK and 16QAM coherent detection signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, symbol rate, and length of the transmission fiber. With the optimized weighted FIRs, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR filter and the GS-FIR filter, the principle of weighted FIRs can also be extended to other symmetric functions such as the super-Gaussian function, the hyperbolic secant function, etc.
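As background to the weighting idea, the unweighted CD-compensating FIR taps follow from the truncated impulse response of the CD transfer function, and a weighting window is then applied to their (otherwise constant) moduli. A sketch under those assumptions is below; the tap formula follows the commonly used truncated-impulse-response design rather than the authors' exact derivation, and the Gaussian window shape parameter is a placeholder to be optimized against BER.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def cd_fir_taps(D_ps_nm_km, length_km, wavelength_nm, symbol_rate_baud, sps=2):
    """Truncated-impulse-response FIR taps for CD compensation
    (constant-modulus before any weighting is applied)."""
    D = D_ps_nm_km * 1e-6                  # dispersion in s/m^2
    lam = wavelength_nm * 1e-9             # wavelength in m
    z = length_km * 1e3                    # fiber length in m
    T = 1.0 / (symbol_rate_baud * sps)     # sample period in s
    K = int(np.floor(abs(D) * lam**2 * z / (2 * C * T**2)))
    k = np.arange(-K, K + 1)
    phase = -np.pi * C * T**2 * k**2 / (D * lam**2 * z)
    taps = np.sqrt(1j * C * T**2 / (D * lam**2 * z)) * np.exp(1j * phase)
    return k, taps

def gaussian_weighted(taps, k, alpha=0.5):
    """Apply a Gaussian weighting window to the taps; alpha is a
    hypothetical shape parameter to be tuned against BER."""
    window = np.exp(-0.5 * (alpha * k / k.max()) ** 2)
    return taps * window
```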

  3. Design of order statistics filters using feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu. S.; Bochkarev, V. V.

    2016-08-01

    In recent years significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics. The widely used median filter is the best known order statistic filter. A generalized form of these filters can be derived from Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used for selecting the initial weights of the neural network. The adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with asymmetric distribution functions. Different examples demonstrate the properties and performance of the presented approach.
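An order statistic (L-) filter sorts the samples inside a sliding window and outputs a weighted sum of the order statistics; the median filter is the special case with all weight on the middle sample. A minimal 1-D sketch of that definition is below; the neural-network learning of the weights described in the abstract is not reproduced.

```python
import numpy as np

def order_statistic_filter(signal, weights):
    """Apply an L-filter: sort each sliding window and combine the order
    statistics with the given weights. len(weights) is the (odd) window length."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    half = n // 2
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    out = np.empty(len(signal), dtype=float)
    for i in range(len(out)):
        window = np.sort(padded[i:i + n])   # order statistics of the window
        out[i] = window @ w
    return out

# A length-5 median filter is the special case with weight on the middle sample:
# median5 = order_statistic_filter(x, [0, 0, 1, 0, 0])
```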

  4. The optimal digital filters of sine and cosine transforms for geophysical transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo

    2018-03-01

    The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining the filter coefficients, which are computed in the sample domain via the Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm of the sine and cosine transforms, based on the digital filter algorithm of the Hankel transform and the relationship between the sine and cosine functions and the ±1/2 order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, with the parameter selected optimally, it is found that an optimal sampling interval also exists that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm of sine and cosine transforms and promote its application.

  5. Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles

    PubMed Central

    Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631

  6. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.

  7. Experimental Measurement of Small Scale Multirotor Flows

    NASA Astrophysics Data System (ADS)

    Connors, Jacob; Weiner, Joseph; Velarde, John-Michael; Glauser, Mark

    2017-11-01

    Work is being done to create a multirotor Unmanned Air Vehicle (UAV) based anemometer system that would allow for measurement of velocity and spectra in the atmospheric boundary layer. The flow from the UAV's rotors will impact such measurements and hence must be filtered. This study focuses on measuring the fluctuations of the velocity field in the flow both above and below various UAVs to determine first, the feasibility of the creation of the filter, and second, the optimal placement of the system on the body of the UAV. These measurements are taking place in both Syracuse University's subsonic wind tunnel and Skytop Turbulence Lab's Indoor Flow Lab. Constant Temperature Anemometry is being used to measure these velocity field fluctuations across a variety of UAVs with differing characteristics such as size, number of propellers, and rotor blade type. The data from these experiments is being used to define a method to estimate the filter band required to isolate noise from wake effects, and determine ideal sensor placement based on characteristics of the vehicle's design alone. The authors would like to thank The Center for Advanced Systems and Engineering (CASE) at Syracuse University for funding and supporting this work.

  8. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.

  9. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.

  10. SU-E-I-57: Evaluation and Optimization of Effective-Dose Using Different Beam-Hardening Filters in Clinical Pediatric Shunt CT Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, K; Aldoohan, S; Collier, J

    Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Childrens Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, which had been optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom and a radiation dose CT phantom, which was used to measure CTDIvol. These results were set as reference points to study and evaluate the effects of adding different filtering materials (i.e., tungsten, tantalum, titanium, nickel and copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVps and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under axial as well as helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filter shapes the energy spectrum, which reduces the dose by 27%, with no noticeable change in image low-contrast detectability. Conclusion: Effective dose is strongly dependent on the CTDIvol, which in turn depends strongly on the beam-hardening filter. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction could be achieved in CT pediatric shunt scanning protocols without compromising the diagnostic value of image quality.
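The effective dose referred to above is commonly estimated from CTDIvol via the dose-length product and a region-specific conversion coefficient. The sketch below shows that standard relation as an illustration; it is an assumption that the authors used this exact conversion, and the example k value is a typical adult-head figure (pediatric coefficients are larger).

```python
def effective_dose_mSv(ctdi_vol_mGy, scan_length_cm, k_mSv_per_mGy_cm=0.0021):
    """Effective dose estimate: E = k * DLP, with DLP = CTDIvol * scan length.
    k is a region- and age-specific conversion coefficient; 0.0021 mSv/(mGy*cm)
    is a commonly quoted adult-head value used here only as a placeholder."""
    dlp = ctdi_vol_mGy * scan_length_cm
    return k_mSv_per_mGy_cm * dlp
```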

  11. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  12. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.

  13. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.

    PubMed

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M

    2018-04-12

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.

  14. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes

    PubMed Central

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.

    2018-01-01

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114

  15. Optimizing of a high-order digital filter using PSO algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Fuchun

    2018-04-01

    A self-adaptive high-order digital filter, which offers the opportunity to simplify the process of tuning parameters and further improve the noise performance, is presented in this paper. The parameters of traditional digital filters are mainly tuned by complex calculation, whereas this paper presents a 5th-order digital filter that achieves outstanding performance with parameters optimized by a swarm intelligence algorithm. For the proposed 5th-order digital filter, simulation results give SNR > 122 dB and a noise floor below -170 dB in the frequency range of 5-150 Hz. In further simulations, the robustness of the proposed 5th-order digital filter is analyzed.
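Swarm-intelligence tuning of the filter parameters can be illustrated with a textbook global-best PSO loop: each particle is a candidate parameter vector moved toward its personal best and the swarm best. The sketch below is generic; the cost function, bounds, and hyperparameters are placeholders rather than the authors' 5th-order filter model.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0), seed=0):
    """Minimal global-best PSO: returns the best parameter vector found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        if pbest_cost.min() < cost(gbest):
            gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# Example: tune 6 hypothetical filter coefficients against a user-supplied
# noise-performance cost, e.g. best = pso(my_filter_cost, dim=6)
```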

  16. CFD modeling of catheter-based Chemofilter device for filtering chemotherapy drugs from venous flow

    NASA Astrophysics Data System (ADS)

    Maani, Nazanin; Yee, Daryl; Nosonovsky, Michael; Greer, Julia; Hetts, Steven; Rayz, Vitaliy

    2017-11-01

    Purpose: Intra-arterial chemotherapy, a procedure where drugs are injected into arteries supplying a tumor, may cause systemic toxicity. The Chemofilter device, deployed in a vein downstream of the tumor, can chemically filter the excess drug from the circulation. In our study, CFD modeling of blood flow through the Chemofilter is used to optimize its hemodynamic performance. Methods: The Chemofilter consists of a porous membrane attached to a stent-like frame of the RX Accunet distal protection filter used for capturing blood clots. The membrane is formed by a lattice of symmetric micro-cells. This design provides a large surface area for drug binding and allows blood cells to pass through the lattice. A two-scale modeling approach is used, where the flow through individual micro-cells is simulated to determine the lattice permeability and then the entire device is modeled as a porous membrane. Results: The simulations detected regions of flow stagnation and recirculation caused by the membrane and its supporting frame. The effect of the membrane's leading angle on the velocity and pressure fields was determined. Optimizing the device will improve the efficacy of drug absorption while reducing the risk of blood clotting. NIH NCI R01CA194533.

  17. Filter-feeding, near-field flows, and the morphologies of colonial choanoflagellates

    NASA Astrophysics Data System (ADS)

    Kirkegaard, Julius B.; Goldstein, Raymond E.

    2016-11-01

    Efficient uptake of prey and nutrients from the environment is an important component in the fitness of all microorganisms, and its dependence on size may reveal clues to the origins of evolutionary transitions to multicellularity. Because potential benefits in uptake rates must be viewed in the context of other costs and benefits of size, such as varying predation rates and the increased metabolic costs associated with larger and more complex body plans, the uptake rate itself is not necessarily that which is optimized by evolution. Uptake rates can be strongly dependent on local organism geometry and its swimming speed, providing selective pressure for particular arrangements. Here we examine these issues for choanoflagellates, filter-feeding microorganisms that are the closest relatives of the animals. We explore the different morphological variations of the choanoflagellate Salpingoeca rosetta, which can exist as a swimming cell, as a sessile thecate cell, and as colonies of cells in various shapes. In the absence of other requirements and in a homogeneously nutritious environment, we find that the optimal strategy to maximize filter-feeding by the collar of microvilli is to swim fast, which favors swimming unicells. In large external flows, the sessile thecate cell becomes advantageous. Effects of prey diffusion are discussed and also found to be to the advantage of the swimming unicell.

  18. FAF-Drugs3: a web server for compound property calculation and chemical library design

    PubMed Central

    Lagorce, David; Sperandio, Olivier; Baell, Jonathan B.; Miteva, Maria A.; Villoutreix, Bruno O.

    2015-01-01

    Drug attrition late in preclinical or clinical development is a serious economic problem in the field of drug discovery. These problems can be linked, in part, to the quality of the compound collections used during the hit generation stage and to the selection of compounds undergoing optimization. Here, we present FAF-Drugs3, a web server that can be used for drug discovery and chemical biology projects to help in preparing compound libraries and to assist decision-making during the hit selection/lead optimization phase. Since it was first described in 2006, FAF-Drugs has been significantly modified. The tool now applies an enhanced structure curation procedure, can filter or analyze molecules with user-defined or eight predefined physicochemical filters as well as with several simple ADMET (absorption, distribution, metabolism, excretion and toxicity) rules. In addition, compounds can be filtered using an updated list of 154 hand-curated structural alerts while Pan Assay Interference compounds (PAINS) and other, generally unwanted groups are also investigated. FAF-Drugs3 offers access to user-friendly html result pages and the possibility to download all computed data. The server requires as input an SDF file of the compounds; it is open to all users and can be accessed without registration at http://fafdrugs3.mti.univ-paris-diderot.fr. PMID:25883137

  19. Factors affecting laboratory acclimatization of field collected Lymnaea (Bullastra) cumingiana Pfeiffer (Pulmonata: Lymnaeidae).

    PubMed

    Monzon, R B; Kitikoon, V

    1991-12-01

    Lymnaea (Bullastra) cumingiana, the newly discovered natural second intermediate host of Echinostoma malayanum in the Philippines, is a sensitive and delicate lymnaeid species which requires certain conditions for successful transport from the field and cultivation in the laboratory. Field collected specimens were found to be best transported in styrofoam containers lined with wet filter paper or containing natural substrate and vegetation instead of Sphagnum moss. The method is convenient and produces a survival rate of 73-86%. However, transport time is crucial and mortality increases the longer the snails are in transit. For optimal results in laboratory acclimatization, snails are best raised in wide-mouthed containers providing a large exposed water surface area. Adequate aeration is advised but vigorous bubbling of the water should be avoided. Water should be replaced with filtered dechlorinated water every 2 to 3 days, depending on water quality. A combination of fresh lettuce leaves and a few flakes of fish food was found to be ideal. Lastly, population density was the most significant factor affecting survival and so overcrowding should be avoided.

  20. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results are barely achievable by manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluating an algorithm's adaptive characteristics ("adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune in order to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  1. Novel Cleanup Agents Designed Exclusively for Oil Field Membrane Filtration Systems Low Cost Field Demonstrations of Cleanup Agents in Controlled Experimental Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Burnett; Harold Vance

    2007-08-31

    The goal of our project is to develop innovative processes and novel cleaning agents for water treatment facilities designed to remove fouling materials and restore micro-filter and reverse osmosis (RO) membrane performance. This project is part of Texas A&M University's comprehensive study of the treatment and reuse of oilfield brine for beneficial purposes. Before waste water can be used for any beneficial purpose, it must be processed to remove contaminants, including oily wastes such as residual petroleum hydrocarbons. An effective way of removing petroleum from brines is the use of membrane filters to separate oily waste from the brine. Texas A&M and its partners have developed highly efficient membrane treatment and RO desalination for waste water including oil field produced water. We have also developed novel and new cleaning agents for membrane filters utilizing environmentally friendly materials so that the water from the treatment process will meet U.S. EPA drinking water standards. Prototype micellar cleaning agents perform better and use less clean water than alternate systems. While not yet optimized, the new system restores essentially complete membrane flux and separation efficiency after cleaning. Significantly, the amount of desalinated water that is required to clean the membranes is reduced by more than 75%.

  2. Methodology to estimate the relative pressure field from noisy experimental velocity data

    NASA Astrophysics Data System (ADS)

    Bolin, C. D.; Raguin, L. G.

    2008-11-01

    The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain into a regular domain. To lessen the propagation of the noise inherent to the velocity measurements, three filters - a median filter and two physics-based filters - are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter for the estimation of the relative pressure field for realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies in a least-squares sense three constraints simultaneously: consistency between measured and filtered velocity fields, divergence-free and additional smoothness conditions. This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared to without noise filtering, in conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 × 20 discretized flow domain (25 × 25 computational domain).

  3. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts acoustic spectra to pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation is limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, the model is modified within the filter design process to compensate for dispersive error in the finite difference simulation. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.

  4. Optimized Orthovoltage Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fagerstrom, Jessica M.

    Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4π isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field size and energy dependent, and values were found for prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a Genetic Algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones were able to achieve both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom-built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner and a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.

  5. HARDI denoising using nonlocal means on S2

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant to both spatial rotations as well as to a particular sampling scheme in use. We also provide a detailed description of the proposed filtering procedure, its efficient implementation, and experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.

  6. Improved DQE by means of X-ray spectra and scintillator optimization for FFDM

    NASA Astrophysics Data System (ADS)

    Job, Isaias D.; Taie-Nobraie, Nima; Colbeth, Richard E.; Mollov, Ivan; Gray, Keith D.; Webb, Chris; Pavkovich, John M.; Zoghi, Fred; Tognina, Carlo A.; Roos, Pieter G.

    2012-03-01

    The focus of this work was to improve the DQE performance of a full-field digital mammography (FFDM) system by means of selecting an optimal X-ray tube anode-filter combination in conjunction with an optimal scintillator configuration. The flat panel detector in this work is a Varian PaxScan 3024M. The detector technology is comprised of a 2816 row × 3584 column amorphous silicon (a-Si) photodiode array with a pixel pitch of 83μm. The scintillator is cesium iodide, deposited directly onto the photodiode array, and is available with configurable optical and x-ray properties. Two X-ray beam spectra were generated with the anode/filter combinations, Molybdenum/Molybdenum (Mo/Mo) and Tungsten/Aluminum (W/Al), to evaluate the imaging performance of two types of scintillators, high resolution (HR) type and high light output (HL) type. The results for the HR scintillator with W/Al anode-filter (HR-W/Al) yielded a DQE(0) of 67%, while HR-Mo/Mo was lower with a DQE(0) of 50%. In addition, the DQE(0) of the HR-W/Al configuration was comparable to the DQE(0) of the HL-Mo/Mo configuration. The significance of this result is that the HR type scintillator yields about twice the light output with the W/Al spectrum, at about half the dose, as compared to the Mo/Mo spectrum. The light output or sensitivity was measured in analog-to-digital converter units (ADU) per dose. The sensitivities (ADU/μGy) were 8.6, 16.8 and 25.4 for HR-Mo/Mo, HR-W/Al, HL-Mo/Mo, respectively. The Nyquist frequency for the 83 μm pixel is 6 lp/mm. The MTF at 5 lp/mm for HR-Mo/Mo and HR-W/Al were equivalent at 37%, while the HL-Mo/Mo MTF was 24%. According to the DQE metric, the more favorable anode-filter combination was W/Al with the HR scintillator. Future testing will evaluate the HL-W/Al configuration, as well as other x-ray filter materials and other scintillator optimizations. While higher DQE values were achieved, the more general conclusion is that the imaging performance can be tuned as required by the application by modifying optical and x-ray properties of the scintillator to match the spectral output of the chosen anode-filter combination.

  7. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with its modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
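
    The abstract does not give the exact form of the additional adjusting factor, so the sketch below only illustrates, under that caveat, how a PSO velocity update over a vector of all-pass filter coefficients can be extended with an extra term; the factor c3 and its form are hypothetical.

```python
import numpy as np

def mpso_velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, c3=0.5, rng=None):
    """One velocity/position update of a PSO variant with an extra adjusting term.

    v, x   : current velocity and position (vector of filter coefficients)
    pbest  : particle's best position so far
    gbest  : swarm's best position so far
    c3     : hypothetical extra adjusting factor (the paper's exact form is not
             given in the abstract)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = rng.random(3)
    extra = c3 * r3 * (pbest - gbest)   # assumed form of the additional term
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) + extra
    return v_new, x + v_new
```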

  8. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus, many invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, as a numerical approach, the method needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  9. A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.

    PubMed

    Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge

    2017-11-11

    Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.

  10. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low-pass, high-pass, band-pass and band-stop filters, trying to meet the respective ideal frequency response characteristics. CSO is derived from observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, a velocity for each dimension, a fitness value which represents how well the cat fits the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats; CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven superior to that obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
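
    CSO itself involves seeking/tracing-mode bookkeeping that does not fit in a short sketch, but the fitness function such swarm optimizers minimize is simple to illustrate. Below is a minimal Python sketch, assuming an ideal brick-wall low-pass target and a squared-magnitude error norm; the paper's exact error measure is not stated in the abstract.

```python
import numpy as np
from scipy.signal import freqz

def lowpass_fitness(h, cutoff=0.25, n_grid=512):
    """Error between a candidate FIR filter's magnitude response and an ideal
    low-pass response (normalized cutoff, Nyquist = 1). This is the kind of
    objective a CSO/PSO/GA optimizer would minimize over the coefficient vector h."""
    w, H = freqz(h, worN=n_grid)          # w in rad/sample, 0..pi
    wn = w / np.pi
    ideal = (wn <= cutoff).astype(float)  # brick-wall target
    return np.sum((np.abs(H) - ideal) ** 2)

# Example: score a simple 21-tap windowed-sinc candidate
n = np.arange(21) - 10
h0 = 0.25 * np.sinc(0.25 * n) * np.hamming(21)
print(lowpass_fitness(h0))
```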

  11. Cooperative Localization on Computationally Constrained Devices

    DTIC Science & Technology

    2012-03-22

    Wi-Fi hotspot capability. The HTC phone is equipped with the Qualcomm MSM7200A chipset, which includes support for 802.11 b/g, a digital compass and GPS. [Chipset specifications table: Wi-Fi - Qualcomm MSM7200A, 802.11 b/g; Bluetooth - Qualcomm MSM7200A, version 2.0 + EDR; accelerometer - Bosch BMA 150, 25-1500 Hz, with magnetic field compensation; GPS - Qualcomm MSM7200A, with enhanced filtering software to optimize accuracy and gpsOneXTRA for enhanced standalone operation.]

  12. Assessment of intermittently loaded woodchip and sand filters to treat dairy soiled water.

    PubMed

    Murnane, J G; Brennan, R B; Healy, M G; Fenton, O

    2016-10-15

    Land application of dairy soiled water (DSW) is expensive relative to its nutrient replacement value. The use of aerobic filters is an effective alternative method of treatment and potentially allows the final effluent to be reused on the farm. Knowledge gaps exist concerning the optimal design and operation of filters for the treatment of DSW. To address this, 18 laboratory-scale filters, with depths of either 0.6 m or 1 m, were intermittently loaded with DSW over periods of up to 220 days to evaluate the impacts of depth (0.6 m versus 1 m), organic loading rates (OLRs) (50 versus 155 g COD m(-2) d(-1)), and media type (woodchip versus sand) on organic, nutrient and suspended solids (SS) removals. The study found that media depth was important in contaminant removal in woodchip filters. Reductions of 78% chemical oxygen demand (COD), 95% SS, 85% total nitrogen (TN), 82% ammonium-nitrogen (NH4N), 50% total phosphorus (TP), and 54% dissolved reactive phosphorus (DRP) were measured in 1 m deep woodchip filters, which was greater than the reductions in 0.6 m deep woodchip filters. Woodchip filters also performed optimally when loaded at a high OLR (155 g COD m(-2) d(-1)), although the removal mechanism was primarily physical (i.e. straining) as opposed to biological. When operated at the same OLR and when of the same depth, the sand filters had better COD removals (96%) than woodchip (74%), but there was no significant difference between them in the removal of SS and NH4N. However, the likelihood of clogging makes sand filters less desirable than woodchip filters. Using the optimal designs of both configurations, the filter area required per cow for a woodchip filter is less than a quarter of that required for a sand filter. Therefore, this study found that woodchip filters are more economically and environmentally effective in the treatment of DSW than sand filters, and optimal performance may be achieved using woodchip filters with a depth of at least 1 m, operated at an OLR of 155 g COD m(-2) d(-1). Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Decision-theoretic saliency: computational principles, biological plausibility, and implications for neurophysiology and psychophysics.

    PubMed

    Gao, Dashan; Vasconcelos, Nuno

    2009-01-01

    A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense, and the optimal saliency detector is derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
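
    As a rough illustration of the cascade identified here (linear filtering, divisive normalization, rectification, spatial pooling), the sketch below implements that sequence in Python; the kernels, normalization pool, and pooling window are illustrative assumptions rather than the paper's derived parameters.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def v1_like_saliency(image, kernels, eps=1e-6, pool_size=7):
    """Cascade of linear filtering, divisive normalization, half-wave
    rectification and spatial pooling (the 'standard V1 architecture')."""
    responses = np.stack([convolve(image, k, mode='reflect') for k in kernels])
    norm = np.sqrt(np.sum(responses ** 2, axis=0)) + eps   # divisive normalization pool
    normalized = responses / norm
    rectified = np.maximum(normalized, 0.0)                # half-wave rectification
    pooled = np.stack([uniform_filter(r, size=pool_size) for r in rectified])
    return pooled.max(axis=0)                              # per-pixel saliency map

# Example: two orientation-selective derivative kernels (illustrative choice)
kx = np.array([[-1.0, 0.0, 1.0]])
ky = kx.T
saliency = v1_like_saliency(np.random.rand(64, 64), [kx, ky])
```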

  14. Performance evaluation of an asynchronous multisensor track fusion filter

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.; Gray, John E.; McCabe, D. H.

    2003-08-01

    Recently, the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delay between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.

  15. Long-term effects on symptoms by reducing electric fields from visual display units.

    PubMed

    Oftedal, G; Nyvang, A; Moen, B E

    1999-10-01

    The purpose of the study was to see whether the results of an earlier study [i.e., that skin symptoms were reduced by reducing electric fields from visual display units (VDU)] could be reproduced. In addition, an attempt was made to determine whether eye symptoms and symptoms from the nervous system could be reduced by reducing VDU electric fields. The study was designed as a controlled double-blind intervention. The electric fields were reduced by using electric-conducting screen filters. Forty-two persons completed the study while working at their ordinary job, first 1 week with no filter, then 3 months with an inactive filter and then 3 months with an active filter (or in reverse order). The inactive filters were identical to the active ones, except that their ground cables were replaced by empty plastic insulation. The inactive filters did not reduce the fields from the VDU. The fields were significantly lower with active filters than with inactive filters. Most of the symptoms were statistically significantly less pronounced in the periods with the filters when compared with the period with no filter. This finding can be explained by visual effects and psychological effects. No statistically significant difference in symptom severity was observed between the period with an inactive filter and the one with an active filter. The study does not support the hypothesis that skin, eye, or nervous system symptoms can be reduced by reducing VDU electric fields.

  16. Event-triggered resilient filtering with stochastic uncertainties and successive packet dropouts via variance-constrained approach

    NASA Astrophysics Data System (ADS)

    Jia, Chaoqing; Hu, Jun; Chen, Dongyan; Liu, Yurong; Alsaadi, Fuad E.

    2018-07-01

    In this paper, we discuss the event-triggered resilient filtering problem for a class of time-varying systems subject to stochastic uncertainties and successive packet dropouts. The event-triggered mechanism is employed in the hope of reducing the communication burden and saving network resources. The stochastic uncertainties are introduced to describe the modelling errors, and the phenomenon of successive packet dropouts is characterized by a random variable obeying the Bernoulli distribution. The aim of the paper is to provide a resilient event-based filtering approach for the addressed time-varying systems such that, for all stochastic uncertainties, successive packet dropouts and filter gain perturbations, an optimized upper bound on the filtering error covariance is obtained by designing the filter gain. Finally, simulations are provided to demonstrate the effectiveness of the proposed robust optimal filtering strategy.
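
    The variance-constrained gain design cannot be reproduced from the abstract alone, but the communication-saving idea behind event triggering can be illustrated with a simple send-on-delta rule; the trigger form below is a generic assumption, not the paper's condition.

```python
import numpy as np

def send_on_delta(measurements, delta):
    """Transmit a measurement only when it has moved more than `delta` from the
    last transmitted value; all other samples are suppressed at the sensor."""
    sent = [measurements[0]]
    last = measurements[0]
    for y in measurements[1:]:
        if abs(y - last) > delta:
            sent.append(y)
            last = y
    return sent

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))                 # random-walk "measurements"
print(len(send_on_delta(y, delta=1.0)) / len(y))    # fraction actually transmitted
```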

  17. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674

  18. Optimal sharpening of compensated comb decimation filters: analysis and design.

    PubMed

    Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature.
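
    For orientation, the sketch below evaluates the magnitude response of a K-stage comb decimation filter and applies the classical Kaiser-Hamming sharpening polynomial 3H^2 - 2H^3 for comparison; the paper instead optimizes the sharpening polynomial and adds a three-addition compensator, which is not reproduced here.

```python
import numpy as np

def comb_response(w, M, K=1):
    """Magnitude response of a K-stage, length-M comb (CIC-type) decimation filter."""
    return np.abs(np.sin(M * w / 2.0) / (M * np.sin(w / 2.0))) ** K

w = np.linspace(1e-4, np.pi, 2048)       # avoid w = 0 to keep the ratio well defined
H = comb_response(w, M=16, K=1)
H_sharp = 3 * H**2 - 2 * H**3            # classical Kaiser-Hamming sharpening
print(H[10], H_sharp[10])                # sharpening flattens the passband droop near w ~ 0
```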

  19. Optimal filter parameters for low SNR seismograms as a function of station and event location

    NASA Astrophysics Data System (ADS)

    Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.

    1999-06-01

    Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated on the pre-event noise and signal window. The band-pass signals with high SNR are used to indicate the cutoff limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
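
    A minimal sketch of the band-selection idea follows, assuming a geometric (constant-Q) band layout, a fourth-order Butterworth band-pass per band, and an SNR threshold of 2; the paper's actual band construction and decision rule may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def optimal_band_limits(trace, fs, noise_win, signal_win, n_bands=12,
                        fmin=0.5, snr_threshold=2.0):
    """Pick band-pass corner frequencies by measuring SNR in constant-Q sub-bands:
    band-pass the trace, compare signal-window to pre-event-noise energy, and keep
    the bands whose SNR exceeds the threshold."""
    fmax = 0.45 * fs
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # constant f/df (constant-Q) layout
    kept = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        x = sosfiltfilt(sos, trace)
        noise = np.std(x[noise_win[0]:noise_win[1]])
        sig = np.std(x[signal_win[0]:signal_win[1]])
        if noise > 0 and sig / noise >= snr_threshold:
            kept.append((lo, hi))
    if not kept:
        return None
    return kept[0][0], kept[-1][1]   # cutoff limits of the optimized filter
```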

  20. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.

  1. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, since the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
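
    A small sketch of the confidence-weighted idea, assuming the normalized innovation squared (NIS) serves as the residual-agreement measure and a logistic weight blends the unconstrained and constrained estimates; the paper's specific tuning law is not given in the abstract.

```python
import numpy as np

def blended_estimate(x_unc, x_con, innovation, S):
    """Blend unconstrained and constrained Kalman estimates using a confidence
    weight derived from the normalized innovation squared (NIS).

    x_unc, x_con : unconstrained and constrained state estimates (1-D arrays)
    innovation   : measurement residual (1-D array)
    S            : innovation covariance
    """
    nis = float(innovation @ np.linalg.solve(S, innovation))
    dof = innovation.size                   # E[NIS] = dof when the filter is consistent
    w = 1.0 / (1.0 + np.exp(nis - dof))     # high weight -> trust the unconstrained filter
    return w * x_unc + (1.0 - w) * x_con
```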

  2. Adapted all-numerical correlator for face recognition applications

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.

    2013-03-01

    In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits of correlation methods (detection, localization, and identification of a target object within a scene) while exploiting the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The resulting saturation effect decreases the correlator's decision-making performance when filters contain up to nine references. Further, an optimization based on a segmented composite filter is proposed. Based on this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
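
    The following sketch shows the core of an all-numerical VanderLugt-style correlation with a phase-only filter, implemented with FFTs; the 8-bit reference coding and the segmented composite filters studied in the paper are not reproduced.

```python
import numpy as np

def phase_only_correlation(scene, reference):
    """FFT-based correlation of a scene with a phase-only filter (POF) built from
    the reference image. A sharp correlation peak indicates detection and
    localization of the reference within the scene."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)   # zero-pad reference to scene size
    pof = np.conj(R) / (np.abs(R) + 1e-12)      # keep only the phase of the reference
    corr = np.fft.ifft2(S * pof)
    return np.fft.fftshift(np.abs(corr))
```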

  3. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  4. Optimization of the cleaning process on a pilot filtration setup for waste water treatment accompanied by flow visualization

    NASA Astrophysics Data System (ADS)

    Bílek, Petr; Hrůza, Jakub

    2018-06-01

    This paper deals with optimization of the cleaning process on a liquid flat-sheet filter, accompanied by visualization of the inlet side of the filter. The cleaning process has a crucial impact on the hydrodynamic properties of flat-sheet filters. Cleaning methods prevent particles from depositing on the filter surface and forming a filtration cake. Visualization significantly helps in optimizing the cleaning methods because it provides a new overall view of the filtration process over time. The optical method described in the article makes it possible to observe flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process. Visualization is a strong tool for investigating the processes on filters in detail, and it is also possible to determine the concentration of particles after image analysis. The impact of air flow rate, inverse pressure drop and duration on the cleaning mechanism is investigated in the article. Images of the cleaning process are compared to the hydrodynamic data. The tests are carried out on a pilot filtration setup for waste water treatment.

  5. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.

  6. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    Economic cost and filter efficiency are taken as the targets for optimizing the parameters of the passive filter. Furthermore, a method combining a pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted in this paper. In the early stages, the pseudo-parallel genetic algorithm is introduced to increase the population diversity, and the adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved so that it adapts to the population diversity. Simulation results show that the filter designed by the proposed method has a better filtering effect at lower economic cost and can be used in engineering.

  7. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
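
    A compact sketch of the underlying idea, assuming the information content of a measurement direction is measured by the prior covariance mapped into measurement space relative to the measurement noise covariance; the paper's exact criterion may differ.

```python
import numpy as np
from scipy.linalg import eigh

def informative_subspace(H, P, R, k):
    """Rank measurement directions by solving the generalized eigenproblem
    (H P H^T) v = lambda R v: directions with large lambda carry the most prior
    uncertainty relative to measurement noise. The top-k eigenvectors define a
    reduced measurement subspace T so that z' = T^T z can replace the full
    measurement vector in the (ensemble) Kalman update."""
    A = H @ P @ H.T                      # prior covariance mapped into measurement space
    lam, V = eigh(A, R)                  # generalized symmetric eigendecomposition
    order = np.argsort(lam)[::-1]        # most informative directions first
    return V[:, order[:k]], lam[order[:k]]
```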

  8. Label-free DNA quantification via a 'pipette, aggregate and blot' (PAB) approach with magnetic silica particles on filter paper.

    PubMed

    Li, Jingyi; Liu, Qian; Alsamarri, Hussein; Lounsbury, Jenny A; Haversitick, Doris M; Landers, James P

    2013-03-07

    Reliable measurement of DNA concentration is essential for a broad range of applications in biology and molecular biology, and for many of these, quantifying the nucleic acid content is inextricably linked to obtaining optimal results. In its simplest form, quantitative analysis of nucleic acids can be accomplished by UV-Vis absorbance and, in a more sophisticated format, by fluorimetry. A recently reported new concept, the 'pinwheel assay', involves a label-free approach for quantifying DNA through aggregation of paramagnetic beads in a rotating magnetic field. Here, we describe a simplified version of that assay adapted for execution using only a pipette and filter paper. The 'pipette, aggregate, and blot' (PAB) approach allows DNA to induce bead aggregation in a pipette tip through exposure to a magnetic field, followed by dispensing (blotting) onto filter paper. The filter paper immortalises the extent of aggregation, and digital images of the immortalized bead conformation, acquired with either a document scanner or a cell phone camera, allow for DNA quantification using a noncomplex algorithm. Human genomic DNA samples extracted from blood are quantified with the PAB approach and the results utilized to define the volume of sample used in a PCR reaction that is sensitive to input mass of template DNA. Integrating the PAB assay with paper-based DNA extraction and detection modalities has the potential to yield 'DNA quant-on-paper' devices that may be useful for point-of-care testing.

  9. Linear Quantum Systems: Non-Classical States and Robust Stability

    DTIC Science & Technology

    2016-06-29

    has a history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control ... analysis and control of quantum linear systems and their interactions with non-classical quantum fields by developing control theoretic concepts exploiting ...

  10. A New Methodology for the Extension of the Impact of Data Assimilation on Ocean Wave Prediction

    DTIC Science & Technology

    2008-07-01

    Assimilation method: The analysis fields used were corrected by an assimilation method developed at the Norwegian Meteorological Institute (Breivik and Reistad) ... becomes equal to the solution obtained by optimal interpolation (see Bratseth 1986 and Breivik and Reistad 1994). The iterations begin with ... updated accordingly. A more detailed description of the assimilation method is given in Breivik and Reistad (1994). 2.3 Kolmogorov-Zurbenko filters

  11. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    NASA Technical Reports Server (NTRS)

    Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle; hide

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on-board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performances, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
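
    For reference, the standard frequency-domain optimal (matched) filter that serves as the baseline in such benchmarks can be written in a few lines; normalization conventions and the X-IFU-specific processing are omitted, and the more advanced covariance-matrix methods compared in the paper are not shown.

```python
import numpy as np

def optimal_filter_energy(record, template, noise_psd):
    """Frequency-domain optimal (matched) filtering of a TES pulse record: weight
    each frequency bin by the pulse template divided by the noise power spectral
    density, then read off the best-fit amplitude, which scales with the
    deposited energy. `noise_psd` has the length of the rfft of `record`."""
    R = np.fft.rfft(record)
    T = np.fft.rfft(template)
    num = np.sum(np.conj(T) * R / noise_psd).real
    den = np.sum(np.abs(T) ** 2 / noise_psd).real
    return num / den
```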

  12. Performance Limits of Non-Line-of-Sight Optical Communications

    DTIC Science & Technology

    2015-05-01

    ... light-emitting diodes (LEDs), solar blind filters, and high efficiency solar blind photo detectors. In this project, we address the main challenges towards optimizing the UV communication system ...

  13. Lightweight filter architecture for energy efficient mobile vehicle localization based on a distributed acoustic sensor network.

    PubMed

    Kim, Keonwook

    2013-08-23

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade of an analog envelope detector and a digital exponential smoothing filter produces the velocity-vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters of the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.
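
    The digital half of the proposed cascade is a first-order exponential smoothing filter; a minimal sketch follows, with the smoothing constant left as a free parameter rather than the analytically optimized value derived in the paper.

```python
import numpy as np

def exponential_smoothing(x, alpha):
    """First-order exponential smoothing y[n] = alpha*x[n] + (1 - alpha)*y[n-1],
    applied to the envelope-detector output x."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1.0 - alpha) * y[n - 1]
    return y
```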

  14. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
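
    The key numerical step is applying the Sherman-Morrison-Woodbury identity so that only a small system, the size of the bright-source/mode basis, has to be solved; a generic sketch follows, with the matrices standing in for the paper's specific covariance model.

```python
import numpy as np

def woodbury_solve(Ninv, U, C, b):
    """Apply (N + U C U^T)^{-1} b via the Sherman-Morrison-Woodbury identity,
    given the (cheap) inverse of N, a tall matrix U of point-source/mode
    templates and their covariance C. Only a small k x k system is solved,
    which is what keeps the inversion tractable for large data covariances."""
    Nb = Ninv @ b
    K = np.linalg.inv(C) + U.T @ Ninv @ U          # small k x k matrix
    return Nb - Ninv @ U @ np.linalg.solve(K, U.T @ Nb)
```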

  15. A Robust Kalman Framework with Resampling and Optimal Smoothing

    PubMed Central

    Kautz, Thomas; Eskofier, Bjoern M.

    2015-01-01

    The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of applications for the presented analysis procedure range from movement analysis over medical imaging, brain-computer interfaces to robot navigation or meteorological studies. PMID:25734647

  16. Multiple vehicle tracking in aerial video sequence using driver behavior analysis and improved deterministic data association

    NASA Astrophysics Data System (ADS)

    Zhang, Xunxun; Xu, Hongke; Fang, Jianwu

    2018-01-01

    Along with the rapid development of unmanned aerial vehicle technology, multiple vehicle tracking (MVT) in aerial video sequences has received widespread interest for providing the required traffic information. Due to the camera motion and complex background, MVT in aerial video sequences poses unique challenges. We propose an efficient MVT algorithm via a driver behavior-based Kalman filter (DBKF) and an improved deterministic data association (IDDA) method. First, a hierarchical image registration method is put forward to compensate for the camera motion. Afterward, to improve the accuracy of the state estimation, we propose the DBKF module by incorporating the driver behavior into the Kalman filter, where an artificial potential field is introduced to reflect the driver behavior. Then, to implement the data association, a local optimization method is designed instead of global optimization. By introducing an adaptive operating strategy, the proposed IDDA method can also deal with the situation in which vehicles suddenly appear or disappear. Finally, comprehensive experiments on the DARPA VIVID data set and KIT AIS data set demonstrate that the proposed algorithm can generate satisfactory and superior results.

  17. On the ``optimal'' spatial distribution and directional anisotropy of the filter-width and grid-resolution in large eddy simulation

    NASA Astrophysics Data System (ADS)

    Toosi, Siavash; Larsson, Johan

    2017-11-01

    The accuracy of an LES depends directly on the accuracy of the resolved part of the turbulence. The continuing increase in computational power enables the application of LES to increasingly complex flow problems for which the LES community lacks the experience of knowing what the ``optimal'' or even an ``acceptable'' grid (or equivalently filter-width distribution) is. The goal of this work is to introduce a systematic approach to finding the ``optimal'' grid/filter-width distribution and their ``optimal'' anisotropy. The method is tested first on the turbulent channel flow, mainly to see if it is able to predict the right anisotropy of the filter/grid, and then on the more complicated case of flow over a backward-facing step, to test its ability to predict the right distribution and anisotropy of the filter/grid simultaneously, hence leading to a converged solution. This work has been supported by the Naval Air Warfare Center Aircraft Division at Pax River, MD, under contract N00421132M021. Computing time has been provided by the University of Maryland supercomputing resources (http://hpcc.umd.edu).

  18. A nowcasting technique based on application of the particle filter blending algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is shown to be superior to traditional forecasting methods and can be used to enhance the ability of nowcasting in operational weather forecasts.
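
    A minimal sketch of the final extrapolation stage only: a single semi-Lagrangian step that advects a 2D echo field along a given motion vector field by sampling at the upstream departure points. The field, motion components, and step size are all illustrative; the tracking and particle filter blending stages are not shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian_step(field, u, v, dt=1.0):
    """Advect a 2D radar echo field with motion components u, v (pixels/step)
    by interpolating the field at the upstream departure points."""
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dep_y = yy - dt * v            # trace backwards along the motion vectors
    dep_x = xx - dt * u
    return map_coordinates(field, [dep_y, dep_x], order=1, mode="nearest")

echo = np.random.default_rng(0).random((100, 100))   # placeholder echo mosaic
u = np.full_like(echo, 2.0)                          # uniform motion, 2 px east/step
v = np.full_like(echo, -1.0)                         # 1 px north per step
forecast = semi_lagrangian_step(echo, u, v)
```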

  19. High resolution near on-axis digital holography using constrained optimization approach with faster convergence

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2017-09-01

    A constrained optimization approach with faster convergence is proposed to recover the complex object field from a near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back propagated to the image plane using the Fresnel back-propagation method. This approach provides high-resolution images compared with the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach due to the subtraction of two DC terms in the cost function. We report this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object to retrieve the high-resolution image without DC and twin image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image. We also demonstrate this technique by imaging yeast cells.
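
    A minimal sketch of the back-propagation step only, using a standard textbook Fresnel transfer function rather than the authors' exact implementation; the wavelength, pixel pitch, and distance are placeholder values, and the constrained optimization recovery is not shown.

```python
import numpy as np

def fresnel_backpropagate(field, wavelength, dx, z):
    """Back-propagate a complex field by distance z with the Fresnel transfer
    function; conjugating the transfer function gives the backward direction.
    field: 2D complex array, dx: pixel pitch [m], wavelength and z in metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * 2 * np.pi / wavelength * z) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.conj(H))

hologram_field = np.ones((512, 512), dtype=complex)        # placeholder recovered field
image_plane = fresnel_backpropagate(hologram_field, 632.8e-9, 3.45e-6, 0.05)
```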

  20. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters

    PubMed Central

    Deso, Steven E.; Idakoji, Ibrahim A.; Muelly, Michael C.; Kuo, William T.

    2016-01-01

    Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board–approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters. PMID:27247483

  1. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters.

    PubMed

    Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T

    2016-06-01

    Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters.

  2. Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

    USGS Publications Warehouse

    Safak, Erdal

    1989-01-01

    This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.

  3. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises

    PubMed Central

    Grama, Ion; Liu, Quansheng

    2017-01-01

    In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667
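
    A minimal sketch of the underlying ROAD statistic (not the improved ROADGI variant): for every pixel, the m smallest absolute differences to its 8 neighbours are summed, and large values flag likely impulse-corrupted pixels. The padded-array implementation below is purely illustrative.

```python
import numpy as np

def road(img, m=4):
    """Rank-Ordered Absolute Differences: sum of the m smallest absolute
    differences between each pixel and its 8 neighbours."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            diffs.append(np.abs(img - shifted))
    diffs = np.sort(np.stack(diffs), axis=0)   # order the 8 neighbour differences
    return diffs[:m].sum(axis=0)               # sum of the m smallest

noisy = np.random.default_rng(0).integers(0, 256, (64, 64))
impulse_score = road(noisy)                    # threshold this map to flag impulses
```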

  4. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.

    PubMed

    Jin, Qiyu; Grama, Ion; Liu, Quansheng

    2017-01-01

    In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise.

  5. A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Meldi, M.; Poux, A.

    2017-10-01

    A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman Filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which can be quantified as an increase of 10%-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these Data Assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.

  6. A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters

    PubMed Central

    Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio

    2017-01-01

    Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive computing performance required of embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137

  7. Comparison of cryogenic low-pass filters.

    PubMed

    Thalmann, M; Pernau, H-F; Strunk, C; Scheer, E; Pietsch, T

    2017-11-01

    Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are routinely available now in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristic for cryogenic low-level and quantum measurement applications at temperatures down to few millikelvin.

  8. Comparison of cryogenic low-pass filters

    NASA Astrophysics Data System (ADS)

    Thalmann, M.; Pernau, H.-F.; Strunk, C.; Scheer, E.; Pietsch, T.

    2017-11-01

    Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are routinely available now in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristic for cryogenic low-level and quantum measurement applications at temperatures down to few millikelvin.

  9. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filters are examined for the effectiveness of their gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
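
    A minimal sketch of the analysis step common to both OI and the Kalman filter: a gain matrix K is applied to the innovation y - H x_f. The two methods differ in how the forecast error covariance P is obtained (statistically modelled for OI, dynamically propagated for the KB filter); the dimensions and values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                        # state size, number of observations
x_f = rng.standard_normal(n)        # forecast state
P = np.eye(n)                       # forecast error covariance (modelled or propagated)
H = rng.standard_normal((m, n))     # observation operator
R = 0.1 * np.eye(m)                 # observation error covariance
y = H @ x_f + 0.1 * rng.standard_normal(m)

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # gain matrix
x_a = x_f + K @ (y - H @ x_f)                    # analysis state
P_a = (np.eye(n) - K @ H) @ P                    # analysis error covariance
```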

  10. Study of electron transport across the magnetic filter of NIO1 negative ion source

    NASA Astrophysics Data System (ADS)

    Veltri, P.; Sartori, E.; Cavenago, M.; Serianni, G.; Barbisan, M.; Zaniol, B.

    2017-08-01

    In the framework of the accompanying activities in support of the ITER NBI test facility, a relatively compact radiofrequency (RF) ion source, named NIO1 (Negative Ion Optimization, phase 1), was developed in Padua, Italy, in collaboration between Consorzio RFX and INFN. Negative hydrogen ions are formed in a cold, inductively coupled plasma with a 2 MHz, 2.5 kW external antenna. A low electron energy is necessary to increase the survival probability of negative ions in the proximity of the extraction area. This goal is accomplished by means of a transversal magnetic field, which confines the high-energy electrons better than the colder electrons. In NIO1, this filter field can assume different topologies, exploiting different sets of magnets and high-current paths. In this contribution we study the properties of the plasma in the vicinity of the extraction region for two different B field configurations. For this experiment the source was operated in pure volume conditions, in hydrogen and oxygen plasmas. The experimental data, measured by spectroscopic means, are also interpreted with the support of finite element simulations of the magnetic field and a dedicated particle-in-cell (PIC) numerical model for the electron transport across it, including Coulomb and gas collisions.

  11. Ares-I Bending Filter Design using a Constrained Optimization Approach

    NASA Technical Reports Server (NTRS)

    Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth

    2008-01-01

    The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.

  12. Optimal design of FIR triplet halfband filter bank and application in image coding.

    PubMed

    Kha, H H; Tuan, H D; Nguyen, T Q

    2011-02-01

    This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.
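
    A minimal sketch of a basic halfband low-pass FIR filter built with a windowed sinc; every second coefficient away from the centre is exactly zero, which is the defining halfband property. This is a simple construction for illustration, not the SDP design with peak-error and regularity constraints proposed in the paper.

```python
import numpy as np

def halfband_lowpass(num_taps=31, beta=8.0):
    """Windowed-sinc halfband low-pass filter (cutoff at a quarter of the
    sample rate); odd length, symmetric (linear phase)."""
    assert num_taps % 2 == 1
    n = np.arange(num_taps) - (num_taps - 1) // 2
    h = 0.5 * np.sinc(n / 2.0) * np.kaiser(num_taps, beta)
    return h / h.sum()                      # normalise DC gain to 1

h = halfband_lowpass()
c = (len(h) - 1) // 2
# halfband property: taps at even offsets from the centre are zero
print(np.allclose(h[c + 2::2], 0), np.allclose(h[c - 2::-2], 0))   # True True
```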

  13. Optimal design of active EMC filters

    NASA Astrophysics Data System (ADS)

    Chand, B.; Kut, T.; Dickmann, S.

    2013-07-01

    A recent trend in the automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmentally friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore, the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

  14. Optimization of internet content filtering-Combined with KNN and OCAT algorithms

    NASA Astrophysics Data System (ADS)

    Guo, Tianze; Wu, Lingjing; Liu, Jiaming

    2018-04-01

    Faced with rampant illegal content on the Internet, traditional ways of filtering information, namely keyword recognition and manual screening, perform increasingly poorly. Based on this, this paper uses the OCAT algorithm nested with a KNN classification algorithm to construct a corpus training library that can dynamically learn and update, so that the filter corpus keeps pace with constantly updated illegal content on the network, including text and pictures, and illegal content and its sources can be better filtered and investigated. Future work will focus on simplifying and updating the recognition and comparison algorithms and on optimizing the corpus learning ability in order to improve filtering efficiency and save time and resources.
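
    A minimal sketch of the KNN classification stage only, on generic feature vectors (e.g. bag-of-words counts of a document); the OCAT nesting and the dynamically updated corpus of the paper are not reproduced, and all names below are illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """Plain k-nearest-neighbour majority vote."""
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)       # distances to all samples
        nearest = y_train[np.argsort(d)[:k]]          # labels of the k closest
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.random((100, 20))                 # toy feature vectors
y = rng.integers(0, 2, 100)               # 0 = legitimate, 1 = illegal content
print(knn_predict(X, y, rng.random((3, 20))))
```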

  15. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.

  16. Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.; Nickerson, J.

    1989-01-01

    The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
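
    A minimal sketch of the LQR half of an LQG design for a toy double-integrator axis model; the weight matrices play the same role as the weight parameters varied in the study, but the model and values are illustrative, not the actual 70-m antenna dynamics.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # position / rate states
B = np.array([[0.0], [1.0]])
Q = np.diag([100.0, 1.0])                  # state weighting
R = np.array([[1.0]])                      # control weighting

P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation solution
K = np.linalg.inv(R) @ B.T @ P             # optimal state-feedback gain vector
print(K)
```

    A Kalman filter gain for the estimator half can be obtained from the dual Riccati equation in the same way, with process and measurement noise covariances taking the place of Q and R.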

  17. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter must be established first. The parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD aims to solve problems present in ASTFA. The Gauss-Newton type method, which is applied to solve the optimization problem in ASTFA, is irreplaceable there and very sensitive to initial values. In ASNBD, however, a more appropriate optimization method such as the genetic algorithm (GA) can be utilized to solve the optimization problem. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.

  18. Plate/shell structure topology optimization of orthotropic material for buckling problem based on independent continuous topological variables

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang

    2017-10-01

    The purpose of the present work is to study the buckling problem with plate/shell topology optimization of orthotropic material. A model of buckling topology optimization is established based on the independent, continuous, and mapping method, which considers structural mass as the objective and buckling critical loads as constraints. Firstly, the composite exponential function (CEF) and power function (PF) are introduced as filter functions to recognize the element mass, the element stiffness matrix, and the element geometric stiffness matrix. The filter functions of the orthotropic material stiffness are deduced. Then these filter functions are put into the buckling topology optimization of a differential equation to analyze the design sensitivity. Furthermore, the buckling constraints are approximately expressed as explicit functions with respect to the design variables based on the first-order Taylor expansion. The objective function is standardized based on the second-order Taylor expansion. Therefore, the optimization model is translated into a quadratic program. Finally, the dual sequence quadratic programming (DSQP) algorithm and the global convergence method of moving asymptotes algorithm with two different filter functions (CEF and PF) are applied to solve the optimal model. Three numerical results show that DSQP&CEF has the best performance in terms of structural mass and discreteness.

  19. Quantum-behaved particle swarm optimization for the synthesis of fibre Bragg gratings filter

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Sun, Yunxu; Yao, Yong; Tian, Jiajun; Cong, Shan

    2011-12-01

    A method based on the quantum-behaved particle swarm optimization (QPSO) algorithm is presented to design a bandpass filter from fibre Bragg gratings. In contrast to other optimization algorithms such as the genetic algorithm and the particle swarm optimization algorithm, this method is simpler and easier to implement. To demonstrate the effectiveness of the QPSO algorithm, we consider a bandpass filter with a half-bandwidth of 0.05 nm and a Bragg wavelength of 1550 nm; the 2 cm grating length is divided into 40 uniform sections, the index modulation of each section is the quantity to be optimized, and the whole feasible solution space is searched for the index modulation. After the index modulation profile is known for all the sections, the transfer matrix method is used to verify the final optimal index modulation by calculating the reflection spectrum. The results show that the group delay is less than 12 ps in band and the calculated dispersion is relatively flat inside the passband. It is further found that the reflection spectrum has sidelobes around -30 dB and the worst in-band dispersion value is less than 200 ps/nm. In addition, for this design, it takes approximately a few minutes to find acceptable index modulation values with a notebook computer.

  20. Dem Reconstruction Using Light Field and Bidirectional Reflectance Function from Multi-View High Resolution Spatial Images

    NASA Astrophysics Data System (ADS)

    de Vieilleville, F.; Ristorcelli, T.; Delvit, J.-M.

    2016-06-01

    This paper presents a method for dense DSM reconstruction from high-resolution, mono-sensor, passive, spaceborne panchromatic image sequences. The interest of our approach is four-fold. Firstly, we extend the core of light field approaches using an explicit BRDF model from the Image Synthesis community which is more realistic than the Lambertian model. The chosen model is the Cook-Torrance BRDF which enables us to model rough surfaces with specular effects using specific material parameters. Secondly, we extend light field approaches for non-pinhole sensors and non-rectilinear motion by using a proper geometric transformation on the image sequence. Thirdly, we produce a 3D volume cost embodying all the tested possible heights and filter it using simple methods such as Volume Cost Filtering or variational optimal methods. We have tested our method on a Pleiades image sequence over various locations with dense urban buildings and report encouraging results with respect to classic multi-label methods such as MIC-MAC, or more recent pipelines such as S2P. Last but not least, our method also produces maps of material parameters on the estimated points, allowing us to simplify building classification or road extraction.

  1. On the Performance of the Martin Digital Filter for High- and Low-pass Applications

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1979-01-01

    A nonrecursive numerical filter is described in which the weighting sequence is optimized by minimizing the excursion from the ideal rectangular filter in a least squares sense over the entire domain of normalized frequency. Additional corrections to the weights in order to reduce overshoot oscillations (Gibbs phenomenon) and to insure unity gain at zero frequency for the low pass filter are incorporated. The filter is characterized by a zero phase shift for all frequencies (due to a symmetric weighting sequence), a finite memory and stability, and it may readily be transformed to a high pass filter. Equations for the filter weights and the frequency response function are presented, and applications to high and low pass filtering are examined. A discussion of optimization of high pass filter parameters for a rather stringent response requirement is given in an application to the removal of aircraft low frequency oscillations superimposed on remotely sensed ocean surface profiles. Several frequency response functions are displayed, both in normalized frequency space and in period space. A comparison of the performance of the Martin filter with some other commonly used low pass digital filters is provided in an application to oceanographic data.
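
    A minimal sketch of the same design idea, assuming a simple least-squares fit: a symmetric (and hence zero-phase when centred) FIR low-pass whose weights minimize the squared deviation from an ideal rectangular response over the whole normalized-frequency axis. This illustrates the principle only and is not the Martin weighting sequence itself.

```python
import numpy as np

def least_squares_lowpass(num_taps=51, cutoff=0.2, grid=2048):
    """Symmetric FIR low-pass fitted to an ideal rectangular response by
    least squares over normalized frequency [0, 0.5]."""
    assert num_taps % 2 == 1
    M = (num_taps - 1) // 2
    f = np.linspace(0.0, 0.5, grid)                 # normalized frequency
    desired = (f <= cutoff).astype(float)           # ideal rectangular response
    # real, even filter: H(f) = h0 + 2 * sum_k h_k cos(2 pi k f)
    basis = np.hstack([np.ones((grid, 1)),
                       2 * np.cos(2 * np.pi * np.outer(f, np.arange(1, M + 1)))])
    coeffs, *_ = np.linalg.lstsq(basis, desired, rcond=None)
    return np.concatenate([coeffs[:0:-1], coeffs])  # mirror to a symmetric sequence

h = least_squares_lowpass()
print(h.sum())        # close to 1: near-unity gain at zero frequency
```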

  2. Estimation of positive semidefinite correlation matrices by using convex quadratic semidefinite programming.

    PubMed

    Fushiki, Tadayoshi

    2009-07-01

    The correlation matrix is a fundamental statistic that is used in many fields. For example, GroupLens, a collaborative filtering system, uses the correlation between users for predictive purposes. Since the correlation is a natural similarity measure between users, the correlation matrix may be used in the Gram matrix in kernel methods. However, the estimated correlation matrix sometimes has a serious defect: although the correlation matrix is originally positive semidefinite, the estimated one may not be positive semidefinite when not all ratings are observed. To obtain a positive semidefinite correlation matrix, the nearest correlation matrix problem has recently been studied in the fields of numerical analysis and optimization. However, statistical properties are not explicitly used in such studies. To obtain a positive semidefinite correlation matrix, we assume the approximate model. By using the model, an estimate is obtained as the optimal point of an optimization problem formulated with information on the variances of the estimated correlation coefficients. The problem is solved by a convex quadratic semidefinite program. A penalized likelihood approach is also examined. The MovieLens data set is used to test our approach.
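
    For illustration of the nearest-correlation-matrix idea, the sketch below uses Higham-style alternating projections onto the positive semidefinite cone and the set of unit-diagonal matrices. This is a simpler, unweighted substitute for the convex quadratic semidefinite program (with variance information) studied in the paper.

```python
import numpy as np

def nearest_correlation_matrix(A, n_iter=100):
    """Alternating projections with a Dykstra correction: PSD cone, then
    unit diagonal. Returns a correlation matrix close to A."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(n_iter):
        R = Y - dS                                   # Dykstra correction
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = V @ np.diag(np.clip(w, 0, None)) @ V.T   # project onto the PSD cone
        dS = X - R
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)                     # project onto unit diagonal
    return Y

A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.9],
              [0.7, 0.9, 1.0]])
A[0, 2] = A[2, 0] = -0.3                             # break positive semidefiniteness
C = nearest_correlation_matrix(A)
print(np.linalg.eigvalsh(C) >= -1e-10)               # all True
```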

  3. Application of an Optimal Search Strategy for the DNAPL Source Identification to a Field Site in Nanjing, China

    NASA Astrophysics Data System (ADS)

    Longting, M.; Ye, S.; Wu, J.

    2014-12-01

    Identifying and removing the DNAPL source in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search strategy, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. The comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, some specific modifications and additional work have been carried out as follows. Random K fields are generated after fitting the measured K data to a variogram model. The locations of potential sources that are given initial weights are targeted based on the field survey, with multiple potential source locations around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is then estimated, which will later be optimized by the simplex method or a GA. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Z., and Pinder, G. F. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support from the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.

  4. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Wang, S L; Diachin, D P

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

  5. Designing a Wien Filter Model with General Particle Tracer

    NASA Astrophysics Data System (ADS)

    Mitchell, John; Hofler, Alicia

    2017-09-01

    The Continuous Electron Beam Accelerator Facility injector employs a beamline component called a Wien filter which is typically used to select charged particles of a certain velocity. The Wien filter is also used to rotate the polarization of a beam for parity violation experiments. The Wien filter consists of perpendicular electric and magnetic fields. The electric field changes the spin orientation, but also imposes a transverse kick which is compensated for by the magnetic field. The focus of this project was to create a simulation of the Wien filter using General Particle Tracer. The results from these simulations were vetted against machine data to analyze the accuracy of the Wien model. Due to the close agreement between simulation and experiment, the data suggest that the Wien filter model is accurate. The model allows a user to input either the desired electric or magnetic field of the Wien filter along with the beam energy as parameters, and is able to calculate the perpendicular field strength required to keep the beam on axis. The updated model will aid in future diagnostic tests of any beamline component downstream of the Wien filter, and allow users to easily calculate the electric and magnetic fields needed for the filter to function properly. Funding support provided by DOE Office of Science's Student Undergraduate Laboratory Internship program.
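
    A minimal sketch of the underlying velocity-balance condition of a Wien filter: the electric force qE must cancel the magnetic force qvB, so B = E/v for an on-axis beam. The beam energy and field values below are illustrative, not CEBAF operating settings.

```python
import numpy as np

c = 299_792_458.0                 # speed of light, m/s
m_e_c2 = 0.511e6                  # electron rest energy, eV

def wien_b_field(e_field_v_per_m, kinetic_energy_ev):
    """Magnetic field (Tesla) that balances the electric force for an
    electron beam of the given kinetic energy."""
    gamma = 1.0 + kinetic_energy_ev / m_e_c2
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    v = beta * c                  # relativistic beam velocity
    return e_field_v_per_m / v

print(wien_b_field(1.0e6, 130e3))   # ~5.5e-3 T for a 130 keV beam and 1 MV/m field
```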

  6. Application of speed-enhanced spatial domain correlation filters for real-time security monitoring

    NASA Astrophysics Data System (ADS)

    Gardezi, Akber; Bangalore, Nagachetan; Al-Kandri, Ahmed; Birch, Philip; Young, Rupert; Chatwin, Chris

    2011-11-01

    A speed-enhanced space-variant correlation filter is described which has been designed to be invariant to changes in the orientation and scale of the target object while also being spatially variant, i.e. the filter function becomes dependent on local clutter conditions within the image. The speed enhancement of the filter is due to the use of optimization techniques employing low-pass filtering to restrict kernel movement to regions of interest. The detection and subsequent identification capability of the two-stage process has been evaluated in highly cluttered backgrounds using both visible and thermal imagery acquired from civil and defense domains, along with associated training data sets for target detection and classification. In this paper a series of tests has been conducted in multiple scenarios relating to situations that pose a security threat. Performance metrics comprising peak-to-correlation energy (PCE) and peak-to-sidelobe ratio (PSR) measurements of the correlation output have been calculated to allow the definition of a recognition criterion. The hardware implementation of the system is discussed in terms of Field Programmable Gate Array (FPGA) chipsets, with implementation bottlenecks and their solutions considered.

  7. Design of almost symmetric orthogonal wavelet filter bank via direct optimization.

    PubMed

    Murugesan, Selvaraaju; Tay, David B H

    2012-05-01

    It is a well-known fact that (compact-support) dyadic wavelets [based on the two channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetric property. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FB. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function, which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and symmetric wavelet function.

  8. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
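
    A minimal sketch of the basic operation, assuming a simple top-hat weighting rather than the optimized kernels of the paper: the time-ordered visibilities are Fourier transformed along time, fringe rates outside a chosen band are zeroed, and the data are transformed back. All parameter names are ours.

```python
import numpy as np

def fringe_rate_filter(vis, dt, fr_min, fr_max):
    """Apply a top-hat fringe-rate filter to a (time, baseline/frequency)
    visibility array sampled every dt seconds."""
    rates = np.fft.fftfreq(vis.shape[0], d=dt)          # fringe rates in Hz
    spec = np.fft.fft(vis, axis=0)                      # transform along time
    keep = (rates >= fr_min) & (rates <= fr_max)
    spec[~keep] = 0.0                                   # zero unwanted fringe rates
    return np.fft.ifft(spec, axis=0)

rng = np.random.default_rng(0)
vis = rng.standard_normal((1024, 16)) + 1j * rng.standard_normal((1024, 16))
filtered = fringe_rate_filter(vis, dt=10.0, fr_min=-0.2e-3, fr_max=1.0e-3)
```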

  9. LCTV Holographic Imaging

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1996-01-01

    Astronauts are required to interface with complex systems that require sophisticated displays to communicate effectively. Lightweight, head-mounted real-time displays that present holographic images for comfortable viewing may be the ideal solution. We describe an implementation of a liquid crystal television (LCTV) as a spatial light modulator (SLM) for the display of holograms. The implementation required the solution of a complex set of problems. These include field calculations, determination of the LCTV-SLM complex transmittance characteristics, and a precise knowledge of the signal mapping between the LCTV and the frame-grabbing board that controls it. Realizing the hologram is further complicated by the coupling that occurs between the phase and amplitude in the LCTV transmittance. A single drive signal (a gray level signal from a framegrabber) determines both amplitude and phase. Since they are not independently controllable (as is true in the ideal SLM), one must deal with the problem of optimizing (in some sense) the hologram based on this constraint. Solutions for the above problems have been found. An algorithm has been developed for field calculations that uses an efficient outer-product formulation. Juday's MEDOF (Minimum Euclidean Distance Optimal Filter) algorithm, originally used for filter calculations, has been successfully adapted to handle metrics appropriate for holography. This has solved the problem of optimizing the hologram to the constraints imposed by coupling. Two laboratory methods have been developed for determining an accurate mapping of framegrabber pixels to LCTV pixels. A friendly software system has been developed that integrates the hologram calculation and realization process using a simple set of instructions. The computer code and all the laboratory measurement techniques determining SLM parameters have been proven with the production of a high quality test image.

  10. Engineering applications of metaheuristics: an introduction

    NASA Astrophysics Data System (ADS)

    Oliva, Diego; Hinojosa, Salvador; Demeshko, M. V.

    2017-01-01

    Metaheuristic algorithms are important tools that in recent years have been used extensively in several fields. In engineering, a large number of problems can be solved from an optimization point of view. This paper is an introduction to how metaheuristics can be used to solve complex engineering problems. Their use produces accurate results in problems that are computationally expensive. Experimental results support the performance obtained by the selected algorithms in such specific problems as digital filter design, image processing and solar cell design.

  11. Removal of Surface-Reflected Light for the Measurement of Remote-Sensing Reflectance from an Above-Surface Platform

    DTIC Science & Technology

    2010-12-01

    remote-sensing reflectance) can be highly inaccurate if a spectrally constant value is applied (although errors can be reduced by carefully filtering measured raw data). To remove surface-reflected light in field measurements of remote-sensing reflectance, a spectral optimization approach was applied, with results compared with those from remote sensing models and from direct measurements. The agreement from different determinations suggests that reasonable results for remote sensing reflectance of clear

  12. Removal of Surface-Reflected Light for the Measurement of Remote-Sensing Reflectance from an Above-Surface Platform

    DTIC Science & Technology

    2010-12-06

    remote-sensing reflectance) can be highly inaccurate if a spectrally constant value is applied (although errors can be reduced by carefully filtering measured raw data). To remove surface-reflected light in field measurements of remote-sensing reflectance, a spectral optimization approach was applied, with results compared with those from remote sensing models and from direct measurements. The agreement from different determinations suggests that reasonable results for remote sensing reflectance of clear

  13. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.

  14. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    PubMed

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

    The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation by utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.
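
    A minimal sketch of the sparse reconstruction step, assuming a generic iterative soft-thresholding (ISTA) solver for the L1-regularized least-squares problem; the filter transmission matrix and spectrum below are synthetic, and the dictionary learning stage of the paper is not shown.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding.
    A: filter transmission matrix (rows = filters, cols = spectral bins),
    y: filtered measurements."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.random((32, 256))                    # 32 broadband filters, 256 spectral bins
x_true = np.zeros(256); x_true[[40, 120, 200]] = [1.0, 0.5, 0.8]   # sparse spectrum
y = A @ x_true + 0.01 * rng.standard_normal(32)
x_hat = ista(A, y)                           # reconstructed spectrum estimate
```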

  15. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    PubMed Central

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation by utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer. PMID:29470406

  16. Classifying EEG for Brain-Computer Interface: Learning Optimal Filters for Dynamical System Features

    PubMed Central

    Song, Le; Epps, Julien

    2007-01-01

    Classification of multichannel EEG recordings during motor imagination has been exploited successfully for brain-computer interfaces (BCI). In this paper, we consider EEG signals as the outputs of a networked dynamical system (the cortex), and exploit synchronization features from the dynamical system for classification. Herein, we also propose a new framework for learning optimal filters automatically from the data, by employing a Fisher ratio criterion. Experimental evaluations comparing the proposed dynamical system features with the CSP and the AR features reveal their competitive performance during classification. Results also show the benefits of employing the spatial and the temporal filters optimized using the proposed learning approach. PMID:18364986
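
    A minimal sketch of a Fisher-ratio-based filter: the direction w maximizing (w^T S_b w)/(w^T S_w w) is found as the leading generalized eigenvector of the between- and within-class scatter matrices. This mirrors the criterion named in the abstract but not the paper's exact feature pipeline; the data below are synthetic.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_spatial_filter(X1, X2):
    """Return the spatial filter maximizing the Fisher ratio between two
    classes. X1, X2: (trials x channels) feature matrices."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sb = np.outer(m1 - m2, m1 - m2)                           # between-class scatter
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # within-class scatter
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))    # generalized eigenproblem
    return vecs[:, -1]                                        # largest-ratio direction

rng = np.random.default_rng(0)
X1 = rng.standard_normal((50, 8)) + 0.5      # class 1 trials (e.g. left-hand imagery)
X2 = rng.standard_normal((60, 8))            # class 2 trials
w = fisher_spatial_filter(X1, X2)
```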

  17. Introducing passive acoustic filter in acoustic based condition monitoring: Motor bike piston-bore fault identification

    NASA Astrophysics Data System (ADS)

    Jena, D. P.; Panigrahi, S. N.

    2016-03-01

    The requirement of designing a sophisticated digital band-pass filter for acoustic-based condition monitoring has been eliminated by introducing a passive acoustic filter in the present work. So far, no one has attempted to explore the possibility of implementing passive acoustic filters as a pre-conditioner in acoustic-based condition monitoring. In order to enhance acoustic-based condition monitoring, a passive acoustic band-pass filter has been designed and deployed. Towards achieving an efficient band-pass acoustic filter, a generalized design methodology has been proposed to design and optimize the desired acoustic filter using multiple filter components in series. An appropriate objective function has been identified for a genetic algorithm (GA) based optimization technique with multiple design constraints. In addition, the robustness of the proposed method has been demonstrated by designing a band-pass filter using an n-branch Quincke tube, a high-pass filter and multiple Helmholtz resonators. The performance of the designed acoustic band-pass filter has been shown by investigating the piston-bore defect of a motorbike using the engine noise signature. Introducing a passive acoustic filter into acoustic-based condition monitoring significantly enhances machine-learning-based fault identification. This is also a first attempt of its kind.

  18. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  19. Planar differential mobility spectrometer as a pre-filter for atmospheric pressure ionization mass spectrometry

    PubMed Central

    Schneider, Bradley B.; Covey, Thomas R.; Coy, Stephen L.; Krylov, Evgeny V.

    2010-01-01

    Ion filters based on planar DMS can be integrated with the inlet configuration of most mass spectrometers, and are able to enhance the quality of mass analysis and quantitative accuracy by reducing chemical noise, and by pre-separating ions of similar mass. This paper is the first in a series of three papers describing the optimization of DMS / MS instrumentation. In this paper, the important physical parameters of a planar DMS-MS interface, including analyzer geometry, analyzer coupling to a mass spectrometer, and transport gas flow control, are considered. The goal is to optimize ion transmission and transport efficiency, provide optimal and adjustable resolution, and produce stable operation under conditions of high sample contamination. We discuss the principles of DMS separations and highlight the theoretical underpinnings. The main differences between planar and cylindrical geometries are presented, including a discussion of the advantages and disadvantages of RF ion focusing. In addition, we present a description of optimization of the frequency and amplitude of the DMS fields for resolution and ion transmission, and a discussion of the influence and importance of ion residence time in DMS. We have constructed a mass spectrometer interface for planar geometries that takes advantage of atmospheric pressure gas dynamic principles, rather than ion focusing, to minimize ion losses from diffusion in the analyzer and to maximize total ion transport into the mass spectrometer. A variety of experimental results have been obtained that illustrate the performance of this type of interface, including tests of resistance to high contamination levels, and the separation of stereoisomers. In a subsequent publication the control of the chemical interactions that drive the separation process of a DMS / MS system will be considered. In a third publication we describe novel electronics designed to provide the high-voltage asymmetric waveform fields (SV) required for these devices as well as the effects of different waveforms. PMID:21278836

  20. Teaching learning based optimization-functional link artificial neural network filter for mixed noise reduction from magnetic resonance image.

    PubMed

    Kumar, M; Mishra, S K

    2017-01-01

    Clinical magnetic resonance imaging (MRI) images may become corrupted due to the presence of a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need to develop a nonlinear adaptive filter that adapts itself to the requirement and can be applied effectively for the suppression of mixed noise from different MRI images. In view of this, a novel nonlinear neural-network-based adaptive filter, i.e. a functional link artificial neural network (FLANN) whose weights are trained by a recently developed derivative-free meta-heuristic technique, i.e. teaching-learning-based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and evaluating a nonparametric statistical test. The convergence curve and computational time are also included to investigate the efficiency of the proposed as well as the competing filters. The simulation outcomes show that the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and utilized for removing different noises and artifacts from other medical images more competently.
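
    The sketch below is a hedged illustration of the TLBO optimizer mentioned above, used to train the weights of a simple FIR denoising filter; the FLANN expansion itself is not shown, and the cost function, population size, and signal are placeholder assumptions.

```python
import numpy as np

def tlbo_minimize(cost, dim, pop_size=20, n_iter=100, bounds=(-1.0, 1.0), seed=0):
    """Teaching-learning-based optimization of a real-valued cost function."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(x) for x in pop])
    for _ in range(n_iter):
        # Teacher phase: move the class toward the best learner.
        teacher = pop[np.argmin(fit)]
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        new = pop + rng.random((pop_size, dim)) * (teacher - tf * pop.mean(axis=0))
        new = np.clip(new, lo, hi)
        new_fit = np.array([cost(x) for x in new])
        better = new_fit < fit
        pop[better], fit[better] = new[better], new_fit[better]
        # Learner phase: each learner interacts with a random partner.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = (pop[i] - pop[j]) if fit[i] < fit[j] else (pop[j] - pop[i])
            cand = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            c_fit = cost(cand)
            if c_fit < fit[i]:
                pop[i], fit[i] = cand, c_fit
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: train 5-tap filter weights to suppress synthetic additive noise.
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 8 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)

def mse_cost(w):
    return np.mean((np.convolve(noisy, w, mode="same") - clean) ** 2)

w_opt, err = tlbo_minimize(mse_cost, dim=5)
```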

  1. Technical note: optimization for improved tube-loading efficiency in the dual-energy computed tomography coupled with balanced filter method.

    PubMed

    Saito, Masatoshi

    2010-08-01

    This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating to the acquisition of electron density information, which is essential for treatment planning in radiotherapy. For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, "Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method," Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.

  2. Design and experimentally measure a high performance metamaterial filter

    NASA Astrophysics Data System (ADS)

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    The metamaterial filter is a promising optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance matching condition between the metamaterial filter and free space leads to the transmission band. Measured results show that the proposed metamaterial filter achieves high-performance transmission for both TM and TE polarization directions. Moreover, a high transmission rate can also be obtained when the incident angle reaches 45°. Further measured results show that the transmission band can be expanded by optimizing the structural parameters, and the central frequency of the transmission band can be adjusted in the same way. The physical mechanism behind the central-frequency shift is explained by establishing an equivalent resonant circuit model.

  3. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index ( d' ) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength ( β ) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  4. Research on spectroscopic imaging. Volume 1: Technical discussion. [birefringent filters

    NASA Technical Reports Server (NTRS)

    Title, A.; Rosenberg, W.

    1979-01-01

    The principles of operation and the capabilities of birefringent filter systems are examined. Topics covered include: Lyot, Solc, and partial polarizer filters; transmission profile management; tuning birefringent filters; field of view; bandpass control; engineering considerations; and recommendations. Improvements for field-of-view effects and the development of birefringent filters for spaceflight are discussed in appendices.

  5. Optimally Distributed Kalman Filtering with Data-Driven Communication †

    PubMed Central

    Dormann, Katharina

    2018-01-01

    For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392

  6. The mean field theory in EM procedures for blind Markov random field image restoration.

    PubMed

    Zhang, J

    1993-01-01

    A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.

  7. An optimized Kalman filter for the estimate of trunk orientation from inertial sensors data during treadmill walking.

    PubMed

    Mazzà, Claudia; Donati, Marco; McCamley, John; Picerno, Pietro; Cappozzo, Aurelio

    2012-01-01

    The aim of this study was the fine tuning of a Kalman filter with the intent to provide optimal estimates of lower trunk orientation in the frontal and sagittal planes during treadmill walking at different speeds using measured linear acceleration and angular velocity components represented in a local system of reference. Data were simultaneously collected using both an inertial measurement unit (IMU) and a stereophotogrammetric system from three healthy subjects walking on a treadmill at natural, slow and fast speeds. These data were used to estimate the parameters of the Kalman filter that minimized the difference between the trunk orientations provided by the filter and those obtained through stereophotogrammetry. The optimized parameters were then used to process the data collected from a further 15 healthy subjects of both genders and different anthropometry performing the same walking tasks with the aim of determining the robustness of the filter set up. The filter proved to be very robust. The root mean square values of the differences between the angles estimated through the IMU and through stereophotogrammetry were lower than 1.0° and the correlation coefficients between the corresponding curves were greater than 0.91. The proposed filter design can be used to reliably estimate trunk lateral and frontal bending during walking from inertial sensor data. Further studies are needed to determine the filter parameters that are most suitable for other motor tasks. Copyright © 2011. Published by Elsevier B.V.
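
    As a hedged, simplified analogue of the filter tuning described above, the sketch below fuses one gyroscope axis with an accelerometer-derived inclination angle in a two-state Kalman filter; all noise parameters and signals are illustrative assumptions, not the study's tuned values.

```python
import numpy as np

def kf_inclination(gyro, acc_angle, dt, q_angle=1e-4, q_bias=1e-6, r_acc=1e-2):
    """Estimate an inclination angle from gyro rate and accelerometer angle measurements."""
    x = np.zeros(2)                    # state: [angle, gyro bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    B = np.array([dt, 0.0])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([q_angle, q_bias])
    R = np.array([[r_acc]])
    out = []
    for w, z in zip(gyro, acc_angle):
        x = F @ x + B * w              # predict using the gyro rate
        P = F @ P @ F.T + Q
        y = z - H @ x                  # innovation from the accelerometer angle
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```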

  8. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply the full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in the complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured from cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. The numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of a camera array of 16 cameras was used to reconstruct the four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
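
    A compact sketch of the adaptive-threshold step described above is given below: candidate thresholds are scanned, the reprojected reconstruction is correlated with the camera images, a cubic is fitted to the correlation curve, and its maximizer is taken as the optimal threshold. The reconstruct/reproject routines are placeholders (assumptions), not the actual SAPIV implementation.

```python
import numpy as np

def correlation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def optimal_threshold(camera_imgs, reconstruct, reproject, thresholds):
    scores = []
    for t in thresholds:
        volume = reconstruct(camera_imgs, t)              # filter out unfocused particles
        proj = [reproject(volume, k) for k in range(len(camera_imgs))]
        scores.append(np.mean([correlation(p, c) for p, c in zip(proj, camera_imgs)]))
    coeffs = np.polyfit(thresholds, scores, 3)            # cubic fit of the correlation curve
    fine = np.linspace(min(thresholds), max(thresholds), 1000)
    return fine[np.argmax(np.polyval(coeffs, fine))]      # threshold maximizing correlation
```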

  9. Real-time management of an urban groundwater well field threatened by pollution.

    PubMed

    Bauser, Gero; Franssen, Harrie-Jan Hendricks; Kaiser, Hans-Peter; Kuhlmann, Ulrich; Stauffer, Fritz; Kinzelbach, Wolfgang

    2010-09-01

    We present an optimal real-time control approach for the management of drinking water well fields. The methodology is applied to the Hardhof field in the city of Zurich, Switzerland, which is threatened by diffuse pollution. The risk of attracting pollutants is higher if the pumping rate is increased and can be reduced by increasing artificial recharge (AR) or by adaptive allocation of the AR. The method was first tested in offline simulations with a three-dimensional finite element variably saturated subsurface flow model for the period January 2004-August 2005. The simulations revealed that (1) optimal control results were more effective than the historical control results and (2) the spatial distribution of AR should be different from the historical one. Next, the methodology was extended to a real-time control method based on the Ensemble Kalman Filter method, using 87 online groundwater head measurements, and tested at the site. The real-time control of the well field resulted in a decrease of the electrical conductivity of the water at critical measurement points which indicates a reduced inflow of water originating from contaminated sites. It can be concluded that the simulation and the application confirm the feasibility of the real-time control concept.
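
    The following is a minimal sketch of one Ensemble Kalman Filter analysis step of the kind used for assimilating online head measurements; the state dimension, ensemble size, observation operator, and noise level are illustrative assumptions, not the Hardhof configuration.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_std, rng):
    """Stochastic EnKF update. ensemble: (n_state, n_members); H: (n_obs, n_state); obs: (n_obs,)."""
    n_obs, n_members = len(obs), ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Y = H @ X                                             # observation-space anomalies
    R = (obs_std ** 2) * np.eye(n_obs)
    K = (X @ Y.T) @ np.linalg.inv(Y @ Y.T + (n_members - 1) * R)   # Kalman gain
    perturbed = obs[:, None] + obs_std * rng.standard_normal((n_obs, n_members))
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(0)
n_state, n_members, n_obs = 500, 50, 87                   # e.g., 87 head measurements
ensemble = rng.standard_normal((n_state, n_members))      # prior head ensemble (placeholder)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs)] = 1.0               # observe the first 87 state cells
obs = rng.standard_normal(n_obs)                          # placeholder measurements
analysis = enkf_update(ensemble, H, obs, obs_std=0.05, rng=rng)
```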

  10. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; DSouza, Chris

    2012-01-01

    One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  11. Recent progress in plasmonic colour filters for image sensor and multispectral applications

    NASA Astrophysics Data System (ADS)

    Pinton, Nadia; Grant, James; Choubey, Bhaskar; Cumming, David; Collins, Steve

    2016-04-01

    Using nanostructured thin metal films as colour filters offers several important advantages, in particular high tunability across the entire visible spectrum and some of the infrared region, and also compatibility with conventional CMOS processes. Since 2003, the field of plasmonic colour filters has evolved rapidly and several different designs and materials, or combinations of materials, have been proposed and studied. In this paper we present a simulation study for a single-step lithographically patterned multilayer structure able to provide competitive transmission efficiencies above 40% and, simultaneously, a FWHM of the order of 30 nm across the visible spectrum. The total thickness of the proposed filters is less than 200 nm and is constant for every wavelength, unlike e.g. resonant cavity-based filters such as Fabry-Perot that require a variable stack of several layers according to the working frequency, and their passband characteristics are entirely controlled by changing the lithographic pattern. It will also be shown that a key to obtaining narrow-band optical response lies in the dielectric environment of a nanostructure and that it is not necessary to have a symmetric structure to ensure good coupling between the SPPs at the top and bottom interfaces. Moreover, an analytical method to evaluate the periodicity, given a specific structure and a desirable working wavelength, will be proposed and its accuracy demonstrated. This method conveniently eliminates the need to optimize the design of a filter numerically, i.e. by running several time-consuming simulations with different periodicities.

  12. Optimizing dual-energy x-ray parameters for the ExacTrac clinical stereoscopic imaging system to enhance soft-tissue imaging.

    PubMed

    Bowman, Wesley A; Robar, James L; Sattarivand, Mike

    2017-03-01

    Stereoscopic x-ray image guided radiotherapy for lung tumors is often hindered by bone overlap and limited soft-tissue contrast. This study aims to evaluate the feasibility of dual-energy imaging techniques and to optimize parameters of the ExacTrac stereoscopic imaging system to enhance soft-tissue imaging for application to lung stereotactic body radiation therapy. Simulated spectra and a physical lung phantom were used to optimize filter material, thickness, tube potentials, and weighting factors to obtain bone subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number range (3-83) based on a metric defined to separate spectra of high and low-energies. Both energies used the same filter due to time constraints of imaging in the presence of respiratory motion. The lung phantom contained bone, soft tissue, and tumor mimicking materials, and it was imaged with a filter thickness in the range of (0-0.7) mm and a kVp range of (60-80) for low energy and (120,140) for high energy. Optimal dual-energy weighting factors were obtained when the bone to soft-tissue contrast-to-noise ratio (CNR) was minimized. Optimal filter thickness and tube potential were achieved by maximizing tumor-to-background CNR. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom with a spherical tumor mimicking material inserted in his lung were acquired and evaluated for bone subtraction and tumor contrast. Imaging dose was measured using the dual-energy technique with and without beam filtration and matched to that of a clinical conventional single energy technique. Tin was the material of choice for beam filtering providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-weighted image in the lung phantom was obtained using 0.2 mm tin and (140, 60) kVp pair. Dual-energy images of the Rando phantom with the tin filter had noticeable improvement in bone elimination, tumor contrast, and noise content when compared to dual-energy imaging with no filtration. The surface dose was 0.52 mGy per each stereoscopic view for both clinical single energy technique and the dual-energy technique in both cases of with and without the tin filter. Dual-energy soft-tissue imaging is feasible without additional imaging dose using the ExacTrac stereoscopic imaging system with optimized acquisition parameters and no beam filtration. Addition of a single tin filter for both the high and low energies has noticeable improvements on dual-energy imaging with optimized parameters. Clinical implementation of a dual-energy technique on ExacTrac stereoscopic imaging could improve lung tumor visibility. © 2017 American Association of Physicists in Medicine.
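
    A hedged sketch of the dual-energy soft-tissue step is shown below: a weighted log subtraction of the high- and low-kVp images, with the weight chosen to minimize the bone-to-soft-tissue CNR (i.e., cancel bone). The image arrays and region masks are assumed inputs, not the ExacTrac data.

```python
import numpy as np

def log_subtract(img_high, img_low, w):
    return np.log(img_high + 1e-9) - w * np.log(img_low + 1e-9)

def cnr(img, roi_mask, bg_mask):
    return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

def best_bone_cancelling_weight(img_high, img_low, bone_mask, soft_mask, weights):
    scores = [cnr(log_subtract(img_high, img_low, w), bone_mask, soft_mask) for w in weights]
    return weights[int(np.argmin(scores))]   # weight that best suppresses bone

# Typical use (arrays are placeholders):
# w = best_bone_cancelling_weight(I140, I60, bone_mask, soft_mask, np.linspace(0.3, 0.9, 61))
# soft_tissue_img = log_subtract(I140, I60, w)
```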

  13. Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.

  14. Dual Adaptive Filtering by Optimal Projection Applied to Filter Muscle Artifacts on EEG and Comparative Study

    PubMed Central

    Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

  15. Optical and mechanical design of the fore-optics of HARMONI

    NASA Astrophysics Data System (ADS)

    Sánchez-Capuchino, J.; Hernández, E.; Bueno, A.; Herreros, J. M.; Thatte, N.; Bryson, I.; Clarke, F.; Tecza, M.

    2014-07-01

    HARMONI is a visible and near-infrared (0.47μm to 2.5μm) integral field spectrometer providing the E-ELT's core spectroscopic capability. It will provide ~32000 simultaneous spectra of a rectangular field of view at four foreseen spatial sampling (spaxel) scales. The HARMONI fore-optics re-formats the native telescope plate scale to suitable values for the downstream instrument optics. This telecentric adaptation includes anamorphic magnification of the plate scale to optimize the performance of the IFU, which contains the image slicer, and the four spectrographs. In addition, it provides an image of the telescope pupil to assemble a cold stop shared among all the scales, allowing efficient suppression of the thermal background. A pupil imaging unit also re-images the pupil cold stop onto the image slicer to check the relative alignment between the E-ELT and HARMONI pupils. The scale changer will also host the filter wheel with the long-pass filters to select the wavelength range. The main rationale behind the HARMONI fore-optics, together with its current optical and mechanical design, is described in this contribution.

  16. Deep neural networks to enable real-time multimessenger astrophysics

    NASA Astrophysics Data System (ADS)

    George, Daniel; Huerta, E. A.

    2018-02-01

    Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering, a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, which are designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering using whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering, while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
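
    For reference, the matched filter that Deep Filtering is benchmarked against can be sketched as below: for whitened data in Gaussian noise, correlate with a unit-norm template and threshold the peak of the resulting SNR series. The template and data here are synthetic placeholders, not LIGO waveforms.

```python
import numpy as np

def matched_filter_snr(data, template):
    t = template / np.sqrt(np.sum(template ** 2))        # unit-norm (whitened) template
    return np.correlate(data, t, mode="valid")            # sliding inner product = SNR series

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 0.05 * np.arange(128)) * np.hanning(128)
data = rng.standard_normal(4096)                          # unit-variance white noise
data[2000:2128] += 0.8 * template                         # injected signal
snr = matched_filter_snr(data, template)
detected = snr.max() > 5.0                                # threshold set by false-alarm rate
```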

  17. Anti-aliasing Wiener filtering for wave-front reconstruction in the spatial-frequency domain for high-order astronomical adaptive-optics systems.

    PubMed

    Correia, Carlos M; Teixeira, Joel

    2014-12-01

    Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement noise, and aliasing propagation coefficients as a function of the system order, and compare them to classical estimates using least-squares filters. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filters, whereas the noise propagation is around 80%. Contrast improvements of factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be used in combination with optical spatial filters deployed before image formation actually takes place.
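
    A toy sketch of a spatial-frequency-domain Wiener reconstructor follows: per Fourier mode, the gain S_phi H* / (|H|^2 S_phi + S_noise) is applied to the sensor data. The transfer function, phase spectrum, and noise level are illustrative models, not the paper's Shack-Hartmann forward operators.

```python
import numpy as np

n = 64
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
kx, ky = np.meshgrid(fx, fy)
k2 = kx ** 2 + ky ** 2
k2[0, 0] = k2[0, 1]                      # avoid the zero-frequency singularity

S_phi = k2 ** (-11.0 / 6.0)              # Kolmogorov-like phase power spectrum (assumption)
H = 2j * np.pi * (kx + 1j * ky)          # toy gradient-sensor transfer function (assumption)
S_noise = 1e-2                           # flat measurement-noise spectrum (assumption)

W = S_phi * np.conj(H) / (np.abs(H) ** 2 * S_phi + S_noise)   # per-mode Wiener gain

def reconstruct(measurement):
    """Apply the Wiener reconstructor to one n x n sensor frame."""
    return np.real(np.fft.ifft2(W * np.fft.fft2(measurement)))
```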

  18. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors

    PubMed Central

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-01-01

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper, an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and a night-sky experiment are performed to validate the efficiency and reliability of the proposed method. PMID:28825684

  19. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.

    PubMed

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-08-21

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper, an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and a night-sky experiment are performed to validate the efficiency and reliability of the proposed method.

  20. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit.

    PubMed

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-29

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.

  1. Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit

    NASA Astrophysics Data System (ADS)

    Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping

    2006-05-01

    Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the premise (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310nm signal from the down-stream 1490nm and 1555nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.

  2. Analytically solvable chaotic oscillator based on a first-order filter.

    PubMed

    Corron, Ned J; Cooper, Roy M; Blakely, Jonathan N

    2016-02-01

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.

  3. Analytically solvable chaotic oscillator based on a first-order filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.

    2016-02-15

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.

  4. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.

  5. Design of efficient circularly symmetric two-dimensional variable digital FIR filters.

    PubMed

    Bindima, Thayyil; Elias, Elizabeth

    2016-05-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.

  6. Design of efficient circularly symmetric two-dimensional variable digital FIR filters

    PubMed Central

    Bindima, Thayyil; Elias, Elizabeth

    2016-01-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739

  7. Entropic uncertainty relations in the Heisenberg XXZ model and its controlling via filtering operations

    NASA Astrophysics Data System (ADS)

    Ming, Fei; Wang, Dong; Shi, Wei-Nan; Huang, Ai-Jun; Sun, Wen-Yang; Ye, Liu

    2018-04-01

    The uncertainty principle is recognized as an elementary ingredient of quantum theory and sets up a significant bound on predicting the outcomes of measurements of a pair of incompatible observables. In this work, we develop dynamical features of quantum memory-assisted entropic uncertainty relations (QMA-EUR) in a two-qubit Heisenberg XXZ spin chain with an inhomogeneous magnetic field. We specifically derive the dynamical evolutions of the entropic uncertainty with respect to the measurement in the Heisenberg XXZ model when spin A is initially correlated with quantum memory B. It has been found that a larger coupling strength J of the ferromagnetic (J < 0) and anti-ferromagnetic (J > 0) chains can effectively reduce the measurement uncertainty. Besides, it turns out that higher temperature can induce inflation of the uncertainty because the thermal entanglement becomes relatively weak in this scenario, and there exists a distinct dynamical behavior of the uncertainty when an inhomogeneous magnetic field emerges. With growing magnetic field |B|, the variation of the entropic uncertainty becomes non-monotonic. Meanwhile, we compare several existing optimized bounds with the initial bound proposed by Berta et al. and conclude that Adabi et al.'s result is optimal. Moreover, we also investigate the mixedness of the system of interest, which is closely associated with the uncertainty. Remarkably, we put forward a possible physical interpretation to explain the evolutionary phenomenon of the uncertainty. Finally, we take advantage of a local filtering operation to steer the magnitude of the uncertainty. Therefore, our explorations may shed light on the entropic uncertainty under the Heisenberg XXZ model and hence be of importance to quantum precision measurement in solid-state-based quantum information processing.

  8. A cost-effective system for in-situ geological arsenic adsorption from groundwater.

    PubMed

    Shan, Huimei; Ma, Teng; Wang, Yanxin; Zhao, Jie; Han, Hongyin; Deng, Yamin; He, Xin; Dong, Yihui

    2013-11-01

    An effective and low-cost in-situ geological filtration system was developed to treat arsenic-contaminated groundwater in remote rural areas. Hangjinhouqi in western Hetao Plain of Inner Mongolia, China, where groundwater contains a high arsenic concentration, was selected as the study area. Fe-mineral and limestone widely distributed in the study area were used as filter materials. Batch and column experiments as well as field tests were performed to determine optimal filtration parameters and to evaluate the effectiveness of the technology for arsenic removal under different hydrogeochemical conditions. A mixture containing natural Fe-mineral (hematite and goethite) and limestone at a mass ratio of 2:1 was found to be the most effective for arsenic removal. The results indicated that Fe-mineral in the mixture played a major role for arsenic removal. Meanwhile, limestone buffered groundwater pH to be conducive for the optimal arsenic removal. As(III) adsorption and oxidation by iron mineral, and the formation of Ca-As(V) precipitation with Ca contributed from limestone dissolution were likely mechanisms leading to the As removal. Field demonstrations revealed that a geological filter bed filled with the proposed mineral mixture reduced groundwater arsenic concentration from 400 μg/L to below 10 μg/L. The filtration system was continuously operated for a total volume of 365,000L, which is sufficient for drinking water supplying a rural household of 5 persons for 5 years at a rate of 40 L per person per day. © 2013.

  9. The Use of Daily Geodetic UT1 and LOD Data in the Optimal Estimation of UT1 and LOD With the JPL Kalman Earth Orientation Filter

    NASA Technical Reports Server (NTRS)

    Freedman, A. P.; Steppe, J. A.

    1995-01-01

    The Jet Propulsion Laboratory Kalman Earth Orientation Filter (KEOF) uses several of the Earth rotation data sets available to generate optimally interpolated UT1 and LOD series to support spacecraft navigation. This paper compares use of various data sets within KEOF.

  10. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
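
    A small sketch of the thresholding step described above: pick the anomaly-count threshold from the cdf of the background homogeneous Poisson field, then flag transect cells that exceed it before kriging the exceedance indicator. The background rate and counts are illustrative values.

```python
import numpy as np
from scipy.stats import poisson

background_rate = 2.0                          # expected anomalies per cell (assumption)
alpha = 0.95                                   # cdf level defining "high density" (assumption)
threshold = poisson.ppf(alpha, background_rate)

counts = np.array([1, 0, 3, 7, 2, 9, 1])       # anomaly counts along a strip transect
indicator = (counts > threshold).astype(int)   # 1 = candidate high-density cell for indicator kriging
```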

  11. Optimality based repetitive controller design for track-following servo system of optical disk drives.

    PubMed

    Chen, Wentao; Zhang, Weidong

    2009-10-01

    In an optical disk drive servo system, to attenuate the external periodic disturbances induced by inevitable disk eccentricity, repetitive control has been used successfully. The performance of a repetitive controller greatly depends on the bandwidth of the low-pass filter included in the repetitive controller. However, owing to the plant uncertainty and system stability, it is difficult to maximize the bandwidth of the low-pass filter. In this paper, we propose an optimality based repetitive controller design method for the track-following servo system with norm-bounded uncertainties. By embedding a lead compensator in the repetitive controller, both the system gain at periodic signal's harmonics and the bandwidth of the low-pass filter are greatly increased. The optimal values of the repetitive controller's parameters are obtained by solving two optimization problems. Simulation and experimental results are provided to illustrate the effectiveness of the proposed method.

  12. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
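
    As a hedged illustration of the smoother combination step, the snippet below fuses forward-filter and backward-filter estimates by covariance weighting; for equal, independent covariances the smoothed covariance is roughly halved, consistent with the description above.

```python
import numpy as np

def combine_forward_backward(x_f, P_f, x_b, P_b):
    """Optimally fuse forward and backward Kalman estimates of the same state."""
    P_f_inv = np.linalg.inv(P_f)
    P_b_inv = np.linalg.inv(P_b)
    P_s = np.linalg.inv(P_f_inv + P_b_inv)              # smoothed covariance
    x_s = P_s @ (P_f_inv @ x_f + P_b_inv @ x_b)         # smoothed state
    return x_s, P_s

# If both filters report the same covariance P, the smoothed covariance is P / 2.
```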

  13. Optimizing focal plane electric field estimation for detecting exoplanets

    NASA Astrophysics Data System (ADS)

    Groff, T.; Kasdin, N. J.; Riggs, A. J. E.

    Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission we demonstrate an estimation scheme using a discrete time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress including a bias estimate into the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent to the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise between the planets and speckles improves. Having established a purely focal plane based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feedforward a time update to the focal plane estimate to improve robustness to time varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.

  14. High resolution quantitative phase imaging of live cells with constrained optimization approach

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2016-03-01

    Quantitative phase imaging (QPI) aims at studying weakly scattering and absorbing biological specimens with subwavelength accuracy without any external staining mechanisms. Use of a reference beam at an angle is one of the necessary criteria for recording high resolution holograms in most of the interferometric methods used for quantitative phase imaging. The spatial separation of the dc and twin images is decided by the reference beam angle, and the Fourier-filtered reconstructed image will have very poor resolution if the hologram is recorded below a minimum reference angle condition. However, it is always inconvenient to have a large reference beam angle while performing high resolution microscopy of live cells and biological specimens with nanometric features. In this paper, we treat reconstruction of digital holographic microscopy images as a constrained optimization problem with a smoothness constraint in order to recover only the complex object field in the hologram plane even with overlapping dc and twin image terms. We solve this optimization problem iteratively with a gradient descent approach, and the smoothness constraint is implemented by spatial averaging with an appropriate window size. This approach gives excellent high resolution image recovery compared to Fourier filtering while keeping a very small reference angle. We demonstrate this approach on digital holographic microscopy of live cells by recovering the quantitative phase of live cells from a hologram recorded with nearly zero reference angle.

  15. Contrast-enhanced digital mammography (CEDM): imaging modeling, computer simulations, and phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Jing, Zhenxue; Smith, Andrew

    2005-04-01

    Contrast enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before/after the administration of contrast agents, may provide physicians with critical physiologic and morphologic information about breast lesions to determine their malignancy. This paper proposes to combine the kinetic analysis (KA) of the contrast agent uptake/washout process with dual-energy (DE) contrast enhancement to formulate a hybrid contrast enhanced breast-imaging framework. The quantitative characteristics of materials and imaging components in the x-ray imaging chain, including x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-to-noise ratio (CNR) of iodinated lesions and mean absorbed glandular dose were estimated mathematically. The x-ray technique optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast densities, and detectable contrast agent concentration levels in terms of detection efficiency (CNR²/dose). A phantom study was performed on a modified Selenia full field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.

  16. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  17. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    NASA Astrophysics Data System (ADS)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
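
    The sketch below illustrates the flavor of Neyman-Pearson detection in Poisson noise discussed above: each trial position is scored by correlating the count image with ln(1 + S/B), where S is the expected source counts (PSF template times flux) and B the background rate. All values and array sizes are illustrative placeholders, not the published implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def poisson_detection_map(counts, psf, flux, background):
    kernel = np.log1p(flux * psf / background)            # ln(1 + S/B) matched kernel
    return fftconvolve(counts, kernel[::-1, ::-1], mode="same")   # cross-correlation

rng = np.random.default_rng(0)
x = np.arange(-7, 8)
psf = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()                                          # unit-flux point-spread function
background = 0.1                                          # counts per pixel (assumption)
scene = np.full((128, 128), background)
scene[60:75, 60:75] += 20.0 * psf                         # one faint injected source
counts = rng.poisson(scene)

score = poisson_detection_map(counts, psf, flux=20.0, background=background)
# Detections are local maxima of `score` above a threshold fixed by the desired
# false-alarm probability.
```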

  18. Precision of proportion estimation with binary compressed Raman spectrum.

    PubMed

    Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric

    2018-01-01

    The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramer-Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer-Rao bound). Evolutions of the Cramer-Rao bound are analyzed when the measuring times are optimized or when the considered proportion for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this considered proportion are also analyzed for the binary filter synthesis.

  19. Nonlinear optimal filter technique for analyzing energy depositions in TES sensors driven into saturation

    DOE PAGES

    Shank, B.; Yen, J. J.; Cabrera, B.; ...

    2014-11-04

    We present a detailed thermal and electrical model of superconducting transition edge sensors (TESs) connected to quasiparticle (qp) traps, such as the W TESs connected to Al qp traps used for CDMS (Cryogenic Dark Matter Search) Ge and Si detectors. We show that this improved model, together with a straightforward time-domain optimal filter, can be used to analyze pulses well into the nonlinear saturation region and reconstruct absorbed energies with optimal energy resolution.
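
    In its simplest linear form, the time-domain optimal filter is a noise-weighted least-squares fit of a pulse template to the measured trace, â = (sᵀC⁻¹d)/(sᵀC⁻¹s). A minimal sketch of that linear estimator (the nonlinear saturation modeling described in the paper is not reproduced here); the template and noise covariance are assumed known:

```python
import numpy as np

def optimal_filter_amplitude(trace, template, noise_cov):
    """Noise-weighted amplitude estimate a_hat = (s^T C^-1 d) / (s^T C^-1 s)."""
    c_inv_s = np.linalg.solve(noise_cov, template)
    amplitude = trace @ c_inv_s / (template @ c_inv_s)
    variance = 1.0 / (template @ c_inv_s)   # estimator variance for Gaussian noise
    return amplitude, variance

# Hypothetical usage with a simple two-exponential pulse template and white noise.
n = 256
t = np.arange(n)
template = np.exp(-t / 40.0) - np.exp(-t / 8.0)
noise_cov = 0.01 * np.eye(n)
trace = 3.0 * template + np.random.default_rng(2).normal(0, 0.1, n)
a_hat, var = optimal_filter_amplitude(trace, template, noise_cov)
```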

  20. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust.
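
    The SSA denoising step separates the series into a low-rank part and a residual by embedding it in a trajectory (Hankel) matrix, taking an SVD, and reconstructing from the leading components. A compact sketch of that step alone, with an illustrative window length and component count (the KELM and GSA stages are not shown):

```python
import numpy as np

def ssa_denoise(series, window, n_components):
    """Reconstruct a time series from the leading SSA components."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: each column is a lagged window of the series.
    traj = np.column_stack([series[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    low_rank = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Anti-diagonal averaging back to a 1-D series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        recon[col:col + window] += low_rank[:, col]
        counts[col:col + window] += 1.0
    return recon / counts

# Hypothetical usage on a noisy traffic-flow-like signal.
t = np.linspace(0, 6 * np.pi, 300)
flow = 50 + 10 * np.sin(t) + np.random.default_rng(3).normal(0, 2, t.size)
smoothed = ssa_denoise(flow, window=40, n_components=3)
```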

  1. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust. PMID:27551829

  2. Lightweight Filter Architecture for Energy Efficient Mobile Vehicle Localization Based on a Distributed Acoustic Sensor Network

    PubMed Central

    Kim, Keonwook

    2013-01-01

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably. PMID:23979482
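
    The digital stage of the cascade described above is a first-order exponential smoothing filter, whose single parameter alpha trades responsiveness against noise rejection. A minimal sketch with an illustrative alpha and synthetic envelope (not the optimized values from the paper):

```python
import numpy as np

def exponential_smoothing(x, alpha):
    """First-order exponential smoothing: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1.0 - alpha) * y[n - 1]
    return y

# Hypothetical usage on a noisy acoustic envelope from a passing vehicle.
rng = np.random.default_rng(4)
n = np.arange(500)
envelope = np.exp(-0.5 * ((n - 250) / 80.0) ** 2) + rng.normal(0, 0.05, n.size)
smoothed = exponential_smoothing(envelope, alpha=0.1)
```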

  3. Observations of vector magnetic fields with a magneto-optic filter

    NASA Technical Reports Server (NTRS)

    Cacciani, Alessandro; Varsik, John; Zirin, Harold

    1990-01-01

    The use of the magneto-optic filter to observe solar magnetic fields in the potassium line at 7699 A is described. The filter has been used in the Big Bear videomagnetograph since October 23. It gives a high sensitivity and dynamic range for longitudinal magnetic fields and enables measurement of transverse magnetic fields using the sigma component. Examples of the observations are presented.

  4. Design and fabrication of cascaded dichromate gelatin holographic filters for spectrum-splitting PV systems

    NASA Astrophysics Data System (ADS)

    Wu, Yuechen; Chrysler, Benjamin; Kostuk, Raymond K.

    2018-01-01

    The technique of designing, optimizing, and fabricating broadband volume transmission holograms using dichromate gelatin (DCG) is summarized for solar spectrum-splitting applications. The spectrum-splitting photovoltaic (PV) system uses a series of single-bandgap PV cells that have different spectral conversion efficiency properties to more fully utilize the solar spectrum. In such a system, one or more high-performance optical filters are usually required to split the solar spectrum and efficiently send them to the corresponding PV cells. An ideal spectral filter should have a rectangular shape with sharp transition wavelengths. A methodology of designing and modeling a transmission DCG hologram using coupled wave analysis for different PV bandgap combinations is described. To achieve a broad diffraction bandwidth and sharp cutoff wavelength, a cascaded structure of multiple thick holograms is described. A search algorithm is then developed to optimize both single- and two-layer cascaded holographic spectrum-splitting elements for the best bandgap combinations of two- and three-junction spectrum-splitting photovoltaic (SSPV) systems illuminated under the AM1.5 solar spectrum. The power conversion efficiencies of the optimized systems are found to be 42.56% and 48.41%, respectively, using the detailed balance method, and show an improvement compared with a tandem multijunction system. A fabrication method for cascaded DCG holographic filters is also described and used to prototype the optimized filter for the three-junction SSPV system.

  5. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
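
    At the core of such on-board estimators is the standard discrete Kalman filter predict/update recursion; the tuner-selection methodology concerns how the reduced tuning vector is chosen and is not reproduced here. A generic sketch of the recursion with hypothetical system matrices:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical 2-state example with a single sensor measurement.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([0.5]), F=F, H=H, Q=Q, R=R)
```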

  6. Correlation of Electric Field and Critical Design Parameters for Ferroelectric Tunable Microwave Filters

    NASA Technical Reports Server (NTRS)

    Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy

    2000-01-01

    The correlation of electric field and critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin film based tunable microstrip filters for room temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 15 and 5 MHz/cm/kV. The loss parameter varied for different bias configurations used for electrically tuning the filter. The loss parameter varied from 0.05 to 0.01 dB/cm/kV at room temperature.

  7. Alternative methods to smooth the Earth's gravity field

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1981-01-01

    Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; and therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.

  8. Spectral and Wavefront Error Performance of WFIRST/AFTA Prototype Filters

    NASA Technical Reports Server (NTRS)

    Quijada, Manuel; Seide, Laurie; Marx, Cathy; Pasquale, Bert; McMann, Joseph; Hagopian, John; Dominguez, Margaret; Gong, Qian; Morey, Peter

    2016-01-01

    The Cycle 5 design baseline for the Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Assets (WFIRST/AFTA) instrument includes a single wide-field channel (WFC) instrument for both imaging and slit-less spectroscopy. The only routinely moving part during scientific observations for this wide-field channel is the element wheel (EW) assembly. This filter-wheel assembly will have 8 positions that will be populated with 6 bandpass filters, a blank position, and a Grism that will consist of a three-element assembly to disperse the full field with an undeviated central wavelength for galaxy redshift surveys. All filter elements in the EW assembly will be made out of fused silica substrates (110 mm diameter) that will have the appropriate bandpass coatings according to the filter designations (Z087, Y106, J129, H158, F184, W149 and Grism). This paper presents and discusses the performance (including spectral transmission and reflected/transmitted wavefront error measurements) of a subset of bandpass filter coating prototypes that are based on the WFC instrument filter complement. The bandpass coating prototypes that are tested in this effort correspond to the Z087, W149, and Grism filter elements. These filter coatings have been procured from three different vendors to assess the most challenging aspects in terms of the in-band throughput, out of band rejection (including the cut-on and cutoff slopes), and the impact the wavefront error distortions of these filter coatings will have on the imaging performance of the wide-field channel in the WFIRST/AFTA observatory.

  9. Effect of protective filters on fire fighter respiratory health: field validation during prescribed burns.

    PubMed

    De Vos, Annemarie J B M; Cook, Angus; Devine, Brian; Thompson, Philip J; Weinstein, Philip

    2009-01-01

    Bushfire smoke contains a range of air toxics. To prevent inhalation of these toxics, fire fighters use respiratory equipment. Yet, little is known about the effectiveness of the equipment on the fire ground. Experimental trials in a smoke chamber demonstrated that the particulate/organic vapor/formaldehyde (POVF) filter performed best under simulated conditions. This article reports on the field validation trials during prescribed burns in Western Australia. Sixty-seven career fire fighters from the Fire and Emergency Services Authority of Western Australia were allocated one of the three types of filters. Spirometry, oximetry, self-reported symptom, and personal air sampling data were collected before, during and after exposure to bushfire smoke from prescribed burns. Declines in FEV(1) and SaO(2) were demonstrated after 60 and 120 min of exposure. A significantly higher number of participants in the P filter group reported increases in respiratory symptoms after the exposure. Air sampling inside the respirators demonstrated formaldehyde levels significantly higher in the P filter group compared to the POV and the POVF filter groups. The field validation trials during prescribed burns supported the findings from the controlled exposure trials in the smoke chamber. Testing the effectiveness of three different types of filters under bushfire smoke conditions in the field for up to 2 hr demonstrated that the P filter is ineffective in filtering out respiratory irritants. The POV and the POVF filters appear to be equally effective after 2 hr of bushfire smoke exposure in the field.

  10. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's exhibit arbitrarily constrained operating characteristics. The pattern recognition filters designed with the assumption that the SLM's have ideal operating characteristic may not behave as expected when implemented on the DMD or LCTV SLM's. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then using this algorithm a new approach for the design of a SLM constrained distortion invariant filter in the presence of input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM's operating characteristics and the correlation responses are compared. The distortion tolerance and the false class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.
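
    In the frequency domain the unconstrained MACE filter has the closed form h = D⁻¹X(X⁺D⁻¹X)⁻¹u, where X stacks the training-image spectra, D is the diagonal average power spectrum, and u holds the desired correlation-peak values; the SLM-constrained iterations developed in the report build on top of this. A hedged numpy sketch of the unconstrained solution only, with random stand-in training images:

```python
import numpy as np

def mace_filter(training_images, peak_values=None):
    """Unconstrained MACE filter h = D^-1 X (X^+ D^-1 X)^-1 u (frequency domain)."""
    n_img = len(training_images)
    rows, cols = training_images[0].shape
    X = np.column_stack([np.fft.fft2(img).ravel() for img in training_images])
    d = np.mean(np.abs(X) ** 2, axis=1)              # diagonal of D
    u = np.ones(n_img) if peak_values is None else np.asarray(peak_values)
    Xd = X / d[:, None]                               # D^-1 X
    h = Xd @ np.linalg.solve(X.conj().T @ Xd, u)      # filter spectrum (flattened)
    return h.reshape(rows, cols)

# Hypothetical usage with two random "training" images.
rng = np.random.default_rng(5)
imgs = [rng.normal(size=(32, 32)) for _ in range(2)]
H = mace_filter(imgs)
```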

  11. Demonstration of differential phase-shift keying demodulation at 10 Gbit/s with optimal fiber Bragg grating filters.

    PubMed

    Gatti, Davide; Galzerano, Gianluca; Laporta, Paolo; Longhi, Stefano; Janner, Davide; Guglierame, Andrea; Belmonte, Michele

    2008-07-01

    Optimal demodulation of differential phase-shift keying signals at 10 Gbit/s is experimentally demonstrated using a specially designed structured fiber Bragg grating composed of Fabry-Perot coupled cavities. Bit-error-rate measurements show that, as compared with a conventional Gaussian-shaped filter, our demodulator gives an approximately 2.8 dB performance improvement.

  12. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  13. Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1992-01-01

    Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.

  14. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
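
    Richardson–Lucy deconvolution, the building block of the two filtering steps, is itself a short multiplicative update. A generic sketch of the plain RL iteration with an illustrative Gaussian PSF (not the authors' full 2D-SIM reconstruction pipeline):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=20, eps=1e-12):
    """Plain Richardson-Lucy deconvolution with a known PSF."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Hypothetical usage with a small Gaussian PSF and a synthetic blurred image.
x = np.arange(-6, 7)
psf = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
psf /= psf.sum()
blurred_image = fftconvolve(np.random.default_rng(6).random((64, 64)), psf, mode="same")
restored = richardson_lucy(blurred_image, psf, iterations=15)
```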

  15. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  16. Fixed-frequency and Frequency-agile (Au, HTS) Microstrip Bandstop Filters for L-band Applications

    NASA Technical Reports Server (NTRS)

    Saenz, Eileen M.; Subramanyam, Guru; VanKeuls, Fred W.; Chen, Chonglin; Miranda, Felix A.

    2001-01-01

    In this work, we report on the performance of a highly selective, compact 1.83 x 2.08 cm² (approx. 0.72 x 0.82 in²) microstrip line bandstop filter of YBa2Cu3O(7-δ) (YBCO) on LaAlO3 (LAO) substrate. The filter is designed for a center frequency of 1.623 GHz, a bandwidth at 3 dB from the reference baseline of less than 5.15 MHz, and a bandstop rejection of 30 dB or better. The design and optimization of the filter was performed using Zeland's IE3D circuit simulator. The optimized design was used to fabricate gold (Au) and High-Temperature Superconductor (HTS) versions of the filter. We have also studied an electronically tunable version of the same filter. Tunability of the bandstop characteristics is achieved by the integration of a thin film conductor (Au or HTS) and the nonlinear ferroelectric dielectric SrTiO3 in a conductor/ferroelectric/dielectric modified microstrip configuration. The performance of these filters and comparison with the simulated data will be presented.

  17. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observation that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived from a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results to the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. Results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for a high order of approximation, the state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.

  18. Optimizing spatial patterns with sparse filter bands for motor-imagery based brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-11-30

    Common spatial pattern (CSP) has been most popularly applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) applications. Successful application of CSP depends on the filter band selection to a large degree. However, the most proper band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is implemented on the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV dataset IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP helps improve the classification performance of MI. The spatial patterns optimized by SFBCSP give overall better MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
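
    The per-band CSP step used by SFBCSP reduces to a generalized eigendecomposition of the two class-covariance matrices; the sparse-regression band selection is not shown. A minimal sketch of CSP filter estimation and log-variance feature extraction, with synthetic trials standing in for band-pass-filtered EEG:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns from two classes of (trials, channels, samples) data."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]   # most discriminative filters
    return vecs[:, picks].T

def csp_features(trial, W):
    """Log-variance features of the spatially filtered signals."""
    var = (W @ trial).var(axis=1)
    return np.log(var / var.sum())

# Hypothetical usage: 20 trials per class, 8 channels, 250 samples each.
rng = np.random.default_rng(7)
class_a = rng.normal(size=(20, 8, 250))
class_b = rng.normal(size=(20, 8, 250))
W = csp_filters(class_a, class_b)
feats = np.array([csp_features(t, W) for t in np.vstack([class_a, class_b])])
```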

  19. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.

  20. Progress on applications of high temperature superconducting microwave filters

    NASA Astrophysics Data System (ADS)

    Chunguang, Li; Xu, Wang; Jia, Wang; Liang, Sun; Yusheng, He

    2017-07-01

    In the past two decades, various kinds of high performance high temperature superconducting (HTS) filters have been constructed and the HTS filters and their front-end subsystems have been successfully applied in many fields. The HTS filters with small insertion loss, narrow bandwidth, flat in-band group delay, deep out-of-band rejection, and steep skirt slope are reviewed. Novel HTS filter design technologies, including those in high power handling filters, multiband filters and frequency tunable filters, are reviewed, as well as the all-HTS integrated front-end receivers. The successful applications to various civilian fields, such as mobile communication, radar, deep space detection, and satellite technology, are also reviewed.

  1. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on the a priori knowledge of the signal and noise statistics render them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
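
    The LMS update mentioned above adjusts the filter weights from the instantaneous error, w ← w + μ·e·x, so no prior noise statistics are required. A minimal adaptive noise-cancellation sketch with illustrative step size, filter length, and signals:

```python
import numpy as np

def lms_filter(reference, primary, n_taps=16, mu=0.01):
    """LMS adaptive noise canceller: estimates the noise in `primary` from `reference`."""
    w = np.zeros(n_taps)
    output = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]    # most recent reference samples
        y = w @ x                            # noise estimate
        e = primary[n] - y                   # error = cleaned signal sample
        w += mu * e * x                      # LMS weight update
        output[n] = e
    return output

# Hypothetical usage: a slow biological signal corrupted by a sinusoidal interference.
t = np.arange(2000)
interference = np.sin(2 * np.pi * 0.05 * t)
signal = 0.5 * np.sin(2 * np.pi * 0.002 * t)
cleaned = lms_filter(reference=interference, primary=signal + interference)
```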

  2. Eccentric correction for off-axis vision in central visual field loss.

    PubMed

    Gustafsson, Jörgen; Unsbo, Peter

    2003-07-01

    Subjects with absolute central visual field loss use eccentric fixation and magnifying devices to utilize their residual vision. This preliminary study investigated the importance of an accurate eccentric correction of off-axis refractive errors to optimize the residual visual function for these subjects. Photorefraction using the PowerRefractor instrument was used to evaluate the ametropia in eccentric fixation angles. Methods were adapted for measuring visual acuity outside the macula using filtered optotypes from high-pass resolution perimetry. Optical corrections were implemented, and the visual function of subjects with central visual field loss was measured with and without eccentric correction. Of the seven cases reported, five experienced an improvement in visual function in their preferred retinal locus with eccentric refraction. The main result was that optical correction for better image quality on the peripheral retina is important for the vision of subjects with central visual field loss, objectively as well as subjectively.

  3. Microwave platform as a valuable tool for characterization of nanophotonic devices

    PubMed Central

    Shishkin, Ivan; Baranov, Dmitry; Slobozhanyuk, Alexey; Filonov, Dmitry; Lukashenko, Stanislav; Samusev, Anton; Belov, Pavel

    2016-01-01

    The rich potential of microwave experiments for the characterization and optimization of optical devices is discussed. While the control of light fields, together with their spatial mapping at the nanoscale, is still laborious and not always clear, the microwave setup makes it possible to measure both the amplitude and phase of initially determined magnetic and electric field components without significant perturbation of the near-field. As an example, the electromagnetic properties of an add-drop filter, which has become a well-known workhorse of photonics, are experimentally studied with the aid of transmission spectroscopy measurements in the optical and microwave ranges and through direct mapping of the near fields at microwave frequencies. We demonstrate that the microwave experiments provide a unique platform for comprehensive studies of the electromagnetic properties of micro- and nanophotonic devices, and make it possible to obtain data that are hardly acquirable by conventional optical methods. PMID:27759058

  4. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  5. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  6. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.

  7. EDITORIAL: Focus on Quantum Control

    NASA Astrophysics Data System (ADS)

    Rabitz, Herschel

    2009-10-01

    Control of quantum phenomena has grown from a dream to a burgeoning field encompassing wide-ranging experimental and theoretical activities. Theoretical research in this area primarily concerns identification of the principles for controlling quantum phenomena, the exploration of new experimental applications and the development of associated operational algorithms to guide such experiments. Recent experiments with adaptive feedback control span many applications including selective excitation, wave packet engineering and control in the presence of complex environments. Practical procedures are also being developed to execute real-time feedback control considering the resultant back action on the quantum system. This focus issue includes papers covering many of the latest advances in the field. Focus on Quantum Control Contents Control of quantum phenomena: past, present and future Constantin Brif, Raj Chakrabarti and Herschel Rabitz Biologically inspired molecular machines driven by light. Optimal control of a unidirectional rotor Guillermo Pérez-Hernández, Adam Pelzer, Leticia González and Tamar Seideman Simulating quantum search algorithm using vibronic states of I2 manipulated by optimally designed gate pulses Yukiyoshi Ohtsuki Efficient coherent control by sequences of pulses of finite duration Götz S Uhrig and Stefano Pasini Control by decoherence: weak field control of an excited state objective Gil Katz, Mark A Ratner and Ronnie Kosloff Multi-qubit compensation sequences Y Tomita, J T Merrill and K R Brown Environment-invariant measure of distance between evolutions of an open quantum system Matthew D Grace, Jason Dominy, Robert L Kosut, Constantin Brif and Herschel Rabitz Simplified quantum process tomography M P A Branderhorst, J Nunn, I A Walmsley and R L Kosut Achieving 'perfect' molecular discrimination via coherent control and stimulated emission Stephen D Clow, Uvo C Holscher and Thomas C Weinacht A convenient method to simulate and visually represent two-photon power spectra of arbitrarily and adaptively shaped broadband laser pulses M A Montgomery and N H Damrauer Accurate and efficient implementation of the von Neumann representation for laser pulses with discrete and finite spectra Frank Dimler, Susanne Fechner, Alexander Rodenberg, Tobias Brixner and David J Tannor Coherent strong-field control of multiple states by a single chirped femtosecond laser pulse M Krug, T Bayer, M Wollenhaupt, C Sarpe-Tudoran, T Baumert, S S Ivanov and N V Vitanov Quantum-state measurement of ionic Rydberg wavepackets X Zhang and R R Jones On the paradigm of coherent control: the phase-dependent light-matter interaction in the shaping window Tiago Buckup, Jurgen Hauer and Marcus Motzkus Use of the spatial phase of a focused laser beam to yield mechanistic information about photo-induced chemical reactions V J Barge, Z Hu and R J Gordon Coherent control of multiple vibrational excitations for optimal detection S D McGrane, R J Scharff, M Greenfield and D S Moore Mode selectivity with polarization shaping in the mid-IR David B Strasfeld, Chris T Middleton and Martin T Zanni Laser-guided relativistic quantum dynamics Chengpu Liu, Markus C Kohler, Karen Z Hatsagortsyan, Carsten Muller and Christoph H Keitel Continuous quantum error correction as classical hybrid control Hideo Mabuchi Quantum filter reduction for measurement-feedback control via unsupervised manifold learning Anne E B Nielsen, Asa S Hopkins and Hideo Mabuchi Control of the temporal profile of the local electromagnetic field 
near metallic nanostructures Ilya Grigorenko and Anatoly Efimov Laser-assisted molecular orientation in gaseous media: new possibilities and applications Dmitry V Zhdanov and Victor N Zadkov Optimization of laser field-free orientation of a state-selected NO molecular sample Arnaud Rouzee, Arjan Gijsbertsen, Omair Ghafur, Ofer M Shir, Thomas Back, Steven Stolte and Marc J J Vrakking Controlling the sense of molecular rotation Sharly Fleischer, Yuri Khodorkovsky, Yehiam Prior and Ilya Sh Averbukh Optimal control of interacting particles: a multi-configuration time-dependent Hartree-Fock approach Michael Mundt and David J Tannor Exact quantum dissipative dynamics under external time-dependent driving fields Jian Xu, Rui-Xue Xu and Yi Jing Yan Pulse trains in molecular dynamics and coherent spectroscopy: a theoretical study J Voll and R de Vivie-Riedle Quantum control of electron localization in molecules driven by trains of half-cycle pulses Emil Persson, Joachim Burgdorfer and Stefanie Grafe Quantum control design by Lyapunov trajectory tracking for dipole and polarizability coupling Jean-Michel Coron, Andreea Grigoriu, Catalin Lefter and Gabriel Turinici Sliding mode control of quantum systems Daoyi Dong and Ian R Petersen Implementation of fault-tolerant quantum logic gates via optimal control R Nigmatullin and S G Schirmer Generalized filtering of laser fields in optimal control theory: application to symmetry filtering of quantum gate operations Markus Schroder and Alex Brown

  8. EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter

    PubMed Central

    Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.

    2012-01-01

    A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018

  9. Designing manufacturable filters for a 16-band plenoptic camera using differential evolution

    NASA Astrophysics Data System (ADS)

    Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert

    2017-05-01

    A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and to full hyperspectral resolution data which was previously acquired.
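
    Differential evolution only requires a bounded parameter vector and a scalar objective, so a filter set parameterized by, for example, a center and width per band can be searched directly. A hedged sketch using scipy's differential_evolution with a toy coverage/overlap objective; the modified-beta parameterization and the CS and target-detection objectives of the paper are replaced by illustrative stand-ins:

```python
import numpy as np
from scipy.optimize import differential_evolution

WAVELENGTHS = np.linspace(400.0, 1000.0, 121)   # nm, illustrative grid

def filter_bank(params, n_filters=4):
    """Build smooth band-pass curves from (center, width) pairs - a toy stand-in."""
    p = params.reshape(n_filters, 2)
    return np.array([np.exp(-0.5 * ((WAVELENGTHS - c) / w) ** 2) for c, w in p])

def objective(params):
    """Toy objective: encourage filters to cover the band while overlapping little."""
    bank = filter_bank(params)
    coverage = bank.max(axis=0).mean()
    overlap = (bank.sum(axis=0) - bank.max(axis=0)).mean()
    return overlap - coverage            # minimized by DE

bounds = [(400.0, 1000.0), (10.0, 150.0)] * 4    # (center, width) per filter
result = differential_evolution(objective, bounds, maxiter=50, seed=9)
best_filters = filter_bank(result.x)
```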

  10. Automatic x-ray image contrast enhancement based on parameter auto-optimization.

    PubMed

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan

    2017-11-01

    Insufficient image contrast associated with radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, the block size, and the clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, the CLAHE, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images that received a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in commercial clinical systems. When the proposed method is implemented in clinical systems as an automatic image processing filter, it could allow quicker and potentially more accurate treatment setup and facilitate the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
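
    The filter chain described above (noise reduction, high-pass, CLAHE) is straightforward to prototype; the automatic parameter optimization is the paper's contribution and is not reproduced here. A hedged scikit-image sketch with hand-picked parameter values used purely for illustration:

```python
import numpy as np
from skimage import exposure, filters

def enhance_setup_image(image, smooth_sigma=2.0, highpass_weight=0.7,
                        clahe_kernel=64, clip_limit=0.02):
    """Noise reduction + high-pass boost + CLAHE (illustrative parameters)."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)          # normalize to [0, 1]
    smoothed = filters.gaussian(img, sigma=smooth_sigma)      # noise reduction
    highpass = np.clip(img - highpass_weight * smoothed, 0.0, 1.0)  # suppress low frequencies
    return exposure.equalize_adapthist(highpass,
                                       kernel_size=clahe_kernel,
                                       clip_limit=clip_limit)

# Hypothetical usage on a synthetic low-contrast image.
rng = np.random.default_rng(10)
raw = rng.normal(0.5, 0.02, size=(256, 256))
enhanced = enhance_setup_image(raw)
```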

  11. Adaptive UAV Attitude Estimation Employing Unscented Kalman Filter, FOAM and Low-Cost MEMS Sensors

    PubMed Central

    de Marina, Héctor García; Espinosa, Felipe; Santos, Carlos

    2012-01-01

    Navigation employing low cost MicroElectroMechanical Systems (MEMS) sensors in Unmanned Aerial Vehicles (UAVs) is an emerging challenge. One important part of this navigation is the correct estimation of the attitude angles. Most of the existing algorithms handle the sensor readings in a fixed way, leading to large errors in different mission stages like take-off or aerobatic maneuvers. This paper presents an adaptive method to estimate these angles using off-the-shelf components. This paper introduces an Attitude Heading Reference System (AHRS) based on the Unscented Kalman Filter (UKF) using the Fast Optimal Attitude Matrix (FOAM) algorithm as the observation model. The performance of the method is assessed through simulations. Moreover, field experiments are presented using a real fixed-wing UAV. The proposed low cost solution, implemented in a microcontroller, shows a satisfactory real time performance. PMID:23012559

  12. Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting

    PubMed Central

    Ming-jun, Deng; Shi-ru, Qu

    2015-01-01

    Traffic flow is widely recognized as an important parameter for road traffic state forecasting. Fuzzy state transform and Kalman filter (KF) have been applied in this field separately. But the studies show that the former method has good performance on the trend forecasting of traffic state variation but always involves several numerical errors. The latter model is good at numerical forecasting but is deficient in expressing temporal hysteresis. This paper proposes an approach that combines the fuzzy state transform and the KF forecasting model. Considering the advantages of the two models, a weighted combination model is proposed. Minimizing the sum of squared forecasting errors is taken as the goal in dynamically optimizing the combination weight. Real detection data are used to test the efficiency. Results indicate that the method has a good performance in terms of short-term traffic forecasting. PMID:26779258
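
    Given two forecasts of the same series, the combination weight minimizing the sum of squared errors over a validation window has a simple closed form. A minimal sketch of that weighting step; the fuzzy-state-transition and Kalman forecasts themselves are represented by placeholder arrays:

```python
import numpy as np

def optimal_combination_weight(y_true, f1, f2):
    """Weight w minimizing sum((y - (w*f1 + (1-w)*f2))^2), clipped to [0, 1]."""
    num = np.sum((y_true - f2) * (f1 - f2))
    den = np.sum((f1 - f2) ** 2)
    return float(np.clip(num / den, 0.0, 1.0))

# Hypothetical usage with placeholder forecasts from the two models.
rng = np.random.default_rng(11)
y = 100 + rng.normal(0, 5, 60)                    # observed traffic flow
fuzzy_forecast = y + rng.normal(2, 6, 60)         # stand-in for the fuzzy model
kalman_forecast = y + rng.normal(-1, 4, 60)       # stand-in for the KF model
w = optimal_combination_weight(y, fuzzy_forecast, kalman_forecast)
combined = w * fuzzy_forecast + (1 - w) * kalman_forecast
```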

  13. An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.

    PubMed

    Khanian, Maryam; Feizi, Awat; Davari, Ali

    2014-01-01

    Improving the quality of medical images in pre- and post-surgery operations is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in denoising, multiscale image analysis, edge detection, and other areas of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the current paper introduces two strategies: utilizing the efficient explicit method, owing to its advantages, together with an effective software technique for solving the anisotropic diffusion filter, which is mathematically unstable; and proposing an automatic stopping criterion that, unlike other stopping criteria, takes into consideration just the input image, besides the quality of the denoised image, ease of use, and time. Various medical images are examined to confirm the claim.
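
    Anisotropic (Perona-Malik) diffusion with an explicit time-stepping scheme takes only a few lines; the automatic stopping criterion proposed in the paper is not reproduced here. A generic sketch of the explicit scheme with an illustrative iteration count and conductance parameter:

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=20, kappa=30.0, dt=0.2):
    """Explicit Perona-Malik diffusion with exponential conductance."""
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # Edge-stopping conductance g(|grad|) = exp(-(|grad|/kappa)^2).
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (d_n, d_s, d_e, d_w))
        img += dt * flux
    return img

# Hypothetical usage on a noisy synthetic image with a bright square.
rng = np.random.default_rng(12)
noisy = np.pad(np.ones((40, 40)), 12) * 100 + rng.normal(0, 10, (64, 64))
denoised = anisotropic_diffusion(noisy, n_iter=30)
```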

  14. Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting.

    PubMed

    Deng, Ming-jun; Qu, Shi-ru

    2015-01-01

    Traffic flow is widely recognized as an important parameter for road traffic state forecasting. Fuzzy state transform and Kalman filter (KF) have been applied in this field separately. But the studies show that the former method has good performance on the trend forecasting of traffic state variation but always involves several numerical errors. The latter model is good at numerical forecasting but is deficient in expressing temporal hysteresis. This paper proposes an approach that combines the fuzzy state transform and the KF forecasting model. Considering the advantages of the two models, a weighted combination model is proposed. Minimizing the sum of squared forecasting errors is taken as the goal in dynamically optimizing the combination weight. Real detection data are used to test the efficiency. Results indicate that the method has a good performance in terms of short-term traffic forecasting.

  15. Optimality problem of network topology in stocks market analysis

    NASA Astrophysics Data System (ADS)

    Djauhari, Maman Abdurachman; Gan, Siew Lee

    2015-02-01

    Since its introduction fifteen years ago, the minimal spanning tree has become an indispensable tool in econophysics. Its role is to filter the important economic information contained in a complex system of financial markets' commodities. Here we show that, in general, that tool is not optimal in terms of topological properties. Consequently, the economic interpretation of the filtered information might be misleading. To overcome that non-optimality problem, a set of criteria and a selection procedure for an optimal minimal spanning tree are developed. By using New York Stock Exchange data, the advantages of the proposed method are illustrated in terms of the power-law of degree distribution.
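
    In the usual econophysics pipeline the MST is built from the Mantegna distance d = sqrt(2(1 − ρ)) between return series. A brief sketch of that construction with scipy, using simulated returns in place of the New York Stock Exchange data; the optimality criteria proposed in the paper are not implemented here:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def correlation_mst(returns):
    """Minimal spanning tree of the Mantegna distance matrix d = sqrt(2(1 - rho))."""
    rho = np.corrcoef(returns, rowvar=False)          # stocks in columns
    dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
    mst = minimum_spanning_tree(dist).toarray()       # tree edges as a dense matrix
    rows, cols = np.nonzero(mst)
    return list(zip(rows.tolist(), cols.tolist(), mst[rows, cols].tolist()))

# Hypothetical usage with simulated daily returns for 10 stocks.
rng = np.random.default_rng(13)
returns = rng.normal(0, 0.01, size=(250, 10))
edges = correlation_mst(returns)                      # (i, j, distance) tree edges
```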

  16. A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo M

    2015-08-01

    Design and development of a linear Kalman filter to create an inertial-based inclinometer targeted to dynamic conditions of motion. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate the gravity and the body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on an average, a root mean square attitude error of 3.6° and 1.8° in manual activities and locomotion tasks (respectively). The statistical analysis showed that, when compared to few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matched the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
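
    The sensor-fusion idea can be illustrated with a one-axis linear Kalman filter that integrates the gyroscope rate in the prediction step and corrects it with the accelerometer-derived inclination in the update step. A simplified scalar sketch with hypothetical noise levels, not the full attitude/acceleration source-separation model of the paper:

```python
import numpy as np

def fuse_inclination(gyro_rate, accel_angle, dt=0.01, q=1e-4, r=2e-2):
    """One-axis gyro/accelerometer fusion with a scalar linear Kalman filter."""
    angle, p = accel_angle[0], 1.0
    estimates = []
    for rate, meas in zip(gyro_rate, accel_angle):
        # Predict: integrate the gyroscope rate.
        angle += rate * dt
        p += q
        # Update with the accelerometer-derived inclination.
        k = p / (p + r)
        angle += k * (meas - angle)
        p *= (1.0 - k)
        estimates.append(angle)
    return np.array(estimates)

# Hypothetical usage with simulated slow tilting plus sensor noise.
rng = np.random.default_rng(14)
t = np.arange(0, 10, 0.01)
true_angle = 0.3 * np.sin(0.5 * t)
gyro = np.gradient(true_angle, 0.01) + rng.normal(0, 0.02, t.size)
accel = true_angle + rng.normal(0, 0.05, t.size)
est = fuse_inclination(gyro, accel)
```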

  17. Active slag filters-simple and sustainable phosphorus removal from wastewater using steel industry byproduct.

    PubMed

    Pratt, C; Shilton, A

    2010-01-01

    Active filtration, where effluent is passed through a reactive substrate such as steel slag, offers a simple and cost-effective option for removing phosphorus (P) from effluent. This work summarises a series of studies that focused on the world's only full-scale active slag filter operated through to exhaustion. The filter achieved 75% P-removal during its first 5 years, reaching a retention capacity of 1.23 g P/kg slag, but its performance then declined sharply. Scanning electron microscopy, X-ray diffraction, X-ray fluorescence, and chemical extractions revealed that P sequestration was primarily achieved via adsorption onto iron (Fe) oxyhydroxides on the slag's surface. It was concluded that batch equilibrium tests, whose use has been repeatedly proposed in the literature, cannot be used as an accurate predictor of filter adsorption capacity, because Fe oxyhydroxides form via chemical weathering in the field and laboratory tests do not account for this. Research into how chemical conditions affect slag's P retention capacity demonstrated that near-neutral pH and high redox are optimal for Fe oxyhydroxide stability and overall filter performance. However, as Fe oxyhydroxide sites fill up, removal capacity becomes exhausted. Attempts to regenerate P removal efficiency using physical techniques proved ineffective, contrary to dogma in the literature. Based on the newly developed understanding of the mechanisms of P removal, chemical regeneration techniques were investigated and were shown to strip large quantities of P from filter adsorption sites, leading to a regenerated P removal efficiency. This raises the prospect of developing a breakthrough technology that can repeatedly remove and recover P from effluent.

  18. Control of the Low-energy X-rays by Using MCNP5 and Numerical Analysis for a New Concept Intra-oral X-ray Imaging System

    NASA Astrophysics Data System (ADS)

    Huh, Jangyong; Ji, Yunseo; Lee, Rena

    2018-05-01

    An X-ray control algorithm to modulate the X-ray intensity distribution over the FOV (field of view) has been developed using numerical analysis and MCNP5, a particle transport simulation code based on the Monte Carlo method. X-rays, which are widely used in medical diagnostic imaging, should be controlled in order to maximize the performance of the X-ray imaging system. However, X-rays cannot be transported the way a liquid or a gas is conveyed through a physical conduit such as a pipe. In the present study, an X-ray control algorithm and technique to make the X-ray intensity projected on the image sensor uniform were developed using a flattening filter and a collimator, in order to alleviate the anisotropy of the X-ray distribution caused by intrinsic features of the X-ray generator. The proposed method, which combines MCNP5 modeling and numerical analysis, aimed to optimize a flattening filter and a collimator for a uniform distribution of X-rays; their size and shape were estimated from the method. The simulation and the experimental results both showed that the method yielded an intensity distribution over an X-ray field of 6×4 cm2 at an SID (source to image-receptor distance) of 5 cm with a uniformity of more than 90% when the flattening filter and the collimator were mounted on the system. The proposed algorithm and technique are not confined to flattening filter development but can also be applied to other X-ray related research and development efforts.

  19. Spectral and Wavefront Error Performance of WFIRST-AFTA Bandpass Filter Coating Prototypes

    NASA Technical Reports Server (NTRS)

    Quijada, Manuel A.; Seide, Laurie; Pasquale, Bert A.; McMann, Joseph C.; Hagopian, John G.; Dominguez, Margaret Z.; Gong, Quian; Marx, Catherine T.

    2016-01-01

    The Cycle 5 design baseline for the Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Assets (WFIRST/AFTA) instrument includes a single wide-field channel (WFC) instrument for both imaging and slit-less spectroscopy. The only routinely moving part during scientific observations for this wide-field channel is the element wheel (EW) assembly. This filter-wheel assembly will have 8 positions that will be populated with 6 bandpass filters, a blank position, and a Grism that will consist of a three-element assembly to disperse the full field with an undeviated central wavelength for galaxy redshift surveys. All filter elements in the EW assembly will be made out of fused silica substrates (110 mm diameter) that will have the appropriate bandpass coatings according to the filter designations (Z087, Y106, J129, H158, F184, W149 and Grism). This paper presents and discusses the performance (including spectral transmission and reflected/transmitted wavefront error measurements) of a subset of bandpass filter coating prototypes that are based on the WFC instrument filter complement. The bandpass coating prototypes that are tested in this effort correspond to the Z087, W149, and Grism filter elements. These filter coatings have been procured from three different vendors to assess the most challenging aspects in terms of the in-band throughput, out of band rejection (including the cut-on and cutoff slopes), and the impact the wavefront error distortions of these filter coatings will have on the imaging performance of the wide-field channel in the WFIRST/AFTA observatory.

  20. Sorption and desorption of arsenic to ferrihydrite in a sand filter.

    PubMed

    Jessen, Soren; Larsen, Flemming; Koch, Christian Bender; Arvin, Erik

    2005-10-15

    Elevated arsenic concentrations in drinking water occur in many places around the world. Arsenic is deleterious to humans, and consequently, As water treatment techniques are sought. To optimize arsenic removal, sorption and desorption processes were studied at a drinking water treatment plant with aeration and sand filtration of ferrous-iron-rich groundwater at Elmevej Water Works, Fensmark, Denmark. Filter sand and pore water were sampled along depth profiles in the filters. The sand was coated with a 100-300 microm thick layer of porous Si-Ca-As-containing iron oxide (As/Fe = 0.17), locally with some manganese oxide. The iron oxide was identified as a Si-stabilized, abiotically formed two-line ferrihydrite with a magnetic hyperfine field of 45.8 T at 5 K. The raw water has an As concentration of 25 microg/L, predominantly as As(III). As the water passes through the filters, As(III) is oxidized to As(V) and the total concentration drops asymptotically to an equilibrium concentration of approximately 15 microg/L. Mn is released to the pore water, indicating the existence of reactive manganese oxides within the oxide coating, which probably play a role in the rapid As(III) oxidation. The As removal in the sand filters appears controlled by sorption equilibrium onto the ferrihydrite. By addition of ferrous chloride (3.65 mg of Fe(II)/L) to the water stream between two serially connected filters, a 3 microg/L As concentration is created in the water that infiltrates into the second sand filter. However, as water flow is re-established through the second filter, As desorbs from the ferrihydrite and the concentration increases until it reaches the 15 microg/L equilibrium concentration. Sequential chemical extractions and geometrical estimates of the fraction of surface-associated As suggest that up to 40% of the total As can be remobilized in response to changes in the water chemistry in the sand filter.

  1. Ultranarrow bandwidth spectral filtering for long-range free-space quantum key distribution at daytime.

    PubMed

    Höckel, David; Koch, Lars; Martin, Eugen; Benson, Oliver

    2009-10-15

    We describe a Fabry-Perot-based spectral filter for free-space quantum key distribution (QKD). A multipass etalon filter was built, and its performance was studied. The whole filter setup was carefully optimized to add less than 2 dB attenuation to a signal beam but block stray light by 21 dB. Simulations show that such a filter might be sufficient to allow QKD satellite downlinks during daytime with the current technology.

  2. Principal Component Noise Filtering for NAST-I Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L., Sr.

    2011-01-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and, therefore, further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: applying PC filtering to both dependent and independent datasets, applying PC filtering to dependent calibration data only, applying PC filtering to independent data only, and using no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show a significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
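
    A simplified sketch of principal-component noise filtering in the spirit of the procedure described: a basis is fitted on the dependent blackbody spectra and then applied to independent spectra, with trailing components discarded as noise. The eigenvector count would be chosen by minimizing total RMS error against the calibration reference, as in the paper; the NAST-I-specific calibration steps are omitted and all names here are illustrative:

    ```python
    import numpy as np

    def fit_pc_basis(dep_spectra, n_pc):
        """Fit a PC basis on dependent blackbody spectra
        (rows = spectra, columns = spectral channels)."""
        mean = dep_spectra.mean(axis=0)
        _, _, vt = np.linalg.svd(dep_spectra - mean, full_matrices=False)
        return mean, vt[:n_pc]            # mean spectrum and leading eigenvectors

    def pc_noise_filter(spectra, mean, basis):
        """Project spectra onto the leading PCs and reconstruct; the discarded
        trailing components are treated as (mostly) random noise."""
        centered = spectra - mean
        return mean + centered @ basis.T @ basis
    ```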

  3. A design aid for sizing filter strips using buffer area ratio

    Treesearch

    M.G. Dosskey; M.J. Helmers; D.E. Eisenhauer

    2011-01-01

    Nonuniform field runoff can reduce the effectiveness of filter strips that are a uniform size along a field margin. Effectiveness can be improved by placing more filter strip where the runoff load is greater and less where the load is smaller. A modeling analysis was conducted of the relationship between pollutant trapping efficiency and the ratio of filter strip area...

  4. Hemispherical-field-of-view, nonimaging narrow-band spectral filter

    NASA Technical Reports Server (NTRS)

    Miles, R. B.; Webb, S. G.; Griffith, E. L.

    1981-01-01

    Two compound parabolic concentrators are used to create a 180-deg-field-of-view spectral filter. The collection optics are reflective and are designed to collimate the light through a multilayer interference filter and then to refocus it onto an optical detector. Assuming unit reflectance and no loss through the optical filter, this device operates at the thermodynamic collection limit.

  5. Hemispherical-field-of-view, nonimaging narrow-band spectral filter.

    PubMed

    Miles, R B; Webb, S G; Griffith, E L

    1981-12-01

    Two compound parabolic concentrators are used to create a 180°-field-of-view spectral filter. The collection optics are reflective and are designed to collimate the light through a multilayer interference filter and then to refocus it onto an optical detector. Assuming unit reflectance and no loss through the optical filter, this device operates at the thermodynamic collection limit.

  6. High-field asymmetric waveform ion mobility spectrometry for mass spectrometry-based proteomics.

    PubMed

    Swearingen, Kristian E; Moritz, Robert L

    2012-10-01

    High-field asymmetric waveform ion mobility spectrometry (FAIMS) is an atmospheric pressure ion mobility technique that separates gas-phase ions by their behavior in strong and weak electric fields. FAIMS is easily interfaced with electrospray ionization and has been implemented as an additional separation mode between liquid chromatography (LC) and mass spectrometry (MS) in proteomic studies. FAIMS separation is orthogonal to both LC and MS and is used as a means of on-line fractionation to improve the detection of peptides in complex samples. FAIMS improves dynamic range and concomitantly the detection limits of ions by filtering out chemical noise. FAIMS can also be used to remove interfering ion species and to select peptide charge states optimal for identification by tandem MS. Here, the authors review recent developments in LC-FAIMS-MS and its application to MS-based proteomics.

  7. A New Optical Design for Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Thompson, K. L.

    2002-05-01

    We present an optical design concept for imaging spectroscopy, with some advantages over current systems. The system projects monochromatic images onto the 2-D array detector(s). Faint object and crowded field spectroscopy data can be reduced first using image processing techniques, then building the spectrum, unlike integral field units where one must first extract the spectra, build data cubes from these, then reconstruct the target's integrated spectral flux. Like integral field units, all photons are detected simultaneously, unlike tunable filters, which must be scanned through the wavelength range of interest and therefore pay a sensitivity penalty. Several sample designs are presented, including an instrument optimized for measuring intermediate redshift galaxy cluster velocity dispersions, one designed for near-infrared ground-based adaptive optics, and one intended for space-based rapid follow-up of transient point sources such as supernovae and gamma ray bursts.

  8. Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology

    NASA Astrophysics Data System (ADS)

    Morgan, T. W.; Thurgood, R. L.

    1984-05-01

    This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.

  9. VizieR Online Data Catalog: Photometry of 3 open clusters (Cignoni+ 2011)

    NASA Astrophysics Data System (ADS)

    Cignoni, M.; Beccari, G.; Bragaglia, A.; Tosi, M.

    2012-02-01

    The three clusters were observed in service mode at the Large Binocular Telescope (LBT) on Mt Graham (Arizona) with the Large Binocular Camera (LBC) on 2008-Dec-02, and with the Device Optimized for the LOw RESolution (DOLORES) at the Italian Telescopio Nazionale Galileo (TNG) on 2009-Jan-03. There are two LBCs, one optimized for the UV-blue filters and one for the red-IR ones, mounted at each prime focus of the LBT. Each LBC uses four EEV chips (2048x4608 pixels) placed three in a row and the fourth rotated 90° with respect to the others. The field of view of the LBC is equivalent to 23x23 arcmin2, with a pixel sampling of 0.23 arcsec. (3 data files).

  10. Comparison of filtering methods for extracellular gastric slow wave recordings.

    PubMed

    Paskaranandavadivel, Niranchan; O'Grady, Gregory; Du, Peng; Cheng, Leo K

    2013-01-01

    Extracellular recordings are used to define gastric slow wave propagation. Signal filtering is a key step in the analysis and interpretation of extracellular slow wave data; however, there is controversy and uncertainty regarding the appropriate filtering settings. This study investigated the effect of various standard filters on the morphology and measurement of extracellular gastric slow waves. Experimental extracellular gastric slow waves were recorded from the serosal surface of the stomach from pigs and humans. Four digital filters — a finite impulse response filter (0.05-1 Hz), a Savitzky-Golay filter (0-1.98 Hz), a Bessel filter (2-100 Hz), and a Butterworth filter (5-100 Hz) — were applied to extracellular gastric slow wave signals to compare the changes temporally (morphology of the signal) and spectrally (signals in the frequency domain). The extracellular slow wave activity is represented in the frequency domain by a dominant frequency and its associated harmonics in diminishing power. Optimal filters apply cutoff frequencies consistent with the dominant slow wave frequency (3-5 cpm) and main harmonics (up to ≈ 2 Hz). Applying filters with cutoff frequencies above or below the dominant and harmonic frequencies was found to distort or eliminate slow wave signal content. Investigators must be cognizant of these optimal filtering practices when detecting, analyzing, and interpreting extracellular slow wave recordings. The use of frequency domain analysis is important for identifying the dominant frequency and harmonics of the signal of interest. Capturing the dominant frequency and major harmonics of the slow wave is crucial for accurate representation of slow wave activity in the time domain. Standardized filter settings should be determined. © 2012 Blackwell Publishing Ltd.
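
    As an illustration of the recommended practice, a band-pass filter whose cutoffs bracket the dominant slow-wave frequency (3-5 cpm) and its main harmonics (up to about 2 Hz); the filter order, the zero-phase filtering choice, and the exact cutoff values here are assumptions of this sketch, not the study's settings:

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def filter_slow_waves(signal, fs, low_cpm=1.0, high_hz=2.0):
        """Band-pass an extracellular recording sampled at fs (Hz) so the
        dominant slow-wave band and its main harmonics are preserved."""
        low_hz = low_cpm / 60.0                      # cycles/min -> Hz
        sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, np.asarray(signal, float))
    ```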

  11. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
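
    A generic sequential importance-resampling particle filter for one-dimensional angular-velocity estimation, to illustrate the idea of running many noisy copies of an internal model in parallel and letting the particle spread shape the effective filter gain. This is not the authors' observer-based model, and every parameter value below is a placeholder:

    ```python
    import numpy as np

    def particle_filter_velocity(measurements, n_particles=2000, dt=0.01,
                                 leak_tau=20.0, process_noise=0.5,
                                 afferent_noise=2.0, seed=None):
        """Estimate 1-D angular velocity from noisy afferent measurements."""
        rng = np.random.default_rng(seed)
        particles = np.zeros(n_particles)
        estimates = []
        for z in measurements:
            # propagate each particle through a leaky, noisy internal model
            particles += dt * (-particles / leak_tau) \
                         + np.sqrt(dt) * process_noise * rng.standard_normal(n_particles)
            # weight by the likelihood of the measured afferent signal
            w = np.exp(-0.5 * ((z - particles) / afferent_noise) ** 2) + 1e-300
            w /= w.sum()
            estimates.append(float(np.sum(w * particles)))
            # resample to avoid weight degeneracy
            particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
        return np.array(estimates)
    ```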

  12. Optimum color filters for CCD digital cameras

    NASA Astrophysics Data System (ADS)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that an optimized color camera of this kind can achieve a colorimetric performance high enough that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.

  13. Signal-to-noise enhancement techniques for quantum cascade absorption spectrometers employing optimal filtering and other approaches

    NASA Astrophysics Data System (ADS)

    Disselkamp, R. S.; Kelly, J. F.; Sams, R. L.; Anderson, G. A.

    Optical feedback to the laser source in tunable diode laser spectroscopy (TDLS) is known to create intensity modulation noise due to etaloning and optical feedback (i.e. multiplicative technical noise) that usually limits spectral signal-to-noise (S/N). The large technical noise often limits absorption spectroscopy to noise floors 100-fold greater than the Poisson shot noise limit due to fluctuations in the laser intensity. The high output powers generated from quantum cascade (QC) lasers, along with their high gain, make these injection laser systems especially susceptible to technical noise. In this article we discuss a method of using optimal filtering to reduce technical noise. We have observed S/N enhancements ranging from 20% to a factor of 50. The degree to which optimal filtering enhances S/N depends on the similarity between the Fourier components of the technical noise and those of the signal, with lower S/N enhancements observed for more similar Fourier decompositions of the signal and technical noise. We also examine the linearity of optimal filtered spectra in both time and intensity. This was accomplished by creating a synthetic spectrum for the species being studied (CH4, N2O, CO2 and H2O in ambient air) utilizing line positions and linewidths with an assumed Voigt profile from a commercial database (HITRAN). Agreement better than 0.036% in wavenumber and 1.64% in intensity (up to a 260-fold intensity ratio employed) was observed. Our results suggest that rapid ex post facto digital optimal filtering can be used to enhance S/N for routine trace gas detection.
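
    A minimal frequency-domain sketch of the kind of optimal (Wiener-style) filtering described above, assuming estimates of the signal and technical-noise power spectra are available; the article's ex post facto construction of the filter from measured and synthetic spectra is not reproduced here:

    ```python
    import numpy as np

    def optimal_filter(measured, signal_psd, noise_psd):
        """Weight each Fourier component by the expected signal power over the
        total power, then transform back to the measurement domain.
        signal_psd and noise_psd must match the length of np.fft.rfft(measured)."""
        spectrum = np.fft.rfft(measured)
        gain = signal_psd / (signal_psd + noise_psd)
        return np.fft.irfft(spectrum * gain, n=len(measured))
    ```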

  14. Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.

    PubMed

    McMinn, Brian R

    2013-11-01

    Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. Published by Elsevier B.V.

  15. Optimality study of a gust alleviation system for light wing-loading STOL aircraft

    NASA Technical Reports Server (NTRS)

    Komoda, M.

    1976-01-01

    An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flaps and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.

  16. Towards a first design of a Newtonian-noise cancellation system for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Coughlin, M.; Mukund, N.; Harms, J.; Driggers, J.; Adhikari, R.; Mitra, S.

    2016-12-01

    Newtonian gravitational noise from seismic fields is predicted to be a limiting noise source at low frequency for second generation gravitational-wave detectors. Mitigation of this noise will be achieved by Wiener filtering using arrays of seismometers deployed in the vicinity of all test masses. In this work, we present optimized configurations of seismometer arrays using a variety of simplified models of the seismic field based on seismic observations at LIGO Hanford. The model that best fits the seismic measurements leads to noise reduction limited predominantly by seismometer self-noise. A first simplified design of seismic arrays for Newtonian-noise cancellation at the LIGO sites is presented, which suggests that it will be sufficient to monitor surface displacement inside the buildings.

  17. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration.

    PubMed

    Chang, Herng-Hua; Chang, Yu-Ning

    2017-04-01

    Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the deficiency of theoretical basis on the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures and not all the three parameter values are optimal. This article is in an attempt to investigate the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. Subsequently, the selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performances, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting the CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
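
    For reference, a plain single-threaded bilateral filter showing the spatial- and range-weight combination that the CUDA implementation parallelizes per pixel; the parameter values are placeholders, and the SFFS texture-feature selection and BPN-based parameter prediction described above are not included:

    ```python
    import numpy as np

    def bilateral_filter(img, radius=3, sigma_spatial=2.0, sigma_range=20.0):
        """Each output pixel is a weighted mean of its neighbourhood, with
        weights combining spatial closeness and intensity similarity."""
        img = np.asarray(img, dtype=np.float64)
        pad = np.pad(img, radius, mode="reflect")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial_w = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_spatial ** 2))
        out = np.empty_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_range ** 2))
                weights = spatial_w * range_w
                out[i, j] = np.sum(weights * patch) / np.sum(weights)
        return out
    ```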

  18. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    NASA Astrophysics Data System (ADS)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues that are closely related here, because the temperature fields being processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct the heat source fields as accurately as possible. The influence of both the dimension and the level of a localised heat source is discussed. The results obtained are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin plate made of aluminium alloy. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The results obtained illustrate the relevancy of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
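
    A small sketch of the diffusion-term estimation discussed above: the Laplacian of a noisy temperature field obtained by convolving with second derivatives of a Gaussian (here via scipy's Gaussian derivative filter). The filter width sigma is a placeholder rather than the paper's optimised value, and the remaining terms of the heat diffusion equation are not reconstructed:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def laplacian_from_noisy_field(temperature, sigma=2.0, pixel_size=1.0):
        """Estimate the Laplacian of a noisy 2-D temperature field by Gaussian
        second-derivative filtering along each axis, in physical units."""
        t = np.asarray(temperature, dtype=float)
        d2_dx2 = gaussian_filter(t, sigma, order=(0, 2)) / pixel_size ** 2
        d2_dy2 = gaussian_filter(t, sigma, order=(2, 0)) / pixel_size ** 2
        return d2_dx2 + d2_dy2
    ```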

  19. Electric control of wave vector filtering in a hybrid magnetic-electric-barrier nanostructure

    NASA Astrophysics Data System (ADS)

    Kong, Yong-Hong; Lu, Ke-Yu; He, Ya-Ping; Liu, Xu-Hui; Fu, Xi; Li, Ai-Hua

    2018-06-01

    We theoretically investigate how to manipulate the wave vector filtering effect with a transverse electric field for electrons crossing a hybrid magnetic-electric-barrier nanostructure, which can be experimentally realized by depositing a ferromagnetic stripe and a Schottky-metal stripe on the top and bottom of a GaAs/Al x Ga1- x As heterostructure, respectively. The wave vector filtering effect is found to be closely related to the applied electric field. Moreover, the wave vector filtering efficiency can be manipulated by changing the direction or adjusting the strength of the transverse electric field. Therefore, such a nanostructure can be employed as an electrically controllable electron-momentum filter for nanoelectronics applications.

  20. The invariant of the stiffness filter function with the weight filter function of the power function form

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Sui, Yun-Kang

    2012-12-01

    Based on the independent, continuous and mapping (ICM) method and the homogenization method, a research model is constructed to propose and deduce a theorem and corollary on the invariant between the weight filter function and the corresponding stiffness filter function of power-function form. The efficiency of the search for the optimum solution is raised via the choice of rational filter functions, so the above-mentioned results are very important to the further study of structural topology optimization.

  1. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process, rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only on the gap-filling iteration but also on the mask generation, to identify the object-dedicated low frequency area in the DCT-domain that is to be preserved. We redefine the low frequency preserving region of the filter mask at every gap-filling iteration, and the region verges on the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results that compare to those of the manually optimized DCT2 algorithm without perfect or full information of the imaging object.
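
    A simplified, non-adaptive version of the DCT-domain gap-filling loop described above, using a fixed low-frequency mask; the paper's contribution, re-deriving the object-dedicated mask at every iteration, is omitted here, and the mask shapes and iteration count are assumptions of this sketch:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def fill_sinogram_gaps(sinogram, gap_mask, keep_mask, n_iter=50):
        """Iteratively fill detector gaps: transform to the DCT domain, keep
        only the low-frequency region selected by keep_mask, transform back,
        and restore the measured (non-gap) bins each pass.
        gap_mask, keep_mask: boolean arrays with the sinogram's shape."""
        filled = np.array(sinogram, dtype=float, copy=True)
        for _ in range(n_iter):
            coeffs = dctn(filled, norm="ortho")
            smooth = idctn(coeffs * keep_mask, norm="ortho")
            filled[gap_mask] = smooth[gap_mask]     # update only the gap bins
        return filled
    ```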

  2. Stochastic Adaptive Particle Beam Tracker Using Meer Filter Feedback.

    DTIC Science & Technology

    1986-12-01

    breakthrough required in controlling the beam location. In 1983, Zicker [27] conducted a feasibility study of a simple proportional gain controller... Zicker synthesized his stochastic controller designs from a deterministic optimal LQ controller assuming full state feedback. An LQ controller is a... "Merge" Method... 2.5 Simplifying the Meer Filter... Zicker ran a performance analysis on the Meer filter and found the Meer filter virtually insensitive to

  3. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  4. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.

  5. High-performance information search filters for acute kidney injury content in PubMed, Ovid Medline and Embase.

    PubMed

    Hildebrand, Ainslie M; Iansavichus, Arthur V; Haynes, R Brian; Wilczynski, Nancy L; Mehta, Ravindra L; Parikh, Chirag R; Garg, Amit X

    2014-04-01

    We frequently fail to identify articles relevant to the subject of acute kidney injury (AKI) when searching the large bibliographic databases such as PubMed, Ovid Medline or Embase. To address this issue, we used computer automation to create information search filters to better identify articles relevant to AKI in these databases. We first manually reviewed a sample of 22 992 full-text articles and used prespecified criteria to determine whether each article contained AKI content or not. In the development phase (two-thirds of the sample), we developed and tested the performance of >1.3-million unique filters. Filters with high sensitivity and high specificity for the identification of AKI articles were then retested in the validation phase (remaining third of the sample). We succeeded in developing and validating high-performance AKI search filters for each bibliographic database with sensitivities and specificities in excess of 90%. Filters optimized for sensitivity reached at least 97.2% sensitivity, and filters optimized for specificity reached at least 99.5% specificity. The filters were complex; for example one PubMed filter included >140 terms used in combination, including 'acute kidney injury', 'tubular necrosis', 'azotemia' and 'ischemic injury'. In proof-of-concept searches, physicians found more articles relevant to topics in AKI with the use of the filters. PubMed, Ovid Medline and Embase can be filtered for articles relevant to AKI in a reliable manner. These high-performance information filters are now available online and can be used to better identify AKI content in large bibliographic databases.

  6. Comparison of Flattening Filter (FF) and Flattening-Filter-Free (FFF) 6 MV photon beam characteristics for small field dosimetry using EGSnrc Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Sangeetha, S.; Sureka, C. S.

    2017-06-01

    The present study compares the characteristics of Varian Clinac 600 C/D flattened and unflattened 6 MV photon beams for small field dosimetry using EGSnrc Monte Carlo simulation, since small field dosimetry is considered one of the most crucial and challenging tasks in radiation dosimetry. A 6 MV photon beam of a Varian Clinac 600 C/D medical linear accelerator operating in Flattening Filter (FF) and Flattening-Filter-Free (FFF) modes was modelled for small field dosimetry using the EGSnrc Monte Carlo user codes (BEAMnrc and DOSXYZnrc), and the beam characteristics were calculated using an educated trial-and-error method. These characteristics include percentage depth dose, lateral beam profile, dose rate delivery, photon energy spectra, photon beam uniformity, out-of-field dose, surface dose, penumbral dose, and output factor for small fields (0.5×0.5 cm2 to 4×4 cm2), compared with magna-field sizes (5×5 cm2 to 40×40 cm2) at various depths. The results showed that the optimized beam energy and full-width-half-maximum value for both small field and magna-field dosimetry were 5.7 MeV and 0.13 cm for both FF and FFF beams. The depth of dose maximum for small field sizes deviates minimally for both FF and FFF beams, as for magna-fields. At depths greater than dmax, FFF beams show a steeper dose fall-off in the exponential region than FF beams, and this deviation increases with field size. The shape of the lateral beam profiles of FF and FFF beams remains similar for small field sizes below 4×4 cm2, whereas it differs for magna-fields. Dose rate delivery for FFF beams shows a prominent, roughly two-fold increase for both small and magna-field sizes. The surface dose of FFF beams for small field sizes was higher than that of FF beams, whereas it was lower for magna-fields. The reduction in out-of-field dose increases with increasing field size. It is also observed that the photon energy spectrum increases with field size for the FFF beam mode. Finally, the output factors for FFF beams were relatively low for small field sizes compared with FF beams, whereas they were higher for magna-field sizes. From this study, it is concluded that FFF beams show minimal deviations in the treatment field region relative to the normal tissue region for small field dosimetry compared with FF beams. The most prominent result of the study is that the shape of the beam profile remains similar for FF and FFF beams in the case of smaller field sizes, which allows more accurate treatment planning for IMRT (Intensity-Modulated Radiation Therapy), IGAT (Image-Guided Adaptive Radiation Therapy), SBRT (Stereotactic Body Radiation Therapy), SRS (Stereotactic Radio Surgery), and Tomotherapy techniques, where a homogeneous dose is not necessary. On the whole, determining the dosimetric beam characteristics of the Varian linac machine using Monte Carlo simulation provides accurate dose calculations to serve as clinical golden data.

  7. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  8. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
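
    A minimal matched-filter detection statistic for a one-dimensional data stream, assuming additive white Gaussian noise of known standard deviation; as the abstract argues, simply thresholding the maximum of this statistic without accounting for the unknown signal position (e.g. via the peak statistics of the filtered random field) underestimates the probability of false detection. Names and normalization choices here are illustrative:

    ```python
    import numpy as np

    def matched_filter_statistic(data, template, noise_sigma):
        """Correlate the data with the known-shape template and normalize so
        the output is in units of the output noise standard deviation."""
        t = np.asarray(template, float) - np.mean(template)
        d = np.asarray(data, float) - np.mean(data)
        mf = np.correlate(d, t, mode="same")
        return mf / (noise_sigma * np.sqrt(np.sum(t ** 2)))

    # naive (position-known) use: claim a detection where the statistic at the
    # *a priori* known location exceeds a Gaussian threshold; scanning for the
    # peak instead requires the peak-PDF correction discussed in the abstract.
    ```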

  9. Optimal design of a bank of spatio-temporal filters for EEG signal classification.

    PubMed

    Higashi, Hiroshi; Tanaka, Toshihisa

    2011-01-01

    The spatial weights for electrodes called common spatial pattern (CSP) are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification in CSP, the frequency filter should be properly designed. To this end, several methods for designing the filter have been proposed. However, the existing methods cannot consider plural brain activities described with different frequency bands and different spatial patterns such as activities of mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights which extract desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimization of an objective function which is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters which are designed by introducing an orthogonality into the objective function can extract good discriminative features. Moreover, the experiment result suggests that the proposed method can automatically detect and extract brain activities related to motor imagery.
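
    For context, a standard common-spatial-pattern (CSP) computation via a generalized eigenproblem, which is the baseline the proposed method extends; the joint optimization of FIR filters and spatial weights with the orthogonality constraint described above is not shown, and the trial-array layout is an assumption of this sketch:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_pairs=3):
        """Spatial filters maximizing variance for class A while minimizing it
        for class B, from  Ca w = lambda (Ca + Cb) w.
        trials_* : arrays shaped (n_trials, n_channels, n_samples)."""
        def mean_cov(trials):
            return np.mean([np.cov(tr) for tr in trials], axis=0)
        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        vals, vecs = eigh(ca, ca + cb)              # eigenvalues in ascending order
        # filters from both ends of the spectrum are the most discriminative
        picks = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
        return vecs[:, picks]
    ```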

  10. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.

  11. SU-E-T-216: Comparison of Volumetrically Modulated Arc Therapy Treatment Using Flattening Filter Free Beams Vs. Flattened Beams for Partial Brain Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, S; Roa, D; Hanna, N

    2015-06-15

    Purpose: Flattening Filter Free (FFF) beams offer the potential for higher dose rates, shorter treatment times, and lower out-of-field dose. Therefore, the aim of this study was to investigate the dosimetric effects and out-of-field dose of Volumetric Modulated Arc Therapy (VMAT) plans using FFF vs Flattening Filter (FF) beams for partial brain irradiation. Methods: Ten brain patients treated with a 6FF beam from a Truebeam STX were analyzed retrospectively for this study. These plans (46 Gy at 2 Gy per fraction) were re-optimized for 6FFF beams using the same dose constraints as the original plans. PTV coverage, PTV Dmax, total MUs, and mean dose to organs-at-risk (OAR) were evaluated. In addition, the out-of-field dose for the 6FF and 6FFF plans for one patient was measured on an anthropomorphic phantom. TLDs were placed inside (central axis) and outside (surface) the phantom at distances ranging from 0.5 cm to 17 cm from the field edge. A paired t-test was used for statistical analysis. Results: PTV coverage and PTV Dmax were comparable for the FF and FFF plans, with 95.9% versus 95.6% and 111.2% versus 111.9%, respectively. Mean dose to the OARs was 3.7% less for FFF than FF plans (p<0.0001). Total MUs were, on average, 12.5% greater for FFF than FF plans, with 481±55 MU (FFF) versus 429±50 MU (FF), p=0.0003. On average, the measured out-of-field dose was 24% less for FFF compared to FF, p<0.0001. A similar beam-on time was observed for the FFF and FF treatments. Conclusion: It is beneficial to use 6FFF beams for regular fractionated brain VMAT treatments. VMAT treatment plans using FFF beams can achieve comparable PTV coverage but with more OAR sparing. The out-of-field dose is significantly less, with a mean reduction of 24%.

  12. Optimization of tungsten x-ray spectra for digital mammography: a comparison of model to experiment

    NASA Astrophysics Data System (ADS)

    Andre, Michael P.; Spivey, Brett A.

    1997-05-01

    Tungsten (W) target x-ray tubes are being studied for use in digital mammography to improve x-ray flux, reduce noise and increase tube heat capacity. A parametric model was developed for digital mammography to evaluate optimization of x-ray spectra for a particular sensor. The model computes spectra and mean glandular doses (MGD) for combinations of W target, beam filters, kVp, breast type and thickness. Two figures of merit were defined: (signal/noise)²/MGD and spectral quantum efficiency; these were computed as a means to approach optimization of object contrast. The model is derived from a combination of classic equations, XCOM from NBS, and published data. X-ray spectra were calculated and measured for filters of Al, Sn, Rh, Mo and Ag on a Eureka tube. (Signal/noise)²/MGD was measured for a filtered W target tube and a digital camera employing a CsI scintillator optically coupled to a CCD, for which the detective quantum efficiency (DQE) was known. A 3-mm thick acrylic disk was imaged on thicknesses of 3-8 cm of acrylic and the results were compared to the predictions of the model. The relative error between predicted and measured spectra was +/- 2 percent from 24 to 34 kVp. Calculated MGD as a function of breast thickness, half-value layer and beam filter compares very well to published data. Best performance was found for the following combinations: Mo filter with a 30 mm breast, Ag filter with 45 mm, Sn filter for 60 mm, and Al filter for a 75 mm thick breast. The parametric model agrees well with measurement and provides a means to explore optimum combinations of kVp and beam filter. For a particular detector, these data may be used with the DQE to estimate the total system signal-to-noise ratio for a particular imaging task.

  13. Development of genetic algorithm-based optimization module in WHAT system for hydrograph analysis and model application

    NASA Astrophysics Data System (ADS)

    Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.

    2010-07-01

    Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R2 value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using an optimized BFImax and filter parameter resulted in an R2 value of 0.66 and a Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
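
    For reference, a minimal form of the Eckhardt recursive digital filter used for baseflow separation (direct runoff is streamflow minus baseflow); the default parameter values below are the ones quoted in the abstract, whereas the paper's GA module optimizes them per watershed. This is a generic textbook sketch, not the WHAT system's implementation:

    ```python
    import numpy as np

    def eckhardt_baseflow(streamflow, bfi_max=0.80, alpha=0.98):
        """Eckhardt two-parameter recursive filter.
        Returns (baseflow, direct_runoff) for a daily streamflow series."""
        q = np.asarray(streamflow, dtype=float)
        b = np.zeros_like(q)
        b[0] = q[0]
        for t in range(1, len(q)):
            b[t] = ((1.0 - bfi_max) * alpha * b[t - 1]
                    + (1.0 - alpha) * bfi_max * q[t]) / (1.0 - alpha * bfi_max)
            b[t] = min(b[t], q[t])        # baseflow cannot exceed streamflow
        return b, q - b
    ```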

  14. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results.
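
    A bare-bones version of the inverse-filter deconvolution described above (dividing the data spectrum by the DFT of the system response); in practice, as the study emphasizes, the inverse filter is truncated to a finite length and the data are pre-smoothed, since division by small spectral components of the response amplifies noise. This sketch assumes a well-conditioned response and illustrative names:

    ```python
    import numpy as np

    def inverse_filter_deconvolve(data, response):
        """Deconvolve by dividing the data spectrum by the DFT of the response,
        equivalent to convolving with the inverse transform of its reciprocal."""
        n = len(data)
        h = np.fft.fft(response, n)
        # no regularization here: small |h| components amplify noise, which is
        # why filter length and prior smoothing matter in the original study
        return np.real(np.fft.ifft(np.fft.fft(data) / h))
    ```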

  15. Thermal control design of the Lightning Mapper Sensor narrow-band spectral filter

    NASA Technical Reports Server (NTRS)

    Flannery, Martin R.; Potter, John; Raab, Jeff R.; Manlief, Scott K.

    1992-01-01

    The performance of the Lightning Mapper Sensor is dependent on the temperature shifts of its narrowband spectral filter. To perform over a 10 degree FOV with a 0.8 nm bandwidth, the filter must be 15 cm in diameter and mounted externally to the telescope optics. The filter thermal control required a filter design optimized for minimum bandpass shift with temperature, a thermal analysis of substrate materials for maximum temperature uniformity, and a thermal radiation analysis to determine the parameter sensitivity of the radiation shield for the filter, the filter thermal recovery time after occultation, and the heater power needed to maintain filter performance in the earth-staring geosynchronous environment.

  16. The new approach for infrared target tracking based on the particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Hang; Han, Hong-xia

    2011-08-01

    Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can deal with nonlinear and non-Gaussian problems and has therefore been widely used. Various forms of the density used in the particle filter algorithm keep it valid when the target is occluded or allow tracking to recover after failure, but capturing changes in the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, which leads to a large computational burden. In this paper the particle filter algorithm and mean shift are combined. The classic mean shift tracking algorithm is easily trapped in local minima and cannot reach the global optimum against complex backgrounds. From two perspectives, adaptive multiple-information fusion and combination with the particle filter framework, we expand the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple-information fusion. After analyzing the infrared characteristics of the target, the algorithm extracts gray-level and edge features of the target and guides both features by the target motion information, yielding motion-guided gray-level and motion-guided edge features. A new adaptive fusion mechanism is then proposed to integrate these two new kinds of information adaptively into the mean shift tracking framework. Finally, an automatic target model updating strategy is designed to further improve tracking performance. Experimental results show that this algorithm compensates for the heavy computational load of the particle filter and effectively overcomes the tendency of mean shift to fall into local extrema instead of the global maximum. Because the gray-level information is fused with the target motion information, the approach also suppresses interference from the background, ultimately improving the stability and real-time performance of target tracking.
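
    For orientation, the sketch below shows one generic bootstrap particle-filter iteration (predict, weight, resample), the kind of step whose particle count grows with state dimension as discussed above. It is not the authors' mean shift fusion algorithm, and the likelihood callable (e.g., gray-level and edge similarity to the target model) is a hypothetical placeholder.

```python
import numpy as np

def particle_filter_step(particles, weights, motion_std, likelihood):
    """One bootstrap particle-filter iteration (predict, weight, resample).

    particles  : (N, d) state hypotheses, e.g. target position/velocity
    weights    : (N,) normalized importance weights
    motion_std : standard deviation of a random-walk motion model
    likelihood : callable mapping an (N, d) array to per-particle likelihoods
    """
    n = len(particles)
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)  # predict
    weights = weights * likelihood(particles)                                    # update
    weights /= weights.sum()
    neff = 1.0 / np.sum(weights ** 2)                                            # effective sample size
    if neff < n / 2:                                                             # resample when degenerate
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```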

  17. Design considerations for near-infrared filter photometry: effects of noise sources and selectivity.

    PubMed

    Tarumi, Toshiyasu; Amerov, Airat K; Arnold, Mark A; Small, Gary W

    2009-06-01

    Optimal filter design of two-channel near-infrared filter photometers is investigated for simulated two-component systems consisting of an analyte and a spectrally overlapping interferent. The degree of overlap between the analyte and interferent bands is varied over three levels. The optimal design is obtained for three cases: a source or background flicker noise limited case, a shot noise limited case, and a detector noise limited case. Conventional photometers consist of narrow-band optical filters with their bands located at discrete wavelengths. However, the use of broadband optical filters with overlapping responses has been proposed to obtain as much signal as possible from a weak and broad analyte band typical of near-infrared absorptions. One question regarding the use of broadband optical filters with overlapping responses is the selectivity achieved by such filters. The selectivity of two-channel photometers is evaluated on the basis of the angle between the analyte and interferent vectors in the space spanned by the relative change recorded for each of the two detector channels. This study shows that for the shot noise limited or detector noise limited cases, the slight decrease in selectivity with the use of broadband optical filters can be compensated by the higher signal-to-noise ratio afforded by the use of such filters. For the source noise limited case, the best quantitative results are obtained with the use of narrow-band non-overlapping optical filters.
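
    The selectivity measure described above, the angle between the analyte and interferent vectors in the two-channel response space, can be computed directly. The following minimal sketch assumes each vector holds the relative change recorded in the two detector channels; the function name is an assumption.

```python
import numpy as np

def channel_selectivity_angle(analyte_response, interferent_response):
    """Angle (degrees) between analyte and interferent vectors in the
    two-channel response space; 90 deg = fully selective, 0 deg = none."""
    a = np.asarray(analyte_response, dtype=float)
    b = np.asarray(interferent_response, dtype=float)
    cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```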

  18. Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong

    2005-04-01

    Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent, with an analytical relationship between the sample mean and the sample variance. Spatially invariant low-pass linear filters, such as the Butterworth and Hanning filters, cannot adequately handle this noise, and statistics-based nonlinear filters may be an alternative choice, in addition to other approaches that minimize cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filter chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlation information for an optimally regularized solution. Our previously developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least squares) minimization considers the signal correlation via the KL strategy and minimizes the PWLS cost function for an optimally regularized solution for each KL component, i.e., it is adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for low-dose CT. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS-treated sinogram data, prior to the backprojection operation for image reconstruction. In both computer simulations and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.

  19. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.

  20. A CANDLE for a deeper in vivo insight

    PubMed Central

    Coupé, Pierrick; Munz, Martin; Manjón, Jose V; Ruthazer, Edward S; Louis Collins, D.

    2012-01-01

    A new Collaborative Approach for eNhanced Denoising under Low-light Excitation (CANDLE) is introduced for the processing of 3D laser scanning multiphoton microscopy images. CANDLE is designed to be robust for low signal-to-noise ratio (SNR) conditions typically encountered when imaging deep in scattering biological specimens. Based on an optimized non-local means filter involving the comparison of filtered patches, CANDLE locally adapts the amount of smoothing in order to deal with the noise inhomogeneity inherent to laser scanning fluorescence microscopy images. An extensive validation on synthetic data, images acquired on microspheres and in vivo images is presented. These experiments show that the CANDLE filter obtained competitive results compared to a state-of-the-art method and a locally adaptive optimized nonlocal means filter, especially under low SNR conditions (PSNR<8dB). Finally, the deeper imaging capabilities enabled by the proposed filter are demonstrated on deep tissue in vivo images of neurons and fine axonal processes in the Xenopus tadpole brain. PMID:22341767

  1. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.

  2. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
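
    As a reference for the filter recursion discussed above, the following is a minimal sketch of one generic extended Kalman filter step, which linearizes an on-board nonlinear model only for the covariance propagation; it is not the C-MAPSS40k or MBEC implementation, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter iteration around a nonlinear model.

    x, P : state estimate and covariance
    u, z : control input and sensor measurement
    f, h : nonlinear state-transition and measurement functions
    F_jac, H_jac : callables returning the Jacobians of f and h
    Q, R : process and measurement noise covariances
    """
    # Predict with the nonlinear model, linearizing only for the covariance
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update with the measurement
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```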

  3. Development and Validation of Search Filters to Identify Articles on Family Medicine in Online Medical Databases

    PubMed Central

    Pols, David H.J.; Bramer, Wichor M.; Bindels, Patrick J.E.; van de Laar, Floris A.; Bohnen, Arthur M.

    2015-01-01

    Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. PMID:26195683

  4. Space telescope optical telescope assembly/scientific instruments. Phase B: Preliminary design and program definition study. Volume 2A(3): Astrometry

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Wide field measurements, namely, measurements of relative angular separations between stars over a relatively wide field for parallax and proper motion determinations, are made with the third fine guidance sensor. Narrow field measurements, i.e., double star measurements, are accomplished primarily with the area photometer or the faint object camera at f/96. The required wavelength range can be met by the fine guidance sensor, which has a spectral coverage from 3000 to 7500 A. The field of view of the fine guidance sensor also exceeds that required for the wide field astrometric instrument. The requirements call for a filter wheel for the wide field astrometer, so one was incorporated into the design of the fine guidance sensor. The filter wheel would probably contain two neutral density filters to extend the dynamic range of the sensor and three spectral filters for narrowing the effective double star magnitude difference.

  5. Blurred image restoration using knife-edge function and optimal window Wiener filtering.

    PubMed

    Wang, Min; Zhou, Shudao; Yan, Wei

    2018-01-01

    Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects.

  6. Blurred image restoration using knife-edge function and optimal window Wiener filtering

    PubMed Central

    Zhou, Shudao; Yan, Wei

    2018-01-01

    Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects. PMID:29377950
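
    The final Wiener filtering step described above can be sketched in the frequency domain as follows; the scalar noise-to-signal ratio used as a regularizer and the function names are assumptions, and this is not the authors' optimal-window implementation.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener restoration of a motion-blurred image.

    blurred : 2-D degraded image
    psf     : point spread function, assumed to have its origin at index (0, 0)
    nsr     : assumed noise-to-signal power ratio (scalar regularizer)
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter transfer function
    return np.real(np.fft.ifft2(W * G))
```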

  7. STEPS: a grid search methodology for optimized peptide identification filtering of MS/MS database search results.

    PubMed

    Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D

    2013-03-01

    For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
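
    The grid-search idea can be illustrated with a toy sketch that scans filter-parameter combinations and keeps the set yielding the most identifications at a fixed decoy-estimated FDR; the field names, the two example parameters, and the FDR criterion are hypothetical, and this is not the published STEPS tool.

```python
import itertools

def grid_search_filter_params(psm_table, score_cutoffs, mass_error_cutoffs, fdr_limit=0.01):
    """Exhaustively test filter-parameter combinations and return the pair
    (score cutoff, mass-error cutoff) that maximizes target identifications
    while the decoy-estimated FDR stays under fdr_limit.

    psm_table : iterable of dicts with 'score', 'mass_error_ppm', 'is_decoy'
    """
    best_params, best_targets = None, 0
    for score_min, mass_max in itertools.product(score_cutoffs, mass_error_cutoffs):
        passed = [p for p in psm_table
                  if p["score"] >= score_min and abs(p["mass_error_ppm"]) <= mass_max]
        decoys = sum(p["is_decoy"] for p in passed)
        targets = len(passed) - decoys
        fdr = decoys / max(targets, 1)          # simple decoy-based FDR estimate
        if fdr <= fdr_limit and targets > best_targets:
            best_params, best_targets = (score_min, mass_max), targets
    return best_params, best_targets
```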

  8. Mass peak shape improvement of a quadrupole mass filter when operating with a rectangular wave power supply.

    PubMed

    Luo, Chan; Jiang, Dan; Ding, Chuan-Fan; Konenkov, Nikolai V

    2009-09-01

    Numeric experiments were performed to study the first and second stability regions and find the optimal configurations of a quadrupole mass filter constructed of circular quadrupole rods with a rectangular wave power supply. The ion transmission contours were calculated using ion trajectory simulations. For the first stability region, the optimal rod set configuration has a ratio r/r0 of 1.110-1.115; for the second stability region, it is 1.128-1.130. Low-frequency direct current (DC) modulation with the parameters m = 0.04-0.16 and ν = ω/Ω = 1/8-1/14 improves the mass peak shape of the circular rod quadrupole mass filter at the optimal r/r0 ratio of 1.130. Amplitude modulation does not improve the mass peak shape. Copyright (c) 2009 John Wiley & Sons, Ltd.

  9. Design and Implementation of Embedded Computer Vision Systems Based on Particle Filters

    DTIC Science & Technology

    2010-01-01

    for hardware/software implementation of multi-dimensional particle filter application and we explore this in the third application which is a 3D...methodology for hardware/software implementation of multi-dimensional particle filter application and we explore this in the third application which is a...and hence multiprocessor implementation of particle filters is an important option to examine. A significant body of work exists on optimizing generic

  10. Inferring neural activity from BOLD signals through nonlinear optimization.

    PubMed

    Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E

    2007-11-01

    The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.

  11. Supervoxels for graph cuts-based deformable image registration using guided image filtering

    NASA Astrophysics Data System (ADS)

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-11-01

    We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state of the art methods in continuous and discrete image registration, achieving target registration error of 1.16 mm on average per landmark.

  12. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.

    PubMed

    Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A

    2017-10-04

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark.

  13. Research on the shortwave infrared hyperspectral imaging technology based on Integrated Stepwise filter

    NASA Astrophysics Data System (ADS)

    Wei, Liqing; Xiao, Xizhong; Wang, Yueming; Zhuang, Xiaoqiong; Wang, Jianyu

    2017-11-01

    Space-borne hyperspectral imagery is an important tool for earth sciences and industrial applications. Higher spatial and spectral resolutions have been sought persistently, although this results in more power, larger volume, and greater weight in a space-borne spectral imager design. For miniaturization of the hyperspectral imager and optimization of spectral splitting methods, several methods are compared in this paper. A spectral time delay integration (TDI) method with a high-transmittance Integrated Stepwise Filter (ISF) is proposed. With this method, an ISF imaging spectrometer with TDI can achieve higher system sensitivity than a traditional prism/grating imaging spectrometer. In addition, the ISF imaging spectrometer performs well in suppressing the infrared background radiation produced by the instrument. A compact shortwave infrared (SWIR) hyperspectral imager prototype based on HgCdTe, covering the spectral range of 2.0-2.5 μm with 6 TDI stages, was designed and integrated. To investigate the performance of the ISF spectrometer, a method to derive the optimal blocking band curve of the ISF is introduced, along with known error characteristics. To assess the spectral performance of the ISF system, a new spectral calibration based on blackbody radiation with temperature scanning is proposed. The results of the imaging experiment showed the merits of the ISF, which has great application prospects in the field of high-sensitivity, high-resolution space-borne hyperspectral imagery.

  14. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering

    PubMed Central

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-01-01

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark. PMID:29225433

  15. SU-F-I-73: Surface Dose from KV Diagnostic Beams From An On-Board Imager On a Linac Machine Using Different Imaging Techniques and Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Hossain, S; Syzek, E

    Purpose: To quantitatively investigate the surface dose deposited in patients imaged with a kV on-board imager mounted on a radiotherapy machine using different clinical imaging techniques and filters. Methods: A high-sensitivity photon diode mounted on top of a phantom setup is used to measure the surface dose on the central axis and at an off-axis point. The dose is measured for different imaging techniques that include: AP-Pelvis, AP-Head, AP-Abdomen, AP-Thorax, and Extremity. The dose measurements from these imaging techniques are combined with various filtering techniques that include: no filter (open field), half-fan bowtie (HF), full-fan bowtie (FF), and Cu-plate filters. The relative surface dose for the different imaging and filtering techniques is evaluated quantitatively by the ratio of the dose relative to the Cu-plate filter. Results: The lowest surface dose is deposited with the Cu-plate filter. The highest surface dose results from open fields without a filter and is nearly a factor of 8-30 larger than the corresponding imaging technique with the Cu-plate filter. The AP-Abdomen technique delivers the largest surface dose, nearly 2.7 times larger than the AP-Head technique. The smallest surface dose is obtained from the Extremity imaging technique. Imaging with bowtie filters decreases the surface dose by nearly 33% in comparison with the open field. The surface doses deposited with the HF- and FF-bowtie filters are within a few percent of each other. Image quality of the radiographic images obtained from the different filtering techniques is similar because the Cu-plate eliminates low-energy photons. The HF- and FF-bowtie filters generate intensity gradients in the radiographs, which affects image quality in the different imaging techniques. Conclusion: Surface dose from kV imaging decreases significantly with the Cu-plate and bowtie filters compared to imaging without filters using open-field beams. The use of the Cu-plate filter does not affect image quality and may be used as the default in the different imaging techniques.

  16. Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    Low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇ · B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields has been achieved with these filter schemes.
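
    As a reference for the divergence-free property discussed above, a minimal sketch of a discrete diagnostic for monitoring div(B) on a uniform 2-D grid is given below; it assumes periodic boundaries and central differences, and it is a generic diagnostic rather than part of the filter schemes themselves.

```python
import numpy as np

def divergence_2d(bx, by, dx, dy):
    """Central-difference estimate of div(B) on a uniform 2-D grid.

    bx, by : magnetic field components sampled on the grid (2-D arrays)
    dx, dy : grid spacings; np.roll implies periodic boundary conditions
    Useful for monitoring how well a scheme keeps div(B) near zero.
    """
    dbx_dx = (np.roll(bx, -1, axis=1) - np.roll(bx, 1, axis=1)) / (2.0 * dx)
    dby_dy = (np.roll(by, -1, axis=0) - np.roll(by, 1, axis=0)) / (2.0 * dy)
    return dbx_dx + dby_dy
```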

  17. Active field control (AFC) -electro-acoustic enhancement system using acoustical feedback control

    NASA Astrophysics Data System (ADS)

    Miyazaki, Hideo; Watanabe, Takayuki; Kishinaga, Shinji; Kawakami, Fukushi

    2003-10-01

    AFC is an electro-acoustic enhancement system that uses FIR filters to optimize auditory impressions such as liveness, loudness, and spaciousness. The system has been under development at Yamaha Corporation for more than 15 years and has been installed in approximately 50 venues in Japan to date. AFC uses feedback control techniques to recreate reverberation from the physical reverberation of the room. In order to prevent coloration problems caused by the closed-loop condition, two types of time-varying control techniques are implemented in the AFC system to ensure smooth loop gain and a sufficient stability margin in the frequency characteristics: (a) EMR (electric microphone rotator), which smooths the frequency responses between microphones and speakers by changing the combinations of inputs and outputs periodically; and (b) fluctuating FIR, which smooths the frequency responses of the FIR filters and prevents coloration problems caused by fixed FIR filters by moving each FIR tap periodically along the time axis with a different phase and period. In this paper, these techniques are summarized. A block diagram of AFC using new equipment named AFC1, which has been developed at Yamaha Corporation and recently released in the US, is also presented.

  18. Desorption of micropollutant from spent carbon filters used for water purifier.

    PubMed

    Kwon, Da-Sol; Tak, So-Yeon; Lee, Jung-Eun; Kim, Moon-Kyung; Lee, Young Hwa; Han, Doo Won; Kang, Sanghyeon; Zoh, Kyung-Duk

    2017-07-01

    In this study, to examine the micropollutants accumulated in spent carbon filters used in water purifiers, a method to desorb micropollutants from the activated carbon was first developed and optimized. Then, using these optimized desorption conditions, we examined which micropollutants exist in spent carbon filters collected from houses in different regions of Korea where water purifiers were used. A total of 11 micropollutants (caffeine (CFF), acetaminophen (ACT), sulfamethazine (SMA), sulfamethoxazole (SMZ), metoprolol (MTP), carbamazepine (CBM), naproxen (NPX), bisphenol-A (BPA), ibuprofen (IBU), diclofenac (DCF), and triclocarban (TCB)) were analyzed using LC/MS-MS from the spent carbon filters. CFF, NPX, and DCF had the highest detection frequencies (>60%) in the carbon filters (n = 100), whereas SMA, SMZ, and MTP were detected only in the carbon filters and not in the tap waters (n = 25), indicating that these micropollutants, which exist below the detection limit in tap water, had accumulated in the carbon filters. The regional detection patterns showed higher levels of micropollutants, especially NPX, BPA, IBU, and DCF, in carbon filters collected in the Han River and Nakdong River basins, where large cities are located. The levels of micropollutants in the carbon filters were generally lower in regions where advanced oxidation processes (AOPs) were employed at nearby water treatment plants (WTPs), indicating that the AOP process in WTPs is quite effective in removing micropollutants. Our results suggest that desorption of micropollutants from used carbon filters can be a tool to identify micropollutants present in tap water in trace amounts or below the detection limit.

  19. On the matrix Fourier filtering problem for a class of models of nonlinear optical systems with a feedback

    NASA Astrophysics Data System (ADS)

    Razgulin, A. V.; Sazonova, S. V.

    2017-09-01

    A novel statement of the Fourier filtering problem, based on the use of matrix Fourier filters instead of conventional multiplier filters, is considered. The basic properties of matrix Fourier filtering for filters in the Hilbert-Schmidt class are established. It is proved that finite-energy solutions of the periodic initial boundary value problem for the quasi-linear functional differential diffusion equation with matrix Fourier filtering depend Lipschitz-continuously on the filter. The problem of optimal matrix Fourier filtering is formulated, and its solvability for various classes of matrix Fourier filters is proved. It is proved that the objective functional is differentiable with respect to the matrix Fourier filter, and the convergence of a version of the gradient projection method is also proved.

  20. Dual-domain point diffraction interferometer

    DOEpatents

    Naulleau, Patrick P.; Goldberg, Kenneth Alan

    2000-01-01

    A hybrid spatial/temporal-domain point diffraction interferometer (referred to as the dual-domain PS/PDI) that is capable of suppressing the scattered-reference-light noise that hinders the conventional PS/PDI is provided. The dual-domain PS/PDI combines the separate noise-suppression capabilities of the widely-used phase-shifting and Fourier-transform fringe pattern analysis methods. The dual-domain PS/PDI relies on both a more restrictive implementation of the image plane PS/PDI mask and a new analysis method to be applied to the interferograms generated and recorded by the modified PS/PDI. The more restrictive PS/PDI mask guarantees the elimination of spatial-frequency crosstalk between the signal and the scattered-light noise arising from scattered-reference-light interfering with the test beam. The new dual-domain analysis method is then used to eliminate scattered-light noise arising from both the scattered-reference-light interfering with the test beam and the scattered-reference-light interfering with the "true" pinhole-diffracted reference light. The dual-domain analysis method has also been demonstrated to provide performance enhancement when using the non-optimized standard PS/PDI design. The dual-domain PS/PDI is essentially a three-tiered filtering system composed of lowpass spatial-filtering the test-beam electric field using the more restrictive PS/PDI mask, bandpass spatial-filtering the individual interferogram irradiance frames making up the phase-shifting series, and bandpass temporal-filtering the phase-shifting series as a whole.

  1. Absolute Positioning Using The Earth’s Magnetic Anomaly Field

    DTIC Science & Technology

    2016-09-15

    many of these limitations. We present a navigation filter which uses the Earth's magnetic anomaly field as a navigation signal to aid an inertial...navigation system (INS) in an aircraft. The filter utilizes highly-accurate optically pumped cesium (OPC) magnetometers to make scalar intensity...measurements of the Earth's magnetic field and compare them to a map using a marginalized particle filter approach. The accuracy of these measurements allows

  2. Interferometers adaptations to lidars

    NASA Technical Reports Server (NTRS)

    Porteneuve, J.

    1992-01-01

    To perform daytime measurements of density and temperature by Rayleigh lidar, it is necessary to select the wavelength with a very narrow spectral system. This filter is composed of an interference filter and a Fabry-Perot etalon. The Fabry-Perot etalon is the more demanding component, and it is necessary to build specific optics around it. The image of either the entrance pupil or the field diaphragm is at infinity, and the other diaphragm is imaged onto the etalon. The optical quality of the optical system is linked to the spectral resolution of the system, which sets how far the field of view must be reduced. The resolution is given by R = λ/Δλ = 8(Fd/xD)², where x is the diameter of the field diaphragm, D the diameter of the reception mirror, F the focal length of the telescope, and d the useful diameter of the etalon. In Doppler Rayleigh lidars, the FP interferometer is the main part of the experiment, and exact spectral adaptation is the most critical problem. In the spectral adaptation of interferometers, the transmittance of the system will be acceptable only if the etalon is exactly adjusted to the wavelength of the laser. It is necessary to work with a monomode laser and to adjust the shift to the bandpass of the interferometer. We are working with an interferometer assembled by molecular optical contact. This interferometer is placed in a sealed pressure chamber.

  3. Modeling of a tilted pressure-tuned field-widened Michelson interferometer for application in high spectral resolution lidar

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Hostetler, Chris; Miller, Ian; Cook, Anthony; Hair, Jonathan

    2011-10-01

    High spectral resolution lidars (HSRLs) designed for aerosol and cloud remote sensing are increasingly being deployed on aircraft and called for on future space-based missions. The HSRL technique relies on spectral discrimination of the atmospheric backscatter signals to enable independent, unambiguous retrieval of aerosol extinction and backscatter. A compact, monolithic field-widened Michelson interferometer is being developed as the spectral discrimination filter for an HSRL system at NASA Langley Research Center. The Michelson interferometer consists of a cubic beam splitter, a solid glass arm, and an air arm. The spacer that connects the air-arm mirror to the main part of the interferometer is designed to optimize thermal compensation such that the frequency of maximum interference can be tuned with great precision to the transmitted laser wavelength. In this paper, a comprehensive radiometric model for the field-widened Michelson interferometric spectral filter is presented. The model incorporates the angular distribution and finite cross-sectional area of the light source, the reflectance of all surfaces, absorption losses, the lack of parallelism between the air arm and the solid arm, etc. The model can be used to assess the performance of the interferometer, and thus it is a useful tool to evaluate performance budgets and to set optical specifications for new designs of the same basic interferometer type.

  4. A novel background field removal method for MRI using projection onto dipole fields (PDF).

    PubMed

    Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi

    2011-11-01

    For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
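
    A schematic least-squares version of the projection idea is sketched below: the measured field inside the ROI is fitted to the fields of unit dipoles placed outside the ROI, and the fit is subtracted to leave the local field. This is a simplified caricature for illustration only; the published PDF method solves a weighted fit over the full volume, and the function and argument names are assumptions.

```python
import numpy as np

def remove_background_pdf(field_in_roi, exterior_dipole_fields):
    """Subtract the least-squares projection of the ROI field onto
    exterior-dipole fields.

    field_in_roi           : (N,) measured field values at the ROI voxels
    exterior_dipole_fields : (N, M) field of each of M exterior unit dipoles
                             evaluated at the same ROI voxels
    Returns the estimated local field (measured field minus fitted background).
    """
    coeffs, *_ = np.linalg.lstsq(exterior_dipole_fields, field_in_roi, rcond=None)
    background = exterior_dipole_fields @ coeffs
    return field_in_roi - background
```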

  5. Entanglement-assisted quantum feedback control

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; Mikami, Tomoaki

    2017-07-01

    The main advantage of quantum metrology relies on the effective use of entanglement, which indeed allows us to achieve strictly better estimation performance over the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by the controller with a coherent probe field and furthermore beats the controller with an optimized squeezed probe field.

  6. Fault Diagnosis of Rolling Bearing Based on Fast Nonlocal Means and Envelop Spectrum

    PubMed Central

    Lv, Yong; Zhu, Qinglin; Yuan, Rui

    2015-01-01

    The nonlocal means (NL-Means) method, which has been widely used in the field of image processing in recent years, effectively overcomes the limitations of the neighborhood filter and eliminates the artifact and edge problems caused by traditional image denoising methods. Although NL-Means is very popular in the field of 2D image signal processing, it has not received enough attention in the field of 1D signal processing. This paper proposes a novel approach that diagnoses rolling bearing faults based on fast NL-Means and the envelope spectrum. The parameters used for the rolling bearing signals are optimized in the proposed method, which is the key contribution of this paper. This approach is applied to the fault diagnosis of rolling bearings, and the results show its efficiency at detecting rolling bearing failures. PMID:25585105
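
    For reference, the plain (non-fast) NL-means update for a 1-D signal is sketched below: each sample becomes a weighted average of samples whose surrounding patches are similar. The parameter values are illustrative assumptions, and this is not the accelerated variant used in the paper.

```python
import numpy as np

def nl_means_1d(signal, patch_half=5, search_half=50, h=0.1):
    """Plain non-local means denoising of a 1-D signal.

    patch_half  : half-width of the comparison patch
    search_half : half-width of the search window around each sample
    h           : filtering parameter controlling how fast weights decay
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    out = x.copy()
    for i in range(patch_half, n - patch_half):
        ref = x[i - patch_half:i + patch_half + 1]
        lo = max(patch_half, i - search_half)
        hi = min(n - patch_half, i + search_half)
        weights, values = [], []
        for j in range(lo, hi):
            patch = x[j - patch_half:j + patch_half + 1]
            d2 = np.mean((ref - patch) ** 2)          # patch similarity
            weights.append(np.exp(-d2 / h ** 2))
            values.append(x[j])
        weights = np.array(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out
```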

  7. High Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) for Mass Spectrometry-Based Proteomics

    PubMed Central

    Swearingen, Kristian E.; Moritz, Robert L.

    2013-01-01

    High field asymmetric waveform ion mobility spectrometry (FAIMS) is an atmospheric pressure ion mobility technique that separates gas-phase ions by their behavior in strong and weak electric fields. FAIMS is easily interfaced with electrospray ionization and has been implemented as an additional separation mode between liquid chromatography (LC) and mass spectrometry (MS) in proteomic studies. FAIMS separation is orthogonal to both LC and MS and is used as a means of on-line fractionation to improve detection of peptides in complex samples. FAIMS improves dynamic range and concomitantly the detection limits of ions by filtering out chemical noise. FAIMS can also be used to remove interfering ion species and to select peptide charge states optimal for identification by tandem MS. Here, we review recent developments in LC-FAIMS-MS and its application to MS-based proteomics. PMID:23194268

  8. Comparison of MERV 16 and HEPA filters for cab filtration of underground mining equipment.

    PubMed

    Cecala, A B; Organiscak, J A; Noll, J D; Zimmer, J A

    2016-08-01

    Significant strides have been made in optimizing the design of filtration and pressurization systems used on the enclosed cabs of mobile mining equipment to reduce respirable dust and provide the best air quality to the equipment operators. Considering all of the advances made in this area, one aspect that still needed to be evaluated was a comparison of the efficiencies of the different filters used in these systems. As high-efficiency particulate arrestance (HEPA) filters provide the highest filtering efficiency, the general assumption would be that they would also provide the greatest level of protection to workers. Researchers for the U.S. National Institute for Occupational Safety and Health (NIOSH) speculated, based upon a previous laboratory study, that filters with minimum efficiency reporting value, or MERV rating, of 16 may be a more appropriate choice than HEPA filters in most cases for the mining industry. A study was therefore performed comparing HEPA and MERV 16 filters on two kinds of underground limestone mining equipment, a roof bolter and a face drill, to evaluate this theory. Testing showed that, at the 95-percent confidence level, there was no statistical difference between the efficiencies of the two types of filters on the two kinds of mining equipment. As the MERV 16 filters were less restrictive, provided greater airflow and cab pressurization, cost less and required less-frequent replacement than the HEPA filters, the MERV 16 filters were concluded to be the optimal choice for both the roof bolter and the face drill in this comparative-analysis case study. Another key finding of this study is the substantial improvement in the effectiveness of filtration and pressurization systems when using a final filter design.

  9. Comparison of MERV 16 and HEPA filters for cab filtration of underground mining equipment

    PubMed Central

    Cecala, A.B.; Organiscak, J.A.; Noll, J.D.; Zimmer, J.A.

    2016-01-01

    Significant strides have been made in optimizing the design of filtration and pressurization systems used on the enclosed cabs of mobile mining equipment to reduce respirable dust and provide the best air quality to the equipment operators. Considering all of the advances made in this area, one aspect that still needed to be evaluated was a comparison of the efficiencies of the different filters used in these systems. As high-efficiency particulate arrestance (HEPA) filters provide the highest filtering efficiency, the general assumption would be that they would also provide the greatest level of protection to workers. Researchers for the U.S. National Institute for Occupational Safety and Health (NIOSH) speculated, based upon a previous laboratory study, that filters with minimum efficiency reporting value, or MERV rating, of 16 may be a more appropriate choice than HEPA filters in most cases for the mining industry. A study was therefore performed comparing HEPA and MERV 16 filters on two kinds of underground limestone mining equipment, a roof bolter and a face drill, to evaluate this theory. Testing showed that, at the 95-percent confidence level, there was no statistical difference between the efficiencies of the two types of filters on the two kinds of mining equipment. As the MERV 16 filters were less restrictive, provided greater airflow and cab pressurization, cost less and required less-frequent replacement than the HEPA filters, the MERV 16 filters were concluded to be the optimal choice for both the roof bolter and the face drill in this comparative-analysis case study. Another key finding of this study is the substantial improvement in the effectiveness of filtration and pressurization systems when using a final filter design. PMID:27524838

  10. SkyMapper Filter Set: Design and Fabrication of Large-Scale Optical Filters

    NASA Astrophysics Data System (ADS)

    Bessell, Michael; Bloxham, Gabe; Schmidt, Brian; Keller, Stefan; Tisserand, Patrick; Francis, Paul

    2011-07-01

    The SkyMapper Southern Sky Survey will be conducted from Siding Spring Observatory with u, v, g, r, i, and z filters that comprise glued glass combination filters with dimensions of 309 × 309 × 15 mm. In this article we discuss the rationale for our bandpasses and physical characteristics of the filter set. The u, v, g, and z filters are entirely glass filters, which provide highly uniform bandpasses across the complete filter aperture. The i filter uses glass with a short-wave pass coating, and the r filter is a complete dielectric filter. We describe the process by which the filters were constructed, including the processes used to obtain uniform dielectric coatings and optimized narrowband antireflection coatings, as well as the technique of gluing the large glass pieces together after coating using UV transparent epoxy cement. The measured passbands, including extinction and CCD QE, are presented.

  11. Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason

    2014-01-01

    Optical ranging is a problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off of a target. Frequency-modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency-modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference-detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photon-counting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects caused by dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used. The demonstrated improvement is a factor of 1.414 over frequency-domain-based estimation. If the target-interrogating photons and the local reference field photons are costed equally, the optimal allocation of photons between these two arms is to have them equally distributed. This is different from the state of the art, in which the local field is stronger than the target return. The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection nor Fourier-domain peak detection, which are the staples of state-of-the-art systems, is optimal when a weak local oscillator is employed.
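
    The optimal processing described above (log-matched filtering followed by summation and peak detection) amounts to maximizing a Poisson log-likelihood over candidate delays. A minimal sketch follows, in which the rate function, the delay grid, and the numerical floor inside the logarithm are assumptions made for illustration.

```python
import numpy as np

def ml_delay_estimate(arrival_times, rate_fn, candidate_delays):
    """Maximum-likelihood round-trip delay from photon arrival times.

    arrival_times    : detected photon time stamps
    rate_fn          : vectorized expected detection rate lambda(t) at zero delay
    candidate_delays : grid of delays to search
    Returns the delay maximizing sum_k log lambda(t_k - tau)
    (log-matched filtering, summation, peak search).
    """
    t = np.asarray(arrival_times, dtype=float)
    loglik = [np.sum(np.log(np.maximum(rate_fn(t - tau), 1e-12)))
              for tau in candidate_delays]
    return candidate_delays[int(np.argmax(loglik))]
```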

  12. Experimental study of acoustic agglomeration and fragmentation on coal-fired ash

    NASA Astrophysics Data System (ADS)

    Shen, Guoqing; Huang, Xiaoyu; He, Chunlong; Zhang, Shiping; An, Liansuo; Wang, Liang; Chen, Yanqiao; Li, Yongsheng

    2018-02-01

    Inhalable particles, and especially fine particles, are a major component of air pollution and do great harm to the human body because of their small size and their absorption of hazardous components. However, the removal efficiency of current particle filtering devices is low. Acoustic agglomeration is considered a very effective pretreatment technique for removing particles: fine particles collide, agglomerate, and grow in the sound field, and the enlarged particles can then be removed easily by conventional particle collection devices. In this paper, the agglomeration and fragmentation of three kinds of particles with different size distributions are studied experimentally in the sound field. It is found that there exists an optimal frequency at 1200 Hz for the different particles. The agglomeration efficiency of inhalable particles increases with increasing SPL for the unimodal particles with diameters less than 10 μm. For the bimodal particles, the optimal SPLs are 115 and 120 dB, with agglomeration efficiencies of 25% and 55%. A considerable agglomeration effectiveness can only be obtained in a narrow SPL range, and it decreases significantly beyond this range because of particle fragmentation.

  13. Optimization of flow cytometric detection and cell sorting of transgenic Plasmodium parasites using interchangeable optical filters

    PubMed Central

    2012-01-01

    Background Malaria remains a major cause of morbidity and mortality worldwide. Flow cytometry-based assays that take advantage of fluorescent protein (FP)-expressing malaria parasites have proven to be valuable tools for quantification and sorting of specific subpopulations of parasite-infected red blood cells. However, identification of rare subpopulations of parasites using green fluorescent protein (GFP) labelling is complicated by the autofluorescence (AF) of red blood cells and the low signal from transgenic parasites. It has been suggested that cell sorting yield could be improved by using filters that precisely match the emission spectrum of GFP. Methods Detection of transgenic Plasmodium falciparum parasites expressing either tdTomato or GFP was performed using a flow cytometer with interchangeable optical filters. Parasitaemia was evaluated using different optical filters and, after optimization of the optics, the GFP-expressing parasites were sorted and analysed by microscopy after cytospin preparation and by imaging cytometry. Results A new approach to evaluating filter performance in flow cytometry using two-dimensional dot plots was developed. By selecting optical filters with a narrow bandpass (BP) and a maximum filter emission position close to the GFP emission maximum in the FL1 channel (510/20, 512/20 and 517/20; dichroics 502LP and 466LP), AF was markedly decreased and the signal-to-background ratio improved dramatically. Sorting of GFP-expressing parasite populations in infected red blood cells at 90 or 95% purity with these filters resulted in a 50-150% increase in yield compared to the standard filter set-up. The purity of the sorted population was confirmed using imaging cytometry and microscopy of cytospin preparations of sorted red blood cells infected with transgenic malaria parasites. Discussion Filter optimization is particularly important for applications where the FP signal and the percentage of positive events are relatively low, such as analysis of parasite-infected samples with the intention of gene-expression profiling and analysis. The approach outlined here results in a substantially improved yield of GFP-expressing parasites and requires less sorting time than standard methods. It is anticipated that this protocol will be useful for a wide range of applications involving rare events. PMID:22950515

  14. Fuzzy Logic-Based Filter for Removing Additive and Impulsive Noise from Color Images

    NASA Astrophysics Data System (ADS)

    Zhu, Yuhong; Li, Hongyang; Jiang, Huageng

    2017-12-01

    This paper presents an efficient filtering method based on fuzzy logic for adaptively removing additive and impulsive noise from color images. The proposed filter comprises two parts: noise detection and noise removal. In the detection part, the fuzzy peer group concept is applied to determine what type of noise affects each pixel of the corrupted image. In the filtering part, impulse noise is removed by a vector median filter in the CIELAB color space, and an optimal fuzzy filter is introduced to reduce the Gaussian noise; together they remove mixed Gaussian-impulse noise from color images. Experimental results on several color images prove the efficacy of the proposed fuzzy filter.
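
    The two-stage idea (noise-type detection followed by type-specific filtering) can be sketched in a few lines. The snippet below is only an illustration: it replaces the paper's fuzzy peer-group detector with a simple median-deviation test, and the window size, threshold, and Gaussian width are hypothetical parameters rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vector_median(window):
    """Return the RGB vector in the window minimising the sum of
    Euclidean distances to all other vectors (the vector median)."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2).sum(axis=1)
    return window[np.argmin(d)]

def mixed_noise_filter(img, impulse_thresh=60.0, sigma=1.0):
    """Toy two-stage filter: pixels flagged as impulses get a vector median,
    the rest get Gaussian smoothing (stand-in for the fuzzy Gaussian filter)."""
    out = gaussian_filter(img.astype(float), sigma=(sigma, sigma, 0))
    pad = np.pad(img.astype(float), ((1, 1), (1, 1), (0, 0)), mode='reflect')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3].reshape(-1, 3)
            med = np.median(win, axis=0)
            if np.linalg.norm(img[i, j] - med) > impulse_thresh:  # impulse-like pixel?
                out[i, j] = vector_median(win)
    return out
```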

  15. Digital filtering implementations for the detection of broad spectral features by direct analysis of passive Fourier transform infrared interferograms.

    PubMed

    Tarumi, Toshiyasu; Small, Gary W; Combs, Roger J; Kroutil, Robert T

    2004-04-01

    Finite impulse response (FIR) filters and finite impulse response matrix (FIRM) filters are evaluated for use in the detection of volatile organic compounds with wide spectral bands by direct analysis of interferogram data obtained from passive Fourier transform infrared (FT-IR) measurements. Short segments of filtered interferogram points are classified by support vector machines (SVMs) to implement the automated detection of heated plumes of the target analyte, ethanol. The interferograms employed in this study were acquired with a downward-looking passive FT-IR spectrometer mounted on a fixed-wing aircraft. Classifiers are trained with data collected on the ground and subsequently used for the airborne detection. The success of the automated detection depends on the effective removal of background contributions from the interferogram segments. Removing the background signature is complicated when the analyte spectral bands are broad because there is significant overlap between the interferogram representations of the analyte and background. Methods to implement the FIR and FIRM filters while excluding background contributions are explored in this work. When properly optimized, both filtering procedures provide satisfactory classification results for the airborne data. Missed detection rates of 8% or smaller for ethanol and false positive rates of at most 0.8% are realized. The optimization of filter design parameters, the starting interferogram point for filtering, and the length of the interferogram segments used in the pattern recognition is discussed.
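
    The processing chain described above, a band-pass FIR filter applied to short interferogram segments followed by an SVM classifier, can be sketched as below. The filter band, segment position and length, and SVM settings are placeholders for illustration; the paper optimizes these against the training data.

```python
import numpy as np
from scipy.signal import firwin, lfilter
from sklearn.svm import SVC

# Hypothetical band-pass FIR filter isolating the interferogram frequencies
# associated with the analyte band (band edges are illustrative, normalised units).
fs = 1.0
taps = firwin(numtaps=101, cutoff=[0.05, 0.15], pass_zero=False, fs=fs)

def make_features(interferogram, start=200, length=100):
    """Filter the interferogram and keep a short segment as the feature vector."""
    filtered = lfilter(taps, 1.0, interferogram)
    return filtered[start:start + length]

def train_detector(X_raw, y):
    """Train an SVM on filtered segments of ground-collected interferograms.
    X_raw: (n_samples, n_points) array, y: analyte present/absent labels."""
    X = np.array([make_features(x) for x in X_raw])
    return SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, y)
```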

  16. Characterization and improvement of field CD uniformity for implementation of 0.15-μm technology device using KrF stepper

    NASA Astrophysics Data System (ADS)

    Hyun, Yoon-Suk; Kim, Dong-Joo; Koh, Cha-Won; Park, Sung-Nam; Kwon, Won-Taik

    2003-06-01

    As the design rule of semiconductor devices shrinks, field CD uniformity becomes more important. For mass production of 0.15 μm technology devices using a KrF stepper with 0.63 NA, improving field CD uniformity was one of the key issues because field CD uniformity is directly related to device characteristics in some layers. We have experienced steppers that show poor illumination uniformity. With those steppers there was a large CD difference of about 10 nm between the field center and the field edges, as shown in Figure 1. Although we were using verified reticles, we could not obtain acceptable CD uniformity within a field with those steppers. Field CD uniformity depends predominantly on the illumination uniformity of the stepper and on mask quality. With these optimizations, we could control the DICD difference between field center and edge to less than 5 nm. In this paper, we characterized the dependence of field CD uniformity on the illumination systems of stepper and scanner, annular illumination uniformity at various stigma, mask CD uniformity, and several types of novel gray filter specifically developed for this purpose.

  17. Optimization of nonlinear, non-Gaussian Bayesian filtering for diagnosis and prognosis of monotonic degradation processes

    NASA Astrophysics Data System (ADS)

    Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.

    2018-05-01

    The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.

  18. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    PubMed

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In fact, when ERP recordings are filtered by an OF, the ERP topography should not be changed by the filter, and the output should still be describable by the linear transformation model. Moreover, an OF designed for a specific ERP source or component may remove noise, reduce the overlap of sources, and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone without OF, and that the OF removed some non-targeted sources and made the underdetermined model of the EEG recordings approach the determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter the recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
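
    The suggested workflow, filtering the recordings with an ERP-matched linear filter before ICA decomposition, might look like the following sketch. The Butterworth band edges and component count are assumptions for illustration and stand in for the paper's optimal filter design.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def of_then_ica(eeg, fs, band=(0.5, 12.0), n_components=10):
    """Band-pass each channel with a zero-phase linear filter matched to the ERP
    of interest, then run ICA on the filtered recordings.
    eeg: channels x samples array, fs: sampling rate in Hz."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, eeg, axis=1)        # denoise + reduce source overlap
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(filtered.T).T     # components x samples
    return sources, ica.mixing_
```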

  19. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    DOE PAGES

    Angland, P.; Haberberger, D.; Ivancic, S. T.; ...

    2017-10-30

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
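
    A generic annealing loop of the kind described, which perturbs the eight profile parameters and accepts or rejects trials based on the χ² statistic, is sketched below. The forward model that builds the synthetic AFR image is assumed to be available as the chi2 callback; the step size, temperature schedule, and iteration count are illustrative.

```python
import numpy as np

def anneal(params0, chi2, step=0.05, T0=1.0, cooling=0.97, n_iter=5000, seed=0):
    """Simulated-annealing minimization of chi2(p) over the profile parameters p.
    `chi2(p)` must return the chi-square between the synthetic AFR image built
    from p and the measured image; that forward model is assumed given."""
    rng = np.random.default_rng(seed)
    p = np.asarray(params0, dtype=float)
    c = chi2(p)
    best, c_best, T = p.copy(), c, T0
    for _ in range(n_iter):
        trial = p + step * rng.standard_normal(p.size)   # perturb all parameters
        c_trial = chi2(trial)
        # always accept downhill moves; accept uphill moves with Boltzmann probability
        if c_trial < c or rng.random() < np.exp(-(c_trial - c) / T):
            p, c = trial, c_trial
            if c < c_best:
                best, c_best = p.copy(), c
        T *= cooling                                      # cool the temperature
    return best, c_best
```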

  20. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angland, P.; Haberberger, D.; Ivancic, S. T.

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.

  1. Social and Demographic Factors Influencing Inferior Vena Cava Filter Retrieval at a Single Institution in the United States.

    PubMed

    Smith, S Christian; Shanks, Candace; Guy, Gregory; Yang, Xiangyu; Dowell, Joshua D

    2015-10-01

    Retrievable inferior vena cava filters (IVCFs) are associated with long-term adverse events that have increased interest in improving filter retrieval rates. Determining the patient social and demographic factors that influence IVCF retrieval is important to personalize patient management strategies and attain optimal patient care. We retrospectively studied 762 patients who had a filter placed at our institution between January 2011 and November 2013. Age, gender, race, cancer history, distance of residence from the retrieval institution, and insurance status were identified for each patient, and those receiving retrievable IVCFs were further evaluated for retrieval rate and time to retrieval. Of the 762 filters placed, 133 were permanent filters. Of the 629 retrievable filters placed, 406 met the inclusion criteria and were eligible for retrieval. Patients with Medicare were less likely to have their filters retrieved (p = 0.031). Older age was also associated with a lower likelihood of retrieval (p < 0.001), as was living further from the medical center (p = 0.027). Patients who were white and had Medicare were more likely than similarly insured black patients to have their filters retrieved (p = 0.024). The retrieval rate of IVCFs was most influenced by insurance status, distance from the medical center, and age. Race was statistically significant only when combined with insurance status. These results suggest that these patient groups may need closer follow-up in order to achieve optimal IVCF retrieval rates.

  2. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to the filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for grid artifact reduction based on band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
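
    The core homomorphic step, working on the logarithm of the image so that the multiplicative grid pattern becomes additive and can be notched out, is sketched below for a single known grid frequency. The notch bandwidth and the assumption that the grid runs along one image axis are simplifications, not the authors' full band-stop/low-pass design.

```python
import numpy as np

def remove_grid_lines(img, grid_freq, bandwidth=0.01):
    """Homomorphic grid-artifact suppression sketch: log-transform the image
    (multiplicative grid model becomes additive), remove a narrow band around
    the known normalised grid frequency along the sampling axis, exponentiate.
    `grid_freq` is in cycles/pixel and is assumed to be known."""
    logim = np.log(np.clip(img.astype(float), 1e-6, None))
    F = np.fft.fft(logim, axis=0)                      # 1-D spectrum along columns
    f = np.fft.fftfreq(img.shape[0])
    stop = np.abs(np.abs(f) - grid_freq) < bandwidth   # band-stop mask
    F[stop, :] = 0.0
    return np.exp(np.real(np.fft.ifft(F, axis=0)))
```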

  3. Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.

    PubMed

    Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C

    2015-03-01

    The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and the smooth part of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to classical Gaussian filtering. By proper adjustment of two tunable parameters, one can obtain more realistic deformations in the presence of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, in particular in abdominal data sets where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.
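
    The replacement of the Gaussian regularizer by a bilateral one can be illustrated with a small cross-bilateral smoother for one displacement component, where the range weights come from the image intensities rather than from the displacement field itself. The window radius and the two sigmas are the tunable parameters mentioned in the abstract; the dense double loop is for clarity only and is not the paper's implementation.

```python
import numpy as np

def joint_bilateral(disp, img, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Cross-bilateral smoothing of one displacement component: spatial Gaussian
    weights combined with range weights from the image intensities, so the field
    is smoothed within organs but not across sliding boundaries."""
    H, W = disp.shape
    out = np.zeros_like(disp, dtype=float)
    pad_d = np.pad(disp, radius, mode='edge')
    pad_i = np.pad(img.astype(float), radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial kernel
    for i in range(H):
        for j in range(W):
            patch_d = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            patch_i = pad_i[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-(patch_i - img[i, j])**2 / (2 * sigma_r**2))
            w = w_s * w_r
            out[i, j] = (w * patch_d).sum() / w.sum()
    return out
```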

  4. [Comparison of tone burst evoked auditory brainstem responses with different filter settings for referral infants after hearing screening].

    PubMed

    Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying

    2011-03-01

    Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in infants referred after hearing screening. The present study compared the thresholds of tone burst ABR with filter settings of 30 - 1500 Hz and 30 - 3000 Hz at each frequency, characterized the ABR thresholds obtained with the two filter settings and the effect on waveform judgement, and thereby aimed to select a more optimal frequency-specific ABR test parameter. Thresholds with filter settings of 30 - 1500 Hz and 30 - 3000 Hz in children aged 2 - 33 months were recorded by click and tone burst ABR. A total of 18 patients (8 male/10 female), 22 ears, were included. The thresholds of tone burst ABR with filter settings of 30 - 3000 Hz were higher than those with filter settings of 30 - 1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values were 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected for the tone-evoked ABR thresholds at the remaining frequencies. The waveform of the ABR with filter settings of 30 - 1500 Hz was smoother than that with filter settings of 30 - 3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged, small interfering waves. The filter setting of 30 - 1500 Hz may be a more optimal parameter for frequency-specific ABR to improve the accuracy of frequency-specific ABR for infants' hearing assessment.

  5. Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2004-01-01

    Adaptive low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field [divergence of B], in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields has been achieved with these filter schemes.

  6. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centers. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on the negligible ones; in addition, we can partially overcome the conventional degeneracy phenomenon and decrease the computational cost. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
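
    Two ingredients of the LOPF scheme, edge-based particle initialization and Bhattacharyya-coefficient weighting of colour-histogram candidates, are sketched below. The percentile threshold and the weighting sigma are illustrative stand-ins for the adaptive threshold and likelihood model used in the paper.

```python
import numpy as np
from scipy.ndimage import sobel

def init_particles_from_edges(frame_gray, n_particles=100, thresh=None):
    """Place particles on the strongest Sobel edge points instead of sampling
    them uniformly (threshold choice is simplified to a fixed percentile)."""
    mag = np.hypot(sobel(frame_gray, 0), sobel(frame_gray, 1))
    thresh = thresh if thresh is not None else np.percentile(mag, 99)
    ys, xs = np.nonzero(mag > thresh)
    idx = np.random.choice(len(ys), size=min(n_particles, len(ys)), replace=False)
    return np.stack([ys[idx], xs[idx]], axis=1)

def bhattacharyya_weight(hist_target, hist_candidate, sigma=0.2):
    """Weight a particle by the Bhattacharyya distance between the target colour
    histogram and the candidate histogram at the particle location."""
    rho = np.sum(np.sqrt(hist_target * hist_candidate))  # Bhattacharyya coefficient
    d = np.sqrt(max(1.0 - rho, 0.0))
    return np.exp(-d**2 / (2 * sigma**2))
```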

  7. Performance characterization of a pressure-tuned wide-angle Michelson interferometric spectral filter for high spectral resolution lidar

    NASA Astrophysics Data System (ADS)

    Seaman, Shane T.; Cook, Anthony L.; Scola, Salvatore J.; Hostetler, Chris A.; Miller, Ian; Welch, Wayne

    2015-09-01

    High Spectral Resolution Lidar (HSRL) is typically realized using an absorption filter to separate molecular returns from particulate returns. NASA Langley Research Center (LaRC) has designed and built a Pressure-Tuned Wide-Angle Michelson Interferometer (PTWAMI) as an alternate means to separate the two types of atmospheric returns. While absorption filters only work at certain wavelengths and suffer from low photon efficiency due to light absorption, an interferometric spectral filter can be designed for any wavelength and transmits nearly all incident photons. The interferometers developed at LaRC employ an air spacer in one arm, and a solid glass spacer in the other. Field widening is achieved by specific design and selection of the lengths and refractive indices of these two arms. The principal challenge in using such an interferometer as a spectral filter for HSRL aboard aircraft is that variations in glass temperature and air pressure cause changes in the interferometer's optical path difference. Therefore, a tuning mechanism is needed to actively accommodate for these changes. The pressure-tuning mechanism employed here relies on changing the pressure in an enclosed, air-filled arm of the interferometer to change the arm's optical path length. However, tuning using pressure will not adjust for tilt, mirror warpage, or thermally induced wavefront error, so the structural, thermal, and optical behavior of the device must be well understood and optimized in the design and manufacturing process. The PTWAMI has been characterized for particulate transmission ratio, wavefront error, and tilt, and shows acceptable performance for use in an HSRL instrument.

  8. Rapid Pinhole Growth in the F160BW Filter

    NASA Astrophysics Data System (ADS)

    Biretta, J.; Verner, E.

    2009-03-01

    The WFPC2 filter F160BW, also known as a Wood's filter, was designed to transmit UV emission around 150 nm and strongly block all other wavelengths. The filter has a unique construction in which a thin film of sodium metal serves as the spectral element. However, sodium is a highly unstable and reactive metal, which makes the filter susceptible to changes over time. Herein we report a rapidly growing pinhole in the filter, located in the field of view of the WF2 CCD. Observers requiring high rejection of out-of-band light (i.e. red leak) should take note of this feature and avoid the affected region of the field of view.

  9. Electron cyclotron resonance heating by magnetic filter field in a negative hydrogen ion source.

    PubMed

    Kim, June Young; Cho, Won-Hwi; Dang, Jeong-Jeung; Chung, Kyoung-Jae; Hwang, Y S

    2016-02-01

    The influence of the magnetic filter field on plasma properties in the heating region has been investigated in a planar-type inductively coupled radio-frequency (RF) H(-) ion source. Besides filtering high-energy electrons near the extraction region, the magnetic filter field is clearly observed to increase the electron temperature in the heating region in low-pressure discharges. With increasing operating pressure, the enhancement of the electron temperature in the heating region is reduced. The possibility of electron cyclotron resonance (ECR) heating in the heating region due to the stray magnetic field generated by a filter magnet located at the extraction region is examined. It is found that ECR heating by the RF wave field in the discharge region, where the strength of the axial magnetic field is approximately 4.8 G, can effectively heat low-energy electrons. Depletion of low-energy electrons in the electron energy distribution function measured in the heating region supports the occurrence of ECR heating. The present study suggests that adding an axial magnetic field as small as several G by an external electromagnet or permanent magnets can greatly increase the generation of highly ro-vibrationally excited hydrogen molecules in the heating region, thus improving the performance of H(-) ion generation in volume-produced negative hydrogen ion sources.

  10. Stochastic optimal control of non-stationary response of a single-degree-of-freedom vehicle model

    NASA Astrophysics Data System (ADS)

    Narayanan, S.; Raju, G. V.

    1990-09-01

    An active suspension system to control the non-stationary response of a single-degree-of-freedom (sdf) vehicle model with variable velocity traverse over a rough road is investigated. The suspension is optimized with respect to ride comfort and road holding, using stochastic optimal control theory. The ground excitation is modelled as a spatial homogeneous random process, being the output of a linear shaping filter to white noise. The effect of the rolling contact of the tyre is considered by an additional filter in cascade. The non-stationary response with active suspension is compared with that of a passive system.

  11. Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method.

    PubMed

    Saito, Masatoshi

    2009-08-01

    Dual-energy computed tomography (DECT) has the potential for measuring electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can be used to precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, another DECT technique involving a balanced filter method using a conventional x-ray tube is described. For the spectral optimization of DECT using balanced filters, the author calculates beam-hardening error and air kerma required to achieve a desired noise level in electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimized parameters were applied to cases with different phantom diameters ranging from 5 to 50 cm for the calculations. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over a range of diameters of the water phantom. The beam-hardening errors were 1/10 or less as compared to those obtained by conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and the tube-loading efficiency, the present DECT using balanced filters would be significantly more effective in measuring the electron density than the conventional DECT. Nevertheless, further developments of low-exposure imaging technology should be necessary as well as x-ray tubes with higher outputs to apply DECT coupled with the balanced filter method for clinical use.

  12. Workplace field testing of the pressure drop of particulate respirators using welding fumes.

    PubMed

    Cho, Hyun-Woo; Yoon, Chung-Sik

    2012-10-01

    In a previous study, we concluded that respirator testing with a sodium chloride aerosol gave a conservative estimate of filter penetration for welding fume aerosols. A rapid increase in the pressure drop (PD) of some respirators was observed as fumes accumulated on the filters. The present study evaluated particulate respirator PD based on workplace field tests. A field PD tester was designed and validated using the TSI 8130 Automatic Filter Tester, designed in compliance with National Institute for Occupational Safety and Health regulation 42 CFR part 84. Three models (two replaceable dual-type filters and one replaceable single-type filter) were evaluated against CO(2) gas arc welding on mild steel in confined booths in the workplace. Field tests were performed under four airborne concentrations (27.5, 15.4, 7.9, and 2.1 mg m(-3)). The mass concentration was measured by the gravimetric method, and the number concentration was monitored using P-Trak (Model 8525, TSI, USA). Additionally, photos and scanning electron microscopy-energy dispersive X-ray spectroscopy were used to visualize and analyze the composition of welding fumes trapped in the filters. The field PD tester showed no significant difference compared with the TSI tester. There was no significant difference in the initial PD between laboratory and field results. The PD increased as a function of fume load on the respirator filters for all tested models. The increasing PD trend differed by model, and the PD increased rapidly at high concentrations because a greater amount of fumes accumulated on the filters in a given time. The increase in PD as a function of fume load on the filters showed a similar pattern as fume load varied for a particular model, but different patterns were observed for different models. Images and elemental analyses of fumes trapped on the respirator filters showed that most welding fumes were trapped within the first layer, outer web cover, and second layer, in that order, while no fumes were observed beneath the fourth layer of the tested respirators. The current findings contribute substantially to our understanding of respirator PD in the presence of welding fumes.

  13. Maximum a posteriori resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, John A.; Jenkins, Chris; Calder, Brian

    2006-08-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
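
    For the common case in which both the data-error pdf and the kriging-conditional pdf are Gaussian, the maximum of their product has a closed form: the precision-weighted mean of the two estimates. The single-point helper below illustrates that step; the paper applies it sequentially to all data in randomized order and averages the results.

```python
import numpy as np

def map_resample(z, var_data, z_krig, var_krig):
    """MAP resampling of one noisy datum z (variance var_data) given the kriging
    prediction z_krig (variance var_krig) from proximal data: the posterior of
    the product of the two Gaussians peaks at the precision-weighted mean."""
    w_data, w_krig = 1.0 / var_data, 1.0 / var_krig
    z_map = (w_data * z + w_krig * z_krig) / (w_data + w_krig)
    var_map = 1.0 / (w_data + w_krig)
    return z_map, var_map

# example: a datum of 10.0 +/- 2.0 m depth against a kriged value of 8.5 +/- 0.5 m
print(map_resample(10.0, 2.0**2, 8.5, 0.5**2))
```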

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Fan, Q; Lei, Y

    Purpose: The In-Water-Output-Ratio (IWOR) plays a significant role in linac-based radiotherapy treatment planning, linking MUs to delivered radiation dose. For an open rectangular field, the IWOR depends on both its width and length, and changes rapidly when one of them becomes small. In this study, a universal functional form is proposed to fit the open field IWOR tables in the Varian TrueBeam representative datasets for all photon energies. Methods: A novel Generalized Mean formula is first used to estimate the Equivalent Square (ES) for a rectangular field. The formula's weighting factor and power index are determined by collapsing all data points as much as possible onto a single curve in the IWOR vs. ES plot. The result is then fitted with a novel universal function IWOR = 1 + b*Log(ES/10cm)/(ES/10cm)^c via a least-squares procedure to determine the optimal values for the parameters b and c. The maximum relative residual error in IWOR over the entire two-dimensional measurement table with field sizes between 3 cm and 40 cm is used to evaluate the quality of fit for the function. Results: The two-step fitting strategy works very well in determining the optimal parameter values for the open field IWOR of each photon energy in the Varian dataset. A relative residual error ≤0.71% is achieved for all photon energies (including Flattening-Filter-Free modes) with field sizes between 3 cm and 40 cm. The optimal parameter values change smoothly with regular photon beam quality. Conclusion: The universal functional form fits the Varian TrueBeam open field IWOR measurement tables accurately with small relative residual errors for all photon energies. Therefore, it can be an excellent choice for representing the IWOR in absolute dose and MU calculations. The functional form can also be used as a QA/commissioning tool to verify measured data quality and consistency by checking the IWOR data behavior against the function for new photon energies with arbitrary beam quality.
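
    The two-step fit described above can be reproduced with a generalized-mean equivalent square followed by a least-squares fit of the universal function. The weight and power index shown are placeholder values (with w = 0.5 and p = -1 the formula reduces to the familiar 2WL/(W+L)); the paper's optimal weight, power index, and fitted b, c come from the Varian data.

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_square(width, length, w=0.5, p=-1.0):
    """Generalized-mean equivalent square of a rectangular field (cm).
    w and p are the tunable weighting factor and power index."""
    return (w * width**p + (1 - w) * length**p) ** (1.0 / p)

def iwor_model(es, b, c):
    """Universal form IWOR = 1 + b*log(ES/10cm)/(ES/10cm)^c."""
    x = es / 10.0
    return 1.0 + b * np.log(x) / x**c

def fit_iwor(es_data, iwor_data):
    """Least-squares fit of b and c to measured (ES, IWOR) pairs (assumed given)."""
    (b, c), _ = curve_fit(iwor_model, es_data, iwor_data, p0=(0.1, 1.0))
    return b, c
```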

  15. Development and Validation of Search Filters to Identify Articles on Family Medicine in Online Medical Databases.

    PubMed

    Pols, David H J; Bramer, Wichor M; Bindels, Patrick J E; van de Laar, Floris A; Bohnen, Arthur M

    2015-01-01

    Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. © 2015 Annals of Family Medicine, Inc.

  16. Dual-energy approach to contrast-enhanced mammography using the balanced filter method: spectral optimization and preliminary phantom measurement.

    PubMed

    Saito, Masatoshi

    2007-11-01

    Dual-energy contrast agent-enhanced mammography is a technique for demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case the mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.

  17. The filter and calibration wheel for the ATHENA wide field imager

    NASA Astrophysics Data System (ADS)

    Rataj, M.; Polak, S.; Palgan, T.; Kamisiński, T.; Pilch, A.; Eder, J.; Meidinger, N.; Plattner, M.; Barbera, M.; Parodi, G.; D'Anca, Fabio

    2016-07-01

    The planned filter and calibration wheel for the Wide Field Imager (WFI) instrument on Athena is presented. With four selectable positions it provides the necessary functions, in particular a UV/VIS blocking filter for the WFI detectors and a calibration source. Challenges for the filter wheel design are the large volume and mass of the subsystem, the implementation of a robust mechanism, and the protection of the ultra-thin filter with an area of 160 mm square. This paper describes the trade-offs performed based on simulation results and describes the baseline design in detail. Reliable solutions are envisaged for the conceptual design of the filter and calibration wheel. Four different variants with different filter positions are presented. Risk mitigation and compliance with the design requirements are demonstrated.

  18. Spatial filters for high-peak-power multistage laser amplifiers.

    PubMed

    Potemkin, A K; Barmashova, T V; Kirsanov, A V; Martyanov, M A; Khazanov, E A; Shaykin, A A

    2007-07-10

    We describe spatial filters used in a Nd:glass laser with an output pulse energy up to 300 J and a pulse duration of 1 ns. This laser is designed for pumping of a chirped-pulse optical parametric amplifier. We present data required to choose the shape and diameter of a spatial filter lens, taking into account aberrations caused by spherical surfaces. Calculation of the optimal pinhole diameter is presented. Design features of the spatial filters and the procedure of their alignment are discussed in detail.

  19. Improvement of the energy resolution via an optimized digital signal processing in GERDA Phase I

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Barros, N.; Baudis, L.; Bauer, C.; Becerici-Schmidt, N.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Budjáš, D.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Vacri, A. di; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Fedorova, O.; Freund, K.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hegai, A.; Heisel, M.; Hemmer, S.; Heusser, G.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Klimenko, A.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schütz, A.-K.; Schulz, O.; Schwingenheuer, B.; Selivanenko, O.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Stepaniuk, M.; Ur, C. A.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Walter, M.; Wegmann, A.; Wester, T.; Wilsenach, H.; Wojcik, M.; Yanovich, E.; Zavarise, P.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2015-06-01

    An optimized digital shaping filter has been developed for the Gerda experiment, which searches for neutrinoless double beta decay in Ge. The Gerda Phase I energy calibration data have been reprocessed, and an average improvement of 0.3 keV in energy resolution (FWHM), corresponding to 10% at the Q-value for the double beta decay of Ge, is obtained. This is possible thanks to the enhanced low-frequency noise rejection of this Zero Area Cusp (ZAC) signal shaping filter.
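
    A zero-area cusp kernel can be constructed by adding negative parabolic lobes to a finite (sinh-shaped) cusp with a flat top, scaled so that the total filter area vanishes and low-frequency noise is rejected. The sketch below is a generic construction with illustrative parameter names and values, not the GERDA implementation.

```python
import numpy as np

def zac_filter(length, tau, flat_top, dt=1.0):
    """Build a zero-area cusp shaping kernel: sinh cusp + flat top, plus negative
    parabolic side lobes whose amplitude is chosen so the total area is zero."""
    n_side = (length - flat_top) // 2
    t = np.arange(n_side) * dt
    cusp_rise = np.sinh(t / tau)                              # rising cusp branch
    kernel = np.concatenate([cusp_rise,
                             np.full(flat_top, cusp_rise[-1]),  # flat top
                             cusp_rise[::-1]])                 # falling branch
    parab = t * (t - (n_side - 1) * dt)                        # <= 0, zero at both ends
    lobes = np.concatenate([parab, np.zeros(flat_top), parab[::-1]])
    # scale the lobes so that the filter integrates to exactly zero
    kernel = kernel + lobes * (-kernel.sum() / lobes.sum())
    return kernel

# example shape: 2000-sample kernel, cusp time constant 400 samples, 200-sample flat top
k = zac_filter(length=2000, tau=400.0, flat_top=200)
print(abs(k.sum()) < 1e-9)   # True: zero total area
```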

  20. Thermal neutron filter design for the neutron radiography facility at the LVR-15 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltes, Jaroslav; Faculty of Nuclear Sciences and Physical Engineering, CTU in Prague,; Viererbl, Ladislav

    2015-07-01

    In 2011 a decision was made to build a neutron radiography facility at one of the unused horizontal channels of the LVR-15 research reactor in Rez, Czech Republic. One of the key conditions for operating an effective radiography facility is the delivery of a high-intensity, homogeneous, collimated thermal neutron beam at the sample location. Additionally, the intensity of fast neutrons has to be kept as low as possible, as fast neutrons may damage the detectors used for neutron imaging. Because the spectrum in the empty horizontal channel roughly copies the spectrum in the reactor core, which has a high ratio of fast neutrons, neutron filter components have to be installed inside the channel in order to achieve the desired beam parameters. As the channel design does not allow the installation of complex filters and collimators, an optimal solution is neutron filters made of large single-crystal ingots of suitable material composition. Single-crystal silicon was chosen as a favorable filter material for its wide availability in sufficient dimensions. Besides its ability to reasonably lower the ratio of fast neutrons while still keeping high intensities of thermal neutrons, its large dimensions allow it to serve as shielding against gamma radiation from the reactor core. The Monte Carlo MCNP transport code was used to design the necessary filter dimensions. As the code does not provide neutron cross-section libraries for thermal neutron transport through single-crystalline silicon, these had to be created by approximating the theory of thermal neutron scattering and modifying the original cross-section data provided with the code. From a series of calculations, a filter thickness of 1 m proved sufficient to obtain a beam with the desired parameters and a low gamma background. After mounting the filter inside the channel, several measurements of the neutron field were made at the beam exit. The results confirmed the calculated values. After the successful filter installation and the series of measurements, the first test neutron radiography attempts with test samples could be carried out. (authors)

  1. Stacked, filtered multi-channel X-ray diode array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacNeil, Lawrence; Dutra, Eric; Raphaelian, Mark

    2015-08-01

    There are many types of X-ray diodes used for X-ray flux or spectroscopic measurements and for estimating the spectral shape of the VUV to soft X-ray spectrum. However, a need exists for a low-cost, robust X-ray diode for experiments in hostile environments on multiple platforms, and for experiments that involve forces that may destroy the diode(s). Since the typical proposed use required a small size with a minimal single line of sight, a parallel array could not be used. So, a stacked, filtered multi-channel X-ray diode array was developed, called the MiniXRD. To achieve significant cost savings while maintaining robustness and ease of field setup, repair, and replacement, we designed the system to be modular. The filters were manufactured in-house and cover the range from 450 eV to 5000 eV. To achieve the line-of-sight accuracy needed, we developed mounts and laser alignment techniques. We modeled and tested elements of the diode design at NSTec Livermore Operations (NSTec/LO) to determine temporal response and dynamic range, leading to diode shape and circuitry changes to optimize impedance and charge storage. The authors fielded individual and stacked systems at several national facilities as ancillary "ride-along" diagnostics to test and improve the design usability. This paper presents the MiniXRD system performance, which supports its consideration as a viable low-cost alternative for multiple-channel low-energy X-ray measurements. This diode array is currently at Technical Readiness Level (TRL) 6.

  2. A wideband UHF high-temperature superconducting filter system with a fractional bandwidth over 108%

    NASA Astrophysics Data System (ADS)

    Huang, Haibo; Wu, Yun; Wang, Jia; Bian, Yongbo; Wang, Xu; Li, Guoqiang; Zhang, Xueqiang; Li, Chunguang; Sun, Liang; He, Yusheng

    2018-07-01

    A high-temperature superconducting (HTS) bandpass filter system containing a lowpass filter, a highpass filter, and an LNA has been fabricated to meet the demands of a wideband wireless signal receiving system. The filter system has an ultimate fractional bandwidth over 108%, with a passband from 820 MHz to 2750 MHz. In addition, the filter system shows good frequency selectivity and out-of-band rejection. The 40 dB to 3 dB rectangle coefficient of the filter system is 1.4, which is better than that of an 8-pole Chebyshev filter, and the out-of-band rejection is better than 40 dB. Through systematic optimization, a return loss of better than 9.8 dB was achieved in the filter system. The system also shows advantages in design and fabrication precision.

  3. Distributed Event-Based Set-Membership Filtering for a Class of Nonlinear Systems With Sensor Saturations Over Sensor Networks.

    PubMed

    Ma, Lifeng; Wang, Zidong; Lam, Hak-Keung; Kyriakoulis, Nikos

    2017-11-01

    In this paper, the distributed set-membership filtering problem is investigated for a class of discrete time-varying systems with an event-based communication mechanism over sensor networks. The system under consideration is subject to sector-bounded nonlinearity, unknown but bounded noises, and sensor saturations. Each intelligent sensing node transmits data to its neighbors only when a certain triggering condition is violated. By means of a set of recursive matrix inequalities, sufficient conditions are derived for the existence of the desired distributed event-based filter, which is capable of confining the system state in certain ellipsoidal regions centered at the estimates. Within the established theoretical framework, two additional optimization problems are formulated: one is to seek the minimal ellipsoids (in the sense of matrix trace) for the best filtering performance, and the other is to maximize the triggering threshold so as to reduce the triggering frequency while maintaining satisfactory filtering performance. A numerically attractive chaos algorithm is employed to solve the optimization problems. Finally, an illustrative example is presented to demonstrate the effectiveness and applicability of the proposed algorithm.

  4. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
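
    The MED ingredient of the proposed MEDSS filter can be illustrated with the classic iterative scheme that searches for an FIR filter maximizing the kurtosis (minimizing the entropy) of its output. The filter length, iteration count, and regularization below are illustrative; the paper embeds this criterion in the sinusoidal-synthesis model rather than using the plain deconvolution shown here.

```python
import numpy as np

def med_filter(x, L=30, n_iter=20, eps=1e-9):
    """Wiggins-style minimum entropy deconvolution: iteratively solve for an FIR
    filter f that emphasizes impulsive structure in the filtered output y = X f."""
    N = len(x)
    X = np.zeros((N, L))
    for k in range(L):                 # convolution (design) matrix, column k = x delayed by k
        X[k:, k] = x[:N - k]
    R = X.T @ X + eps * np.eye(L)      # autocorrelation matrix of the input
    f = np.zeros(L)
    f[L // 2] = 1.0                    # start from a delayed spike
    for _ in range(n_iter):
        y = X @ f                      # current deconvolved output
        g = X.T @ (y ** 3)             # cross-correlation with y^3 (kurtosis gradient)
        f = np.linalg.solve(R, g)
        f /= np.linalg.norm(f) + eps   # keep the filter normalised
    return f, X @ f                    # filter coefficients and filtered signal
```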

  5. Identifying the Source of Large-Scale Atmospheric Variability in Jupiter

    NASA Astrophysics Data System (ADS)

    Orton, Glenn

    2011-01-01

    We propose to use the unique mid-infrared filtered imaging and spectroscopic capabilities of the Subaru COMICS instrument to determine the mechanisms associated with recent unusual rapid albedo and color transformations of several of Jupiter's bands, particularly its South Equatorial Belt (SEB), as a means to understand the coupling between its dynamics and chemistry. These observations will characterize the temperature, degree of cloud cover, and distribution of minor gases that serve as indirect tracers of vertical motions in regions that will be undergoing unusual large-scale changes in dynamics and chemistry: the SEB, as well as regions near the equator and Jupiter's North Temperate Belt. COMICS is ideal for this investigation because of its efficiency in doing both imaging and spectroscopy, its 24.5-μm filter that is unique to 8-meter-class telescopes, and its wide field of view that allows imaging of nearly all of Jupiter's disk, coupled with a high diffraction-limited angular resolution and optimal mid-infrared atmospheric transparency.

  6. Possibilities of Bragg filtering structures based on subwavelength grating guiding mechanism (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kwiecien, Pavel; Litvik, Ján.; Richter, Ivan; Ctyroký, Jirí; Cheben, Pavel

    2017-05-01

    Silicon-on-insulator (SOI), the most promising platform for advanced photonic integrated structures, employs a high refractive index contrast between the silicon "core" and the surrounding media. One of the recent new ideas in this field is the formation of subwavelength-sized (quasi)periodic structures, which behave as an effective medium with respect to the propagating light. Such structures rely on the Bloch wave propagation concept, in contrast to the standard index guiding mechanism. Soon after the invention of such subwavelength grating (SWG) waveguides, researchers concentrated on various functional elements such as couplers, crossings, mode transformers, converters, MMI couplers, polarization converters, resonators, Bragg filters, and others. Our contribution is devoted to a detailed numerical analysis and design considerations of Bragg filtering structures based on the SWG idea. Based on our previous studies, in which we showed that various 2D and "2.5D" methods cannot be applied to the proper numerical analysis, here we use two independent but similar in-house approaches based on 3D Fourier modal methods, namely aperiodic rigorous coupled wave analysis (aRCWA) and a bidirectional expansion and propagation method based on Fourier series (BEX). As recently demonstrated, SWG Bragg filters are feasible. Based on this idea, we propose, simulate, and optimize the spectral characteristics of such filters. In particular, we have investigated several possibilities for modifying the original SWG waveguides towards Bragg filtering, including, first, simple single-segment changes in position, thickness, and width, and, second, several types of Si inclusions, in terms of perturbed width and thickness (and their combinations). The leading idea was to obtain the required (e.g. sufficiently narrow) spectral characteristic while keeping the minimum size of the Si features large enough. We have found that the second approach, with single-element perturbations, can provide promising designs. Furthermore, even more complex filtering SWG structures can be considered.

  7. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is infeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter showing a new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.

  8. Optimization of a dual-energy contrast-enhanced technique for a photon-counting digital breast tomosynthesis system: I. A theoretical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carton, Ann-Katherine; Ullberg, Christer; Lindman, Karin

    2010-11-15

    Purpose: Dual-energy (DE) iodine contrast-enhanced x-ray imaging of the breast has been shown to identify cancers that would otherwise be mammographically occult. In this article, theoretical modeling was performed to obtain optimally enhanced iodine images for a photon-counting digital breast tomosynthesis (DBT) system using a DE acquisition technique. Methods: In the system examined, the breast is scanned with a multislit prepatient collimator aligned with a multidetector camera. Each detector collects a projection image at a unique angle during the scan. Low-energy (LE) and high-energy (HE) projection images are acquired simultaneously in a single scan by covering alternate collimator slits with Sn and Cu filters, respectively. Sn filters ranging from 0.08 to 0.22 mm thickness and Cu filters from 0.11 to 0.27 mm thickness were investigated. A tube voltage of 49 kV was selected. Tomographic images, hereafter referred to as DBT images, were reconstructed using a shift-and-add algorithm. Iodine-enhanced DBT images were acquired by performing a weighted logarithmic subtraction of the HE and LE DBT images. The DE technique was evaluated for 20-80 mm thick breasts. Weighting factors, w_t, that optimally cancel breast tissue were computed. Signal-difference-to-noise ratios (SDNRs) between iodine-enhanced and nonenhanced breast tissue, normalized to the square root of the mean glandular dose (MGD), were computed as a function of the fraction of the MGD allocated to the HE images. Peak SDNR/√MGD and optimal dose allocations were identified. SDNR/√MGD and dose allocations were computed for several practically feasible system configurations (i.e., determined by the number of collimator slits covered by Sn and Cu). A practical system configuration and Sn-Cu filter pair that accounts for the trade-off between SDNR, tube output, and MGD were selected. Results: w_t depends on the Sn-Cu filter combination used, as well as on the breast thickness; to optimally cancel 0% with 50% glandular breast tissue, w_t values were found to range from 0.46 to 0.72 for all breast thicknesses and Sn-Cu filter pairs studied. The optimal w_t values needed to cancel all possible breast tissue glandularities vary by less than 1% for 20 mm thick breasts and 18% for 80 mm breasts. The system configuration in which one collimator slit covered by Sn is alternated with two collimator slits covered by Cu delivers an SDNR/√MGD nearest to the peak value. A reasonable compromise is a 0.16 mm Sn-0.23 mm Cu filter pair, resulting in SDNR values between 1.64 and 0.61 and MGD between 0.70 and 0.53 mGy for 20-80 mm thick breasts at the maximum tube current. Conclusions: A DE acquisition technique for a photon-counting DBT imaging system has been developed and optimized.
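
    The iodine-enhanced image described above is obtained by a weighted logarithmic subtraction of the HE and LE reconstructions; a minimal version of that step is shown below, with w_t as the tissue-cancellation weight reported in the abstract (roughly 0.46 to 0.72 depending on filters and breast thickness). The clipping constant is an illustrative numerical safeguard, not part of the paper.

```python
import numpy as np

def iodine_image(high, low, w_t):
    """Weighted logarithmic subtraction for the dual-energy iodine image:
    DE = ln(HE) - w_t * ln(LE), with w_t chosen to cancel breast-tissue contrast.
    `high` and `low` are the HE and LE DBT image arrays."""
    return (np.log(np.clip(high, 1e-6, None))
            - w_t * np.log(np.clip(low, 1e-6, None)))
```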

  9. Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Sen, A. K.; Longman, R. W.

    2006-01-01

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
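
    A generic recursive least-squares update with an exponential forgetting factor, of the kind referred to above, can be sketched as follows; the model structure, forgetting factor, and the simple trace-threshold covariance resetting are illustrative assumptions rather than the identification scheme actually used for the RWM system.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with exponential forgetting and covariance resetting."""

    def __init__(self, n_params, lam=0.98, reset_trace=1e6):
        self.theta = np.zeros(n_params)          # parameter estimate
        self.P = np.eye(n_params) * 1e3          # parameter covariance
        self.lam = lam                           # forgetting factor (< 1 discounts old data)
        self.reset_trace = reset_trace

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain
        err = y - phi @ self.theta
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        if np.trace(self.P) > self.reset_trace:              # covariance resetting
            self.P = np.eye(len(self.theta)) * 1e3
        return self.theta

# Toy usage: track a slowly drifting 2-parameter model y = a*y_prev + b*u.
rng = np.random.default_rng(2)
rls, y_prev = ForgettingRLS(2), 0.0
for t in range(200):
    a = 0.8 + 0.0005 * t                      # slowly time-varying "growth" parameter
    u = rng.normal()
    y = a * y_prev + 0.5 * u + 0.01 * rng.normal()
    theta = rls.update([y_prev, u], y)
    y_prev = y
print("estimated [a, b]:", np.round(theta, 3))
```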

  10. Adaptive Optimal Stochastic State Feedback Control of Resistive Wall Modes in Tokamaks

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Sen, A. K.; Longman, R. W.

    2007-06-01

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with exponential forgetting factor and covariance resetting is used to identify the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.

  11. Assessing mass change trends in GRACE models

    NASA Astrophysics Data System (ADS)

    Siemes, C.; Liu, X.; Ditmar, P.; Revtova, E.; Slobbe, C.; Klees, R.; Zhao, Q.

    2009-04-01

    The DEOS Mass Transport model, release 1 (DMT-1), has recently been presented to the scientific community. The model is based on GRACE data and consists of sets of spherical harmonic coefficients to degree 120, which are estimated once per month. Currently, the DMT-1 model covers the time span from Feb. 2003 to Dec. 2006. The high spatial resolution of the model could be achieved by applying a statistically optimal Wiener-type filter, which is superior to standard filtering techniques. The optimal Wiener-type filter is a regularization-type filter which makes full use of the variance/covariance matrices of the sets of spherical harmonic coefficients. It can be shown that applying this filter is equivalent to introducing an additional set of observations: each set of spherical harmonic coefficients is assumed to be zero. The variance/covariance matrix of this information is chosen according to the signal contained within the sets of spherical harmonic coefficients, expressed in terms of equivalent water layer thickness in the spatial domain, with respect to its variations in time. It will be demonstrated that DMT-1 provides a much better localization and more realistic amplitudes than alternative filtered models. In particular, we will consider a lower maximum degree of the spherical harmonic expansion (e.g. 70), as well as standard filters like an isotropic Gaussian filter. For the sake of a fair comparison, we will use the same GRACE observations as well as the same method for the inversion of the observations to obtain the alternative filtered models. For the inversion method, we will choose the three-point range combination approach. Thus, we will compare four different models: (1) the GRACE solution with maximum degree 120, filtered by the optimal Wiener-type filter (the DMT-1 model); (2) the GRACE solution with maximum degree 120, filtered by a standard filter; (3) the GRACE solution with maximum degree 70, filtered by the optimal Wiener-type filter; and (4) the GRACE solution with maximum degree 70, filtered by a standard filter. Within the comparison, we will focus on the amplitude of long-term mass change signals with respect to spatial resolution. The challenge in recovering such signals from GRACE-based solutions results from the fact that the solutions must be filtered, and filtering always smooths not only noise but also, to some extent, signal. Since the observation density is much higher near the poles than at the equator, which is due to the orbits of the GRACE satellites, we expect that the magnitude of estimated mass change signals in polar areas is less underestimated than in equatorial areas. For this reason we will investigate trends at locations in equatorial areas as well as trends at locations in polar areas. In particular, we will investigate Lake Victoria, Lake Malawi and Lake Tanganyika, which are all located in Eastern Africa, near the equator. Furthermore, we will show trends for two locations at the south-east coast of Greenland, as well as for the Abbot Ice Shelf and Marie Byrd Land in Antarctica. For validation, we use water level variations in Lake Victoria (69000 km2), Lake Malawi (29000 km2) and Lake Tanganyika (33000 km2) as ground truth. The water level, which is measured by satellite radar altimetry, decreases by approximately 47 cm in Lake Victoria, 42 cm in Lake Malawi and 30 cm in Lake Tanganyika over the period from Feb. 2003 to Dec. 2006.
Because all three lakes are located in tropical and subtropical climates, the mass change signal will consist of large seasonal variations in addition to the trend component we are interested in. However, the amplitude of the estimated seasonal variations can also be used as an indicator of the quality of the models within the comparison. Since the lakes' areas are at the edge of the spatial resolution that GRACE data can provide, they are a good example of the advantages of high-resolution mass change models like DMT-1.
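
    The regularization view described above (filtering as the addition of zero-valued pseudo-observations weighted by a signal covariance) can be sketched in a few lines; the small random matrices below merely stand in for the actual GRACE variance/covariance matrices and spherical harmonic coefficient sets.

```python
import numpy as np

def wiener_type_filter(x_hat, noise_cov, signal_cov):
    """Filter a coefficient vector by combining it with zero-valued pseudo-observations.

    x_hat      : unconstrained spherical-harmonic coefficient estimate.
    noise_cov  : variance/covariance matrix of x_hat (from the inversion).
    signal_cov : assumed covariance of the signal itself (the pseudo-observation weight).
    """
    n_inv = np.linalg.inv(noise_cov)
    s_inv = np.linalg.inv(signal_cov)
    return np.linalg.solve(n_inv + s_inv, n_inv @ x_hat)

# Toy usage with small random matrices standing in for the real covariances.
rng = np.random.default_rng(3)
n = 10
a = rng.normal(size=(n, n))
noise_cov = a @ a.T + n * np.eye(n)            # well-conditioned SPD stand-in
signal_cov = np.diag(rng.uniform(0.5, 2.0, n))
x_hat = rng.normal(size=n)
x_filtered = wiener_type_filter(x_hat, noise_cov, signal_cov)
print(np.round(x_filtered, 3))
```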

  12. Development of and Clinical Experience with a Simple Device for Performing Intraoperative Fluorescein Fluorescence Cerebral Angiography: Technical Notes.

    PubMed

    Ichikawa, Tsuyoshi; Suzuki, Kyouichi; Watanabe, Yoichi; Sato, Taku; Sakuma, Jun; Saito, Kiyoshi

    2016-01-01

    To perform intraoperative fluorescence angiography (FAG) under a microscope without an integrated FAG function, at reasonable cost and with sufficient quality for evaluation, we made a small and easy-to-use device for fluorescein FAG (the FAG filter). We investigated the practical use of this FAG filter during aneurysm surgery, revascularization surgery, and brain tumor surgery. The FAG filter consists of two types of filters: an excitatory filter and a barrier filter. The excitatory filter excludes all wavelengths except blue light, and the barrier filter passes longer wavelengths while blocking blue light. By adding this FAG filter to a microscope without an integrated FAG function, the light from the microscope illuminating the surgical field becomes blue, which is blocked by the barrier filter. The FAG filter was placed in position on the objective lens of the operating microscope, and fluorescein sodium was injected intravenously or intra-arterially. Fluorescence (green light) from vessels in the surgical field and from the dyed tumor was clearly observed through the microscope and recorded by a memory device. This method was easy and could be performed in a short time (about 10 seconds). Blood flow in small vessels deep in the surgical field could be observed, and blood flow stagnation could be evaluated. However, images from this method were inferior to those obtained by currently commercially available microscopes with an integrated FAG function. In brain tumor surgery, a stained tumor on the brain surface could be observed using this method. FAG could easily be performed with a microscope without an integrated FAG function using only this FAG filter.

  13. Development of and Clinical Experience with a Simple Device for Performing Intraoperative Fluorescein Fluorescence Cerebral Angiography: Technical Notes

    PubMed Central

    ICHIKAWA, Tsuyoshi; SUZUKI, Kyouichi; WATANABE, Yoichi; SATO, Taku; SAKUMA, Jun; SAITO, Kiyoshi

    2016-01-01

    To perform intraoperative fluorescence angiography (FAG) under a microscope without an integrated FAG function, at reasonable cost and with sufficient quality for evaluation, we made a small and easy-to-use device for fluorescein FAG (the FAG filter). We investigated the practical use of this FAG filter during aneurysm surgery, revascularization surgery, and brain tumor surgery. The FAG filter consists of two types of filters: an excitatory filter and a barrier filter. The excitatory filter excludes all wavelengths except blue light, and the barrier filter passes longer wavelengths while blocking blue light. By adding this FAG filter to a microscope without an integrated FAG function, the light from the microscope illuminating the surgical field becomes blue, which is blocked by the barrier filter. The FAG filter was placed in position on the objective lens of the operating microscope, and fluorescein sodium was injected intravenously or intra-arterially. Fluorescence (green light) from vessels in the surgical field and from the dyed tumor was clearly observed through the microscope and recorded by a memory device. This method was easy and could be performed in a short time (about 10 seconds). Blood flow in small vessels deep in the surgical field could be observed, and blood flow stagnation could be evaluated. However, images from this method were inferior to those obtained by currently commercially available microscopes with an integrated FAG function. In brain tumor surgery, a stained tumor on the brain surface could be observed using this method. FAG could easily be performed with a microscope without an integrated FAG function using only this FAG filter. PMID:26597335

  14. Suitability of adsorption isotherms for predicting the retention capacity of active slag filters removing phosphorus from wastewater.

    PubMed

    Pratt, C; Shilton, A

    2009-01-01

    Active slag filters are an emerging technology for removing phosphorus (P) from wastewater. A number of researchers have suggested that adsorption isotherms are a useful tool for predicting P retention capacity. However, to date the appropriateness of using isotherms for slag filter design remains unverified due to the absence of benchmark data from a full-scale, field filter operated to exhaustion. This investigation compared the isotherm-predicted P retention capacity of a melter slag with the P adsorption capacity determined from a full-scale, melter slag filter which had reached exhaustion after five years of successfully removing P from waste stabilization pond effluent. Results from the standard laboratory batch test showed that P adsorption correlated more strongly with the Freundlich Isotherm (R(2)=0.97, P<0.01) than the Langmuir Isotherm, a similar finding to previous studies. However, at a P concentration of 10 mg/L, typical of domestic effluent, the Freundlich equation predicted a retention capacity of 0.014 gP/kg slag; markedly lower than the 1.23 gP/kg slag adsorbed by the field filter. Clearly, the result generated by the isotherm bears no resemblance to actual field capacity. Scanning electron microscopy analysis revealed porous, reactive secondary minerals on the slag granule surfaces from the field filter which were likely created by weathering. This slow weathering effect, which generates substantial new adsorption sites, is not accounted for by adsorption isotherms rendering them ineffective in slag filter design.
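
    For reference, the two isotherms compared above have the standard forms shown below, where q is the adsorbed phosphorus per unit mass of slag, C the equilibrium phosphorus concentration, and K_F, n, q_max and K_L are fitted constants; the batch-test prediction quoted above corresponds to evaluating the fitted Freundlich expression at the design concentration (e.g. C = 10 mg/L).

```latex
q_{\mathrm{Freundlich}} = K_F\,C^{1/n}
\qquad
q_{\mathrm{Langmuir}} = \frac{q_{\max}\,K_L\,C}{1 + K_L\,C}
```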

  15. Sensitivity Studies and Experimental Evaluation for Optimizing Transcurium Isotope Production

    DOE PAGES

    Hogle, Susan L.; Alexander, Charles W.; Burns, Jonathan D.; ...

    2017-03-01

    This work applies to recent initiatives at the Radiochemical Engineering Development Center at Oak Ridge National Laboratory to optimize the production of transcurium isotopes in the High Flux Isotope Reactor in such a way as to prolong the use of high-quality heavy curium feedstock. By studying the sensitivity of fission and transmutation reaction rates to the neutron flux spectrum, a means of increasing the fraction of (n,γ) reactions per neutron absorption is explored. Filter materials that preferentially absorb neutrons at energies considered detrimental to optimal transcurium production are identified, and transmutation rates are examined with high energy resolution. Experimental capsules are irradiated employing filter materials, and the resulting fission and activation products are studied to validate the filtering methodology. Improvement is seen in the production efficiency of heavier curium isotopes in 244Cm and 245Cm targets, and potentially in production of 252Cf from mixed californium targets. Finally, further analysis is recommended to evaluate longer-duration irradiations more representative of typical transcurium production.

  16. Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming

    NASA Astrophysics Data System (ADS)

    Pawlowski, P. R.; Polydoros, A.

    A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization is then choosing thresholds to maximize performance based upon the receiver's degree of knowledge about the jammer ('side-information'). Four levels of side-information are considered, ranging from none to complete. The latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules which aid threshold selection. Performance results, which show that optimum distributions for the jammer power over the total FH bandwidth exist, are presented.

  17. Optimal causal filtering for 1/fα-type noise in single-electrode EEG signals.

    PubMed

    Paris, Alan; Atia, George; Vosoughi, Azadeh; Berman, Stephen A

    2016-08-01

    Understanding the mode of generation and the statistical structure of neurological noise is one of the central problems of biomedical signal processing. We have developed a broad class of abstract biological noise sources we call hidden simplicial tissues. In the simplest cases, such tissue emits what we have named generalized van der Ziel-McWhorter (GVZM) noise which has a roughly 1/fα spectral roll-off. Our previous work focused on the statistical structure of GVZM frequency spectra. However, causality of processing operations (i.e., dependence only on the past) is an essential requirement for real-time applications to seizure detection and brain-computer interfacing. In this paper we outline the theoretical background for optimal causal time-domain filtering of deterministic signals embedded in GVZM noise. We present some of our early findings concerning the optimal filtering of EEG signals for the detection of steady-state visual evoked potential (SSVEP) responses and indicate the next steps in our ongoing research.

  18. Validity of the Catapult ClearSky T6 Local Positioning System for Team Sports Specific Drills, in Indoor Conditions

    PubMed Central

    Luteberget, Live S.; Spencer, Matt; Gilgien, Matthias

    2018-01-01

    Aim: The aim of the present study was to determine the validity of position, distance traveled and instantaneous speed of team sport players as measured by a commercially available local positioning system (LPS) during indoor use. In addition, the study investigated how the placement of the field of play relative to the anchor nodes and walls of the building affected the validity of the system. Method: The LPS (Catapult ClearSky T6, Catapult Sports, Australia) and the reference system [Qualisys Oqus, Qualisys AB, Sweden (infra-red camera system)] were installed around the field of play to capture the athletes' motion. Athletes completed five tasks, all designed to imitate team-sports movements. The same protocol was completed in two sessions, once with an assumed optimal geometrical setup of the LPS (optimal condition), and once with a sub-optimal geometrical setup of the LPS (sub-optimal condition). Raw two-dimensional position data were extracted from both the LPS and the reference system for accuracy assessment. Position, distance and speed were compared. Results: The mean difference between the LPS and reference system for all position estimations was 0.21 ± 0.13 m (n = 30,166) in the optimal setup, and 1.79 ± 7.61 m (n = 22,799) in the sub-optimal setup. The average difference in distance was below 2% for all tasks in the optimal condition, while it was below 30% in the sub-optimal condition. Instantaneous speed showed the largest differences between the LPS and reference system of all variables, both in the optimal (≥35%) and sub-optimal condition (≥74%). The differences between the LPS and reference system in instantaneous speed were speed dependent, showing increased differences with increasing speed. Discussion: Measures of position, distance, and average speed from the LPS show low errors, and can be used confidently in time-motion analyses for indoor team sports. The calculation of instantaneous speed from LPS raw data is not valid. To improve the instantaneous speed calculation, the application of appropriate filtering techniques to enhance the validity of such data should be investigated. For all measures, the placement of anchor nodes and the field of play relative to the walls of the building influence LPS output to a large degree. PMID:29670530

  19. SU-E-T-299: Dosimetric Characterization of Small Field in Small Animal Irradiator with Radiochromic Films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, S; Kim, K; Jung, H

    Purpose: The small animal irradiator has been used with small animals to optimize new radiation therapy approaches in preclinical studies. The small animal is irradiated by whole- or partial-body exposure. In this study, the dosimetric characterization of a small animal irradiator was carried out for small fields using radiochromic films. Material & Methods: The study was performed on a commercial animal irradiator (XRAD-320, Precision X-Ray Inc., North Branford) with radiochromic films (EBT2, Ashland Inc., Covington). The calibration curve was generated between delivered dose and optical density (red channel), and the films were scanned by an Epson 1000XL scanner (Epson America Inc., Long Beach, CA). We evaluated the dosimetric characteristics of the irradiator at 260 kV using the various filters supplied by the manufacturer. The filters were F1 (2.0 mm aluminum; HVL = about 1.0 mm Cu) and F2 (0.75 mm tin + 0.25 mm copper + 1.5 mm aluminum; HVL = about 3.7 mm Cu). For each collimator size (3, 5, 7, 10 mm), we calculated the percentage depth dose (PDD); the source-to-surface distance (SSD) was 17.3 cm, chosen in consideration of the dose rate. Results: The films were irradiated at 260 kV and 10 mA, and the exposure time was increased in 5 s intervals from 5 s to 120 s. The calibration curve of the films was fitted with a cubic function. The correlation between optical density and dose was Y = 0.1405X^3 - 2.916X^2 + 25.566X + 2.238 (R^2 = 0.994). Based on the calibration curve, we calculated the PDD for the various filters depending on collimator size. When the PDD at a specific depth (3 mm, chosen in consideration of animal size) was compared, the difference across collimator sizes was 4.50% with no filter, 1.53% with F1, and within 2.17% with F2. Conclusion: We calculated PDD curves for the small animal irradiator depending on the collimator size and the kind of filter using the radiochromic films. The various PDD curves were acquired, making it possible to deliver a range of doses using these curves.
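
    A minimal sketch of the arithmetic implied by the reported cubic calibration is given below; it assumes the fit maps net optical density X (red channel) to dose Y in the units used above, which is an interpretation of the abstract rather than a statement of the authors' implementation.

```python
def dose_from_optical_density(od):
    """Cubic calibration reported above: Y = 0.1405*X**3 - 2.916*X**2 + 25.566*X + 2.238."""
    return 0.1405 * od**3 - 2.916 * od**2 + 25.566 * od + 2.238

# Example: evaluate the fit over a range of optical densities.
for od in (0.2, 0.5, 1.0, 1.5):
    print(f"OD = {od:.1f} -> dose = {dose_from_optical_density(od):.2f}")
```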

  20. Examining responses of ecosystem carbon exchange to environmental changes using particle filtering method

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.

    2017-12-01

    Attention has been paid to agricultural fields because their ecosystem carbon exchange can be regulated through water management and residue treatments. However, little is known about the dynamic responses of these ecosystems to environmental changes. In this study we examined the responses of ecosystem carbon exchange in paddy fields, where CO2 emissions due to microbial decomposition of organic matter are suppressed and CH4 is instead emitted under flooded conditions during the rice growing season, followed by CO2 emission during the fallow season after harvest. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics the rice plant growth processes, including the formation of reproductive organs as well as leaf expansion. The soil decomposition sub-model simulates the decomposition process of soil organic carbon. By assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI and the final yield into the model, the parameters were calibrated using a stochastic optimization algorithm based on a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions allow us to obtain time changes in the parameters governing crop production as well as carbon exchange. In this study, we focused on the parameters related to crop production as well as soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in the subsequent years. The temperature sensitivities (Q10) of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the non-cultivation period and 2.9 for the cultivation period (submerged soil conditions in the flooding season). This suggests that the response of ecosystem carbon exchange differs because the SOC decomposition process is sensitive to environmental variation during the paddy rice cultivation period.
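
    A compact bootstrap particle filter of the generic kind referred to above is sketched below, tracking a single slowly varying parameter from noisy observations; the random-walk state transition and Gaussian observation model are illustrative placeholders, not the rice-growth or soil-decomposition sub-models.

```python
import numpy as np

rng = np.random.default_rng(4)

def bootstrap_particle_filter(observations, n_particles=500,
                              process_sd=0.05, obs_sd=0.2):
    """Track a slowly varying scalar parameter theta_t from noisy observations y_t."""
    particles = rng.normal(0.0, 1.0, n_particles)       # initial ensemble
    estimates = []
    for y in observations:
        # Prediction: random-walk evolution of the parameter.
        particles = particles + rng.normal(0.0, process_sd, n_particles)
        # Update: weight particles by the Gaussian likelihood of the observation.
        weights = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2) + 1e-300
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Systematic resampling keeps the ensemble from degenerating.
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                         n_particles - 1)
        particles = particles[idx]
    return np.array(estimates)

# Toy usage: the true parameter drifts upward, mimicking a seasonally changing rate.
true_theta = np.linspace(0.5, 1.5, 100)
obs = true_theta + rng.normal(0.0, 0.2, 100)
est = bootstrap_particle_filter(obs)
print("final estimate:", round(float(est[-1]), 2), "true value:", 1.5)
```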

  1. Preliminary optical design of PANIC, a wide-field infrared camera for CAHA

    NASA Astrophysics Data System (ADS)

    Cárdenas, M. C.; Rodríguez Gómez, J.; Lenzen, R.; Sánchez-Blanco, E.

    2008-07-01

    In this paper, we present the preliminary optical design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Calar Alto 2.2 m telescope. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. A mosaic of four Teledyne Hawaii-2RG 2k x 2k detectors is used, giving a field of view of 31.9 arcmin x 31.9 arcmin. This cryogenic instrument has been optimized for the Y, J, H and K bands. Special care has been taken in the selection of the standard IR materials used for the optics in order to maximize the instrument throughput and to include the z band. The main challenges of this design are: to produce a well-defined internal pupil that allows the thermal background to be reduced by a cryogenic pupil stop; the correction of off-axis aberrations due to the large field available; the correction of chromatic aberration because of the wide spectral coverage; and the capability of introducing narrow-band filters (~1%) into the system while minimizing the degradation of the filter passband, without a collimated stage in the camera. We show the optomechanical error budget and compensation strategy that allow our as-built design to meet the required optical performance. Finally, we demonstrate the flexibility of the design by showing the performance of PANIC at the CAHA 3.5 m telescope.

  2. Social and Demographic Factors Influencing Inferior Vena Cava Filter Retrieval at a Single Institution in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, S. Christian, E-mail: csmith@aemrc.arizona.edu; Shanks, Candace, E-mail: Candace.Shanks@osumc.edu; Guy, Gregory, E-mail: Gregory.Guy@osumc.edu

    Purpose: Retrievable inferior vena cava filters (IVCFs) are associated with long-term adverse events that have increased interest in improving filter retrieval rates. Determining the influential patient social and demographic factors affecting IVCF retrieval is important to personalize patient management strategies and attain optimal patient care. Materials and Methods: Seven hundred and sixty-two patients who had a filter placed at our institution between January 2011 and November 2013 were retrospectively studied. Age, gender, race, cancer history, distance to residence from the retrieval institution, and insurance status were identified for each patient, and those receiving retrievable IVCFs were further evaluated for retrieval rate and time to retrieval. Results: Of the 762 filters placed, 133 were permanent filters. Of the 629 retrievable filters placed, 406 met the inclusion criteria and were eligible for retrieval. Results revealed patients with Medicare were less likely to have their filters retrieved (p = 0.031). Older age was also associated with a lower likelihood of retrieval (p < 0.001), as was living further from the medical center (p = 0.027). Patients who were white and had Medicare were more likely than similarly insured black patients to have their filters retrieved (p = 0.024). Conclusions: The retrieval rate of IVCFs was most influenced by insurance status, distance from the medical center, and age. Race was statistically significant only when combined with insurance status. The results of this study suggest that these patient groups may need closer follow-up in order to obtain optimal IVCF retrieval rates.

  3. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
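
    A minimal software sketch of the thread-decomposition idea for a decimating FIR filter is shown below: each output sample is produced by its own independent finite convolution (a "thread"), so only outputs at the decimated rate are ever computed. The tap values and rate change are placeholders, and the mapping of such threads onto FPGA resources is outside the scope of the sketch.

```python
import numpy as np

def decimating_fir_threads(x, taps, m):
    """Decimate-by-m FIR in which every output is an independent finite convolution.

    Each output y[k] depends only on the input window ending at sample k*m, so each
    'thread' below could run concurrently on its own hardware resources.
    """
    taps = np.asarray(taps, dtype=float)
    n_out = (len(x) - len(taps)) // m + 1
    y = np.empty(n_out)
    for k in range(n_out):                      # one thread per decimated output
        window = x[k * m : k * m + len(taps)]
        y[k] = float(window @ taps[::-1])       # finite convolution for this output only
    return y

# Toy usage: 8-tap averaging filter, decimation by 4.
rng = np.random.default_rng(5)
x = rng.normal(size=64)
taps = np.ones(8) / 8.0
print(decimating_fir_threads(x, taps, m=4))
```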

  4. Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter

    NASA Astrophysics Data System (ADS)

    Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.

    2013-02-01

    Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.

  5. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing has a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. Regarding its conventional high complexity, we devised a fast line-scan method specifically for refocusing, and its 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results are provided to show superior refocusing quality and fast computation speed. In particular, the run time is comparable with the conventional single-image blurring, which causes serious boundary artifacts.

  6. FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven

    2011-01-01

    High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, Bilinear rectification, Bilateral subtraction pre-filtering and the Sum of Absolute Difference (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system has been successfully fielded upon the Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
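
    For illustration, the Sum of Absolute Differences disparity search that the FPGA pipeline above implements in hardware can be sketched on a CPU as below; the window size, disparity range, and synthetic input shift are assumptions made for the example only.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Brute-force SAD stereo matching: for each pixel pick the disparity with minimal SAD."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y - half : y + half + 1, x - half : x + half + 1].astype(np.int32)
            best, best_d = None, 0
            for d in range(max_disp):
                patch_r = right[y - half : y + half + 1,
                                x - d - half : x - d + half + 1].astype(np.int32)
                sad = np.abs(patch_l - patch_r).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Toy usage on small random images (a real pipeline would rectify and pre-filter first).
rng = np.random.default_rng(6)
left = rng.integers(0, 255, (32, 48), dtype=np.uint8)
right = np.roll(left, -3, axis=1)              # synthetic 3-pixel disparity
print(sad_disparity(left, right)[16, 30])      # expected disparity: 3
```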

  7. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field that combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of nearly identical blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can be completely recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to a conventional optical system, the wavefront coded imaging system obtains better quality images over a range of object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, and these are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping high optical power and resolution at the image plane.

  8. Linearly polarized GHz magnetization dynamics of spin helix modes in the ferrimagnetic insulator Cu2OSeO3.

    PubMed

    Stasinopoulos, I; Weichselbaumer, S; Bauer, A; Waizner, J; Berger, H; Garst, M; Pfleiderer, C; Grundler, D

    2017-08-01

    Linear dichroism - the polarization-dependent absorption of electromagnetic waves - is routinely exploited in applications as diverse as structure determination of DNA or polarization filters in optical technologies. Here, filamentary absorbers with a large length-to-width ratio are a prerequisite. For magnetization dynamics in the few-GHz frequency regime, strictly linear dichroism was not observed for more than eight decades. Here, we show that the bulk chiral magnet Cu2OSeO3 exhibits linearly polarized magnetization dynamics at an unexpectedly small frequency of about 2 GHz at zero magnetic field. Unlike optical filters that are assembled from filamentary absorbers, the magnet is shown to provide linear polarization as a bulk material for an extremely wide range of length-to-width ratios. In addition, the polarization plane of a given mode can be switched by 90° via a small variation in width. Our findings shed new light on magnetization dynamics in that ferrimagnetic ordering combined with antisymmetric exchange interaction offers strictly linear polarization and cross-polarized modes for a broad spectrum of sample shapes at zero field. The discovery allows for novel design rules and optimization of microwave-to-magnon transduction in emerging microwave technologies.

  9. Real-time colouring and filtering with graphics shaders

    NASA Astrophysics Data System (ADS)

    Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.

    2017-11-01

    Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).

  10. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant saving on computations can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying broadband and non-stationary sources produced by the sources.

  11. Adaptive Laplacian filtering for sensorimotor rhythm-based brain-computer interfaces.

    PubMed

    Lu, Jun; McFarland, Dennis J; Wolpaw, Jonathan R

    2013-02-01

    Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as common average reference (CAR), Laplacian (LAP) filter or common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of EEG. Here, we test the hypothesis that a new filter design, called an 'adaptive Laplacian (ALAP) filter', can provide better performance for SMR-based BCIs. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Using a variety of kinds of BCI data from a total of 22 individuals, we compare the performances of ALAP filter to CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, but equally matched to CSP. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.
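
    A minimal sketch of the Gaussian-kernel spatial filtering idea described above is given below: each channel has a Gaussian-weighted average of the other channels subtracted from it, with the kernel radius as the free parameter that ALAP tunes jointly with the ridge-regression regularizer. The electrode layout and radius here are placeholders, and the cross-validated optimization itself is omitted.

```python
import numpy as np

def gaussian_laplacian_filter(eeg, positions, radius):
    """Spatially filter EEG: subtract from each channel a Gaussian-weighted mean of its neighbours.

    eeg       : array (n_channels, n_samples)
    positions : array (n_channels, 2 or 3) electrode coordinates
    radius    : Gaussian kernel radius controlling how local the reference is
    """
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * radius ** 2))
    np.fill_diagonal(w, 0.0)                   # exclude the channel itself
    w /= w.sum(axis=1, keepdims=True)          # rows sum to one
    return eeg - w @ eeg

# Toy usage: 8 channels on a line; a small radius approximates a "small Laplacian".
rng = np.random.default_rng(7)
pos = np.column_stack([np.linspace(0, 7, 8), np.zeros(8)])
eeg = rng.normal(size=(8, 1000))
filtered = gaussian_laplacian_filter(eeg, pos, radius=1.0)
print(filtered.shape)
```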

  12. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
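
    A simplified sketch of the two-stage idea described above is given below, using scipy's generalized eigendecomposition; the "signal" and "reference" covariance windows are arbitrary stand-ins, and the second stage is reduced to a plain eigendecomposition of a time-delay-embedded covariance rather than the paper's full procedure.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)

# Simulated multichannel data: a shared sinusoidal source plus channel noise.
n_ch, n_t = 16, 2000
t = np.arange(n_t) / 250.0
source = np.sin(2 * np.pi * 6 * t)                      # 6 Hz "theta-like" component
mixing = rng.normal(size=n_ch)
data = np.outer(mixing, source) + rng.normal(size=(n_ch, n_t))

# Stage 1: spatial filter from a generalized eigendecomposition of two covariances.
signal_cov = np.cov(data[:, 500:1500])                  # window assumed to contain the effect
reference_cov = np.cov(data)                            # broadband reference covariance
evals, evecs = eigh(signal_cov, reference_cov)          # generalized eigendecomposition
spatial_filter = evecs[:, -1]                           # component maximizing signal/reference
component = spatial_filter @ data                       # one time series

# Stage 2: time-delay embedding of that component to obtain a temporal basis function.
n_delays = 50
embedded = np.array([component[i : n_t - n_delays + i] for i in range(n_delays)])
d_evals, d_evecs = eigh(np.cov(embedded))
temporal_basis = d_evecs[:, -1]                         # dominant temporal pattern
print(component.shape, temporal_basis.shape)
```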

  13. Air filters from HVAC systems as possible source of volatile organic compounds (VOC) - laboratory and field assays

    NASA Astrophysics Data System (ADS)

    Schleibinger, Hans; Rüden, Henning

    The emission of volatile organic compounds (VOC) from air filters of HVAC systems was to be evaluated. In a first study carbonyl compounds (14 aldehydes and two ketones) were measured by reacting them with 2,4-dinitrophenylhydrazine (DNPH). Analysis was done by HPLC and UV detection. In laboratory experiments pieces of used and unused HVAC filters were incubated in test chambers. Filters to be investigated were taken from a filter bank of a large HVAC system in the centre of Berlin. First results show that - among those compounds - formaldehyde and acetone were found in higher concentrations in the test chambers filled with used filters in comparison to those with unused filters. Parallel field measurements were carried out at the prefilter and main filter banks of the two HVAC systems. Here measurements were carried out simultaneously before and after the filters to investigate whether those aldehydes or ketones arise from the filter material on site. Formaldehyde and acetone significantly increased in concentration after the filters of one HVAC system. In parallel experiments microorganisms were proved to be able to survive on air filters. Therefore, a possible source of formaldehyde and acetone might be microbes.

  14. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of the reproduction error, timbral quality, and spatial quality.
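
    A single-frequency sketch of Tikhonov-regularized multichannel inversion, as mentioned above, is shown below: given a matrix of transfer functions from loudspeakers to control microphones, it computes driving signals that approximate a target pressure vector. The transfer matrix, array sizes, and regularization value are placeholders, not the configuration used in the study.

```python
import numpy as np

def regularized_inverse_filter(transfer, target, beta=1e-2):
    """Tikhonov-regularized least-squares solve at a single frequency.

    transfer : (n_mics, n_speakers) complex transfer-function matrix
    target   : (n_mics,) desired complex pressures at the microphones
    beta     : regularization parameter controlling the ill-posedness trade-off
    """
    h = transfer
    lhs = h.conj().T @ h + beta * np.eye(h.shape[1])
    return np.linalg.solve(lhs, h.conj().T @ target)

# Toy usage: 8 microphones, 12 loudspeakers (underdetermined, as in upmixing).
rng = np.random.default_rng(9)
H = rng.normal(size=(8, 12)) + 1j * rng.normal(size=(8, 12))
p_target = rng.normal(size=8) + 1j * rng.normal(size=8)
q = regularized_inverse_filter(H, p_target)
print(np.round(np.abs(H @ q - p_target).max(), 3))      # residual reproduction error
```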

  15. Stochastic simulation and robust design optimization of integrated photonic filters

    NASA Astrophysics Data System (ADS)

    Weng, Tsui-Wei; Melati, Daniele; Melloni, Andrea; Daniel, Luca

    2017-01-01

    Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%-35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  16. Application of a territorial-based filtering algorithm in turbomachinery blade design optimization

    NASA Astrophysics Data System (ADS)

    Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François

    2017-02-01

    A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.

  17. Standing Helicon Wave Induced by a Rapidly Bent Magnetic Field in Plasmas.

    PubMed

    Takahashi, Kazunori; Takayama, Sho; Komuro, Atsushi; Ando, Akira

    2016-04-01

    An electron energy probability function and a rf magnetic field are measured in a rf hydrogen helicon source, where axial and transverse static magnetic fields are applied to the source by solenoids and to the diffusion chamber by filter magnets, respectively. It is demonstrated that the helicon wave is reflected by the rapidly bent magnetic field and the resultant standing wave heats the electrons between the source and the magnetic filter, while the electron cooling effect by the magnetic filter is maintained. It is interpreted that the standing wave is generated by the presence of a spatially localized change of a refractive index.

  18. Standing Helicon Wave Induced by a Rapidly Bent Magnetic Field in Plasmas

    NASA Astrophysics Data System (ADS)

    Takahashi, Kazunori; Takayama, Sho; Komuro, Atsushi; Ando, Akira

    2016-04-01

    An electron energy probability function and a rf magnetic field are measured in a rf hydrogen helicon source, where axial and transverse static magnetic fields are applied to the source by solenoids and to the diffusion chamber by filter magnets, respectively. It is demonstrated that the helicon wave is reflected by the rapidly bent magnetic field and the resultant standing wave heats the electrons between the source and the magnetic filter, while the electron cooling effect by the magnetic filter is maintained. It is interpreted that the standing wave is generated by the presence of a spatially localized change of a refractive index.

  19. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researcher's interest in the fusion of intelligent sensors and optimal signal processing techniques.

  20. Developing a Fundamental Model for an Integrated GPS/INS State Estimation System with Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Canfield, Stephen

    1999-01-01

    This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
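
    A minimal linear Kalman filter of the generic kind described above is sketched below, fusing position-only "GPS-like" measurements with a constant-velocity process model; the state layout, noise levels, and measurement model are illustrative assumptions and not the X-33 or GPS/INS configuration.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict with the process (dynamics) model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: 1-D constant-velocity model, position-only measurements at 1 Hz.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])            # state = [position, velocity]
H = np.array([[1.0, 0.0]])                       # measure position only
Q = np.diag([1e-3, 1e-3])
R = np.array([[4.0]])

rng = np.random.default_rng(10)
x, P = np.zeros(2), np.eye(2) * 10.0
true_pos, true_vel = 0.0, 1.0
for _ in range(50):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0.0, 2.0)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print("estimated [pos, vel]:", np.round(x, 2))
```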

  1. Tunable Microwave Filter Design Using Thin-Film Ferroelectric Varactors

    NASA Astrophysics Data System (ADS)

    Haridasan, Vrinda

    Military, space, and consumer-based communication markets alike are moving towards multi-functional, multi-mode, and portable transceiver units. Ferroelectric-based tunable filter designs in RF front-ends are a relatively new area of research that provides a potential solution to support wideband and compact transceiver units. This work presents design methodologies developed to optimize a tunable filter design for system-level integration, and to improve the performance of a ferroelectric-based tunable bandpass filter. An investigative approach to find the origins of high insertion loss exhibited by these filters is also undertaken. A system-aware design guideline and figure of merit for ferroelectric-based tunable bandpass filters is developed. The guideline does not constrain the filter bandwidth as long as it falls within the range of the analog bandwidth of a system's analog-to-digital converter. A figure of merit (FOM) that optimizes filter design for a specific application is presented. It considers the worst-case filter performance parameters and a tuning sensitivity term that captures the relation between frequency tunability and the underlying material tunability. A non-tunable parasitic fringe capacitance associated with ferroelectric-based planar capacitors is confirmed by simulated and measured results. The fringe capacitance is an appreciable proportion of the tunable capacitance at frequencies of X-band and higher. As ferroelectric-based tunable capacitors form tunable resonators in the filter design, a proportionally higher fringe capacitance reduces the capacitance tunability, which in turn reduces the frequency tunability of the filter. Methods to reduce the fringe capacitance can thus increase frequency tunability or indirectly reduce the filter insertion loss by trading off the increased tunability for lower loss. A new two-pole tunable filter topology with high frequency tunability (> 30%), steep filter skirts, wide stopband rejection, and constant bandwidth is designed, simulated, fabricated and measured. The filters are fabricated using barium strontium titanate (BST) varactors. Electromagnetic simulations and measured results of the tunable two-pole ferroelectric filter are analyzed to explore the origins of high insertion loss in ferroelectric filters. The results indicate that the high permittivity of the BST (a ferroelectric) not only makes the filters tunable and compact, but also increases the conductive loss of the ferroelectric-based tunable resonators, which translates into high insertion loss in ferroelectric filters.

  2. NTilt as an improved enhanced tilt derivative filter for edge detection of potential field anomalies

    NASA Astrophysics Data System (ADS)

    Nasuti, Yasin; Nasuti, Aziz

    2018-07-01

    We develop a new phase-based filter, called NTilt, to enhance the edges of geological sources from potential-field data; it introduces vertical derivatives of the analytical signal of different orders into the tilt derivative equation. This equalizes signals from sources buried at different depths. In order to evaluate the designed filter, we compared the results obtained from our filter with those from recently applied methods, testing against both synthetic data and measured data from the Finnmark region of northern Norway. The results demonstrate that the new filter permits better definition of the edges of causative anomalies and better highlights several anomalies that are either not shown by the tilt derivative and other methods or not well defined. The proposed technique also shows improvements in delineating the actual edges of deep-seated anomalies compared to the tilt derivative and other methods. The NTilt filter provides more accurate and sharper edges, makes nearby anomalies more distinguishable, and avoids introducing additional false edges, reducing the ambiguity in potential-field interpretations. This filter thus appears promising for providing a better qualitative interpretation of gravity and magnetic data in comparison with the more commonly used filters.
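
    For context, the conventional tilt derivative that NTilt builds on is the arctangent of the vertical derivative over the total horizontal derivative of the field; the sketch below assumes the three derivative grids have already been computed (for example spectrally) and simply evaluates that angle, without the higher-order analytical-signal terms that NTilt adds.

```python
import numpy as np

def tilt_derivative(dfdx, dfdy, dfdz):
    """Conventional tilt derivative: arctan of vertical over total horizontal derivative."""
    total_horizontal = np.hypot(dfdx, dfdy)
    return np.arctan2(dfdz, total_horizontal)

# Toy usage on random grids standing in for computed field derivatives.
rng = np.random.default_rng(11)
gx, gy, gz = (rng.normal(size=(64, 64)) for _ in range(3))
tdr = tilt_derivative(gx, gy, gz)
print(float(tdr.min()), float(tdr.max()))       # values bounded in [-pi/2, pi/2]
```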

  3. Images and Spectral Performance of WFC3 Interference Filters

    NASA Technical Reports Server (NTRS)

    Quijada, Manuel A.; Boucarut, R.; Telfer, R.; Baggett, S.; Quijano, J. Kim; Allen, George; Arsenovic, Peter

    2006-01-01

    The Wide Field Camera 3 (WFC3) is a panchromatic imager that will be deployed on the Hubble Space Telescope (HST). The mission of the WFC3 is to enhance HST's imaging capability in the ultraviolet, visible and near-infrared spectral regions. Together with wavelength coverage spanning 2000 Å to 1.7 microns, the WFC3's high sensitivity, high spatial resolution, and large field of view provide the astronomer with an unprecedented set of tools for exploring all types of exciting astrophysical terrain and for addressing many key questions in astronomy today. The filter complement, which includes broad, medium, and narrow band filters, naturally reflects the diversity of astronomical programs to be targeted with WFC3. The WFC3 holds 61 UVIS filter elements, 14 IR filters, and 3 dispersive elements. During ground testing, the majority of the UVIS filters were found to exhibit excellent performance consistent with or exceeding expectations; however, a subset of filters showed considerable ghost images, some with relative intensity as high as 10-15%. Replacement filters with band-defining coatings that substantially reduce these ghost images were designed and procured. A state-of-the-art characterization setup was developed to measure the intensity of ghost images, focal shift, wedge direction, transmission uniformity, and surface features of filters that could affect uniform flat-field images. We will report on these new filter characterization methods, as well as the spectral performance measurements of the in-band transmittance and blocking.

  4. Further evaluation of the NWF filter for the purification of Plasmodium vivax-infected erythrocytes.

    PubMed

    Li, Jiangyan; Tao, Zhiyong; Li, Qian; Brashear, Awtum; Wang, Ying; Xia, Hui; Fang, Qiang; Cui, Liwang

    2017-05-17

    Isolation of Plasmodium-infected red blood cells (iRBCs) from clinical blood samples is often required for experiments, such as ex vivo drug assays, in vitro invasion assays and genome sequencing. Current methods for removing white blood cells (WBCs) from malaria-infected blood are time-consuming or costly. A prototype non-woven fabric (NWF) filter was developed for the purification of iRBCs, which showed great efficiency for removing WBCs in a pilot study. Previous work was performed with prototype filters optimized for processing 5-10 mL of blood. With the commercialization of the filters, this study aims to evaluate the efficiency and suitability of the commercial NWF filter for the purification of Plasmodium vivax-infected RBCs in smaller volumes of blood and to compare its performance with that of Plasmodipur® filters. Forty-three clinical P. vivax blood samples taken from symptomatic patients attending malaria clinics at the China-Myanmar border were processed using the NWF filters in a nearby field laboratory. The numbers of WBCs and iRBCs and morphology of P. vivax parasites in the blood samples before and after NWF filtration were compared. The viability of P. vivax parasites after filtration from 27 blood samples was examined by in vitro short-term culture. In addition, the effectiveness of the NWF filter for removing WBCs was compared with that of the Plasmodipur® filter in six P. vivax blood samples. Filtration of 1-2 mL of P. vivax-infected blood with the NWF filter removed 99.68% of WBCs. The densities of total iRBCs, ring and trophozoite stages before and after filtration were not significantly different (P > 0.05). However, the recovery rates of schizont- and gametocyte-infected RBCs, which were minor parasite stages in the clinical samples, were relatively low. After filtration, the P. vivax parasites did not show apparent morphological changes. Culture of 27 P. vivax-infected blood samples after filtration showed that parasites successfully matured into the schizont stage. The WBC removal rates and iRBC recovery rates were not significantly different between the NWF and Plasmodipur® filters (P > 0.05). When tested with 1-2 mL of P. vivax-infected blood, the NWF filter could effectively remove WBCs and the recovery rates for ring- and trophozoite-iRBCs were high. P. vivax parasites after filtration could be successfully cultured in vitro to reach maturity. The performance of the NWF and Plasmodipur® filters for removing WBCs and recovering iRBCs was comparable.

  5. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether it is online or offline. PMID:25002829

  6. Technology for satellite power conversion

    NASA Technical Reports Server (NTRS)

    Gouker, M. A.; Campbell, D. P.; Gallagher, J. J.

    1987-01-01

    Components were examined that will be needed for high frequency rectenna devices. The majority of the effort was spent on measuring the directivity and efficiency of the half-wave dipole antenna. It is felt that the antenna and diode should be roughly optimized before they are combined into a rectenna structure. An integrated low pass filter had to be added to the antenna structure in order to facilitate the field pattern measurements. A calculation was also made of the power density of the Earth's radiant energy as seen by satellites in Earth orbit. Finally, the feasibility of using a Metal-Oxide-Metal (MOM) diode for rectification of the received power was assessed.

  7. Fusion of Inertial Sensors and Orthogonal Frequency Division Multiplexed (OFDM) Signals of Opportunity for Unassisted Navigation

    DTIC Science & Technology

    2009-03-01

    P. Hwang. Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, New York, 1997. ISBN 0-471-12839-2. 4. Burr, A. "The … communication signals, the need for the reference receiver is reduced or possibly removed entirely. This research uses a Kalman Filter (KF) to optimally … 2.5 Kalman Filter … 2.5.1 State Propagation

  8. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal … algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We …

  9. Wavelet-Based Blind Superresolution from Video Sequence and in MRI

    DTIC Science & Technology

    2005-12-31

    … in Fig. 4(e) and (f), respectively. The PSNR-based optimal threshold gives better noise filtering but poor deblurring [see Fig. 4(c) and (e)] while … that ultimately produces the deblurred, noise-filtered, superresolved image. Finite support linear shift invariant blurs are reasonable to assume … [Figure 1: Multichannel Blind Superresolution Model — panels: cameras with different PSFs; deblurred and noise-filtered HR image] … condition [11] on the zeros of the …

  10. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

  11. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian-distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real data from Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
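
    The record does not give the algorithm's equations, so the following is only a minimal sketch of the standard Kalman filter recursion that the adaptive and H-infinity variants build on, using an illustrative 1-D constant-velocity model with position measurements; all matrices and noise levels are assumptions.

    ```python
    import numpy as np

    # Minimal standard Kalman filter (the baseline that adaptive and H-infinity
    # variants extend); 1-D constant-velocity model, position-only measurements.
    # All matrices and noise levels are illustrative, not from the paper.
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = 0.01 * np.eye(2)                    # process noise covariance
    R = np.array([[4.0]])                   # measurement noise covariance

    x = np.array([0.0, 1.0])                # initial state estimate
    P = np.eye(2)                           # initial estimate covariance

    rng = np.random.default_rng(0)
    truth = np.array([0.0, 1.0])
    for k in range(20):
        # simulate truth and a noisy position measurement
        truth = F @ truth
        z = H @ truth + rng.normal(scale=2.0, size=1)

        # predict
        x = F @ x
        P = F @ P @ F.T + Q

        # update
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("final estimate:", x, "truth:", truth)
    ```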

  12. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian-distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real data from Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.

  13. Methods to assess carbonaceous aerosol sampling artifacts for IMPROVE and other long-term networks.

    PubMed

    Watson, John G; Chow, Judith C; Chen, L W Antony; Frank, Neil H

    2009-08-01

    Volatile organic compounds (VOCs) and semi-volatile organic compounds (SVOCs) adsorb to quartz fiber filters during fine and coarse particulate matter (PM2.5 and PM10, respectively) sampling for thermal/optical carbon analysis that measures organic carbon (OC) and elemental carbon (EC). Particulate SVOCs can evaporate after collection, with a small portion adsorbed within the filter. Adsorbed organic gases are measured as particulate OC, so passive field blanks, backup filters, prefilter organic denuders, and regression methods have been applied to compensate for positive OC artifacts in several long-term chemical speciation networks. Average backup filter OC levels from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network were approximately 19% higher than field blank values. This difference is within the standard deviation of the average and likely results from low SVOC concentrations in the rural to remote environments of most IMPROVE sites. Backup filters from an urban (Fort Meade, MD) site showed twice the OC levels of field blanks. Sectioning backup filters from top to bottom showed nonuniform OC densities within the filter, contrary to the assumption that VOCs and SVOCs on a backup filter equal those on the front filter. This nonuniformity may be partially explained by evaporation and readsorption of vapors in different parts of the front and backup quartz fiber filter owing to temperature, relative humidity, and ambient concentration changes throughout a 24-hr sample duration. OC-PM2.5 regression analysis and organic denuder approaches demonstrate negative sampling artifact from both Teflon membrane and quartz fiber filters.
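
    As a worked illustration of the positive-artifact correction described above, the sketch below applies the common field-blank and backup-filter subtractions to organic carbon (OC) concentrations; the numbers are made up for demonstration and are not IMPROVE data.

    ```python
    # Illustrative positive-artifact correction for organic carbon (OC) on quartz
    # filters, using either a passive field blank or a backup filter behind the
    # front filter. Values are invented for demonstration, not IMPROVE data.
    front_oc    = 5.2   # ug/m^3, OC measured on the front quartz filter
    field_blank = 0.6   # ug/m^3 equivalent, passive field blank OC
    backup_oc   = 0.7   # ug/m^3, OC on a backup quartz filter behind the front

    # Field-blank approach: subtract the passive adsorption estimate
    oc_blank_corrected = front_oc - field_blank

    # Backup-filter approach: assumes adsorbed gases on the backup filter equal
    # those on the front filter (the abstract notes this assumption can fail
    # because OC is not uniformly distributed within the filter depth)
    oc_backup_corrected = front_oc - backup_oc

    print(f"blank-corrected OC : {oc_blank_corrected:.2f} ug/m^3")
    print(f"backup-corrected OC: {oc_backup_corrected:.2f} ug/m^3")
    ```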

  14. Bird community response to filter strips in Maryland

    USGS Publications Warehouse

    Blank, P.J.; Dively, G.P.; Gill, D.E.; Rewa, C.A.

    2011-01-01

    Filter strips are strips of herbaceous vegetation planted along agricultural field margins adjacent to streams or wetlands and are designed to intercept sediment, nutrients, and agrichemicals. Roughly 16,000 ha of filter strips have been established in Maryland through the United States Department of Agriculture's Conservation Reserve Enhancement Program. Filter strips often represent the only uncultivated herbaceous areas on farmland in Maryland and therefore may be important habitat for early-successional bird species. Most filter strips in Maryland are planted to either native warm-season grasses or cool-season grasses and range in width from 10.7 m to 91.4 m. From 2004 to 2007 we studied the breeding and wintering bird communities in filter strips adjacent to wooded edges and non-buffered field edges and the effect that grass type and width of filter strips had on bird community composition. We used 5 bird community metrics (total bird density, species richness, scrub-shrub bird density, grassland bird density, and total avian conservation value), species-specific densities, nest densities, and nest survival estimates to assess the habitat value of filter strips for birds. Breeding and wintering bird community metrics were greater in filter strips than in non-buffered field edges but did not differ between cool-season and warm-season grass filter strips. Most breeding bird community metrics were negatively related to the percent cover of orchardgrass (Dactylis glomerata) in ≥1 yr. Breeding bird density was greater in narrow (<60 m) filter strips. Our results suggest that narrow filter strips adjacent to wooded edges can provide habitat for many bird species but that wide filter strips provide better habitat for grassland birds, particularly obligate grassland species. If bird conservation is an objective, avoid planting orchardgrass in filter strips and reduce or eliminate orchardgrass from filter strips through management practices. Copyright © 2011 The Wildlife Society.

  15. Fluence-field modulated x-ray CT using multiple aperture devices

    NASA Astrophysics Data System (ADS)

    Stayman, J. Webster; Mathews, Aswin; Zbijewski, Wojciech; Gang, Grace; Siewerdsen, Jeffrey; Kawamoto, Satomi; Blevis, Ira; Levinson, Reuven

    2016-03-01

    We introduce a novel strategy for fluence field modulation (FFM) in x-ray CT using multiple aperture devices (MADs). MAD filters permit FFM by blocking or transmitting the x-ray beam on a fine (0.1-1 mm) scale. The filters have a number of potential advantages over other beam modulation strategies including the potential for a highly compact design, modest actuation speed and acceleration requirements, and spectrally neutral filtration due to their essentially binary action. In this work, we present the underlying MAD filtration concept including a design process to achieve a specific class of FFM patterns. A set of MAD filters is fabricated using a tungsten laser sintering process and integrated into an x-ray CT test bench. A characterization of the MAD filters is conducted and compared to traditional attenuating bowtie filters, and the ability to flatten the fluence profile for a 32 cm acrylic phantom is demonstrated. MAD-filtered tomographic data was acquired on the CT test bench and reconstructed without artifacts associated with the MAD filter. These initial studies suggest that MAD-based FFM is appropriate for integration in clinical CT systems to create patient-specific fluence field profiles and reduce radiation exposures.

  16. Collisional considerations in axial-collection plasma mass filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ochs, I. E.; Gueroult, R.; Fisch, N. J.

    The chemical inhomogeneity of nuclear waste makes chemical separations difficult, while the correlation between radioactivity and nuclear mass makes mass-based separation, and in particular plasma-based separation, an attractive alternative. Here, we examine a particular class of plasma mass filters, namely filters in which (a) species of different masses are collected along magnetic field lines at opposite ends of an open-field-line plasma device and (b) gyro-drift effects are important for the separation process. Using an idealized cylindrical model, we derive a set of dimensionless parameters which provide minimum necessary conditions for an effective mass filter function in the presence of ion-ion and ion-neutral collisions. Through simulations of constant-density-profile, turbulence-free devices, we find that these parameters accurately describe the mass filter performance in more general magnetic geometries. We then use these parameters to study the design and upgrade of current experiments, as well as to derive general scalings for the throughput of production mass filters. Most importantly, we find that ion temperatures above 3 eV and magnetic fields above 10⁴ G are critical to ensure a feasible mass filter function when operating at an ion density of 10¹³ cm⁻³.

  17. Collisional considerations in axial-collection plasma mass filters

    DOE PAGES

    Ochs, I. E.; Gueroult, R.; Fisch, N. J.; ...

    2017-04-01

    The chemical inhomogeneity of nuclear waste makes chemical separations difficult, while the correlation between radioactivity and nuclear mass makes mass-based separation, and in particular plasma-based separation, an attractive alternative. Here, we examine a particular class of plasma mass filters, namely filters in which (a) species of different masses are collected along magnetic field lines at opposite ends of an open-field-line plasma device and (b) gyro-drift effects are important for the separation process. Using an idealized cylindrical model, we derive a set of dimensionless parameters which provide minimum necessary conditions for an effective mass filter function in the presence of ion-ion and ion-neutral collisions. Through simulations of constant-density-profile, turbulence-free devices, we find that these parameters accurately describe the mass filter performance in more general magnetic geometries. We then use these parameters to study the design and upgrade of current experiments, as well as to derive general scalings for the throughput of production mass filters. Most importantly, we find that ion temperatures above 3 eV and magnetic fields above 10⁴ G are critical to ensure a feasible mass filter function when operating at an ion density of 10¹³ cm⁻³.

  18. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), as a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of ICA-based brain-computer interface (BCI) systems. In this study, we propose a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based motor imagery BCIs (MIBCIs). Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of ICA-based MIBCIs. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.
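
    The paper's trial-selection and accuracy-matrix steps are specific to that work and are not reproduced here; the sketch below is only a generic illustration of ICA used as a spatial filter for multichannel signals, using scikit-learn's FastICA on synthetic mixtures (library availability and all signals are assumptions).

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Generic illustration of ICA as a spatial filter for multichannel signals:
    # recover independent sources from linear mixtures, then keep only the
    # component of interest (a stand-in for selecting motor-related ICs).
    rng = np.random.default_rng(42)
    t = np.linspace(0, 8, 2000)

    s1 = np.sin(2 * np.pi * 10 * t)               # "mu-rhythm-like" oscillation
    s2 = np.sign(np.sin(2 * np.pi * 0.5 * t))     # slow artifact-like source
    S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

    A = rng.standard_normal((4, 2))               # mixing into 4 "channels"
    X = S @ A.T                                   # observed multichannel data

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(X)                # estimated independent components

    # Keep only component 0 and project back to channel space (spatial filtering)
    kept = np.zeros_like(sources)
    kept[:, 0] = sources[:, 0]
    X_filtered = ica.inverse_transform(kept)
    print(X_filtered.shape)                       # (2000, 4): cleaned multichannel signal
    ```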

  19. Optical single side-band Nyquist PAM-4 transmission using dual-drive MZM modulation and direct detection.

    PubMed

    Zhu, Mingyue; Zhang, Jing; Yi, Xingwen; Ying, Hao; Li, Xiang; Luo, Ming; Song, Yingxiong; Huang, Xiatao; Qiu, Kun

    2018-03-19

    We present the design and optimization of an optical single side-band (SSB) Nyquist four-level pulse amplitude modulation (PAM-4) transmission using dual-drive Mach-Zehnder modulator (DDMZM) modulation and direct detection (DD), aiming at cost-effective, high-speed and long-distance transmission in the C-band. At the transmitter, the laser linewidth should be small to avoid phase-noise-to-amplitude-noise conversion and equalization-enhanced phase noise due to the large chromatic dispersion (CD). The optical SSB signal is generated after optimizing the optical modulation index (OMI), and hence the minimum phase condition required by the Kramers-Kronig (KK) receiver can also be satisfied. At the receiver, a simple AC-coupled photodiode (PD) is used and a virtual carrier is added for the KK operation to alleviate the signal-to-signal beating interference (SSBI). A Volterra filter (VF) is cascaded to mitigate the remaining nonlinearities. When the fiber nonlinearity becomes significant, we elect to use an optical band-pass filter with offset filtering. It can suppress the stimulated Brillouin scattering and the conjugated distortion by filtering out the image frequency components. With our design and optimization, we achieve single-channel, single-polarization 102.4-Gb/s Nyquist PAM-4 transmission over 800-km standard single-mode fiber (SSMF).

  20. Kalman filter-based EM-optical sensor fusion for needle deflection estimation.

    PubMed

    Jiang, Baichuan; Gao, Wenpeng; Kacher, Daniel; Nevo, Erez; Fetics, Barry; Lee, Thomas C; Jayender, Jagadeesan

    2018-04-01

    In many clinical procedures such as cryoablation that involve needle insertion, accurate placement of the needle's tip at the desired target is the major issue for optimizing the treatment and minimizing damage to the neighboring anatomy. However, due to the interaction force between the needle and tissue, considerable error in intraoperative tracking of the needle tip can be observed as the needle deflects. In this paper, measurement data from an optical sensor at the needle base and a magnetic resonance (MR) gradient field-driven electromagnetic (EM) sensor placed 10 cm from the needle tip are used within a model-integrated Kalman filter-based sensor fusion scheme. Bending model-based estimations and EM-based direct estimation are used as the measurement vectors in the Kalman filter, thus establishing an online estimation approach. Static tip bending experiments show that the fusion method can reduce the mean error of the tip position estimation from 29.23 mm of the optical sensor-based approach to 3.15 mm of the fusion-based approach and from 39.96 to 6.90 mm, at the MRI isocenter and the MRI entrance, respectively. This work established a novel sensor fusion scheme that incorporates model information, which enables real-time tracking of needle deflection with MRI compatibility, in a free-hand operating setup.
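
    The paper's bending model and filter states are not given in this record, so the following is only a minimal sketch of the underlying idea: combining two independent, differently accurate estimates of the same tip position by inverse-variance weighting, the static single-step analogue of a Kalman measurement update. All numbers are illustrative.

    ```python
    import numpy as np

    # Fuse two independent estimates of the same quantity (a needle-tip position)
    # by inverse-variance weighting. This is the static analogue of a Kalman
    # update; the paper's bending model and filter states are not reproduced.
    tip_model = np.array([101.5, 42.0, 12.3])   # mm, bending-model-based estimate
    tip_em    = np.array([99.8, 40.9, 11.7])    # mm, EM-sensor-based estimate

    var_model = 9.0    # mm^2, assumed variance of the model-based estimate
    var_em    = 4.0    # mm^2, assumed variance of the EM-based estimate

    # Inverse-variance weights (the scalar equivalent of a Kalman gain)
    w_model = (1.0 / var_model) / (1.0 / var_model + 1.0 / var_em)
    w_em    = 1.0 - w_model

    tip_fused = w_model * tip_model + w_em * tip_em
    var_fused = 1.0 / (1.0 / var_model + 1.0 / var_em)

    print("fused tip estimate [mm]:", tip_fused)
    print("fused variance [mm^2]  :", var_fused)   # always <= the smaller input variance
    ```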

  1. Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.

    PubMed

    Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M

    2015-03-01

    Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133 keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Multispectral interference filter arrays with compensation of angular dependence or extended spectral range.

    PubMed

    Frey, Laurent; Masarotto, Lilian; Armand, Marilyn; Charles, Marie-Lyne; Lartigue, Olivier

    2015-05-04

    Thin film Fabry-Perot filter arrays with high selectivity can be realized with a single patterning step, generating a spatial modulation of the effective refractive index in the optical cavity. In this paper, we investigate the ability of this technology to address two applications in the field of image sensors. First, the spectral tuning may be used to compensate the blue-shift of the filters in oblique incidence, provided the filter array is located in an image plane of an optical system with higher field of view than aperture angle. The technique is analyzed for various types of filters and experimental evidence is shown with copper-dielectric infrared filters. Then, we propose a design of a multispectral filter array with an extended spectral range spanning the visible and near-infrared range, using a single set of materials and realizable on a single substrate.
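
    For reference, the angular blue-shift that the spectral tuning is meant to compensate follows the standard thin-film Fabry-Perot relation λ(θ) = λ₀·sqrt(1 − (sin θ / n_eff)²); the sketch below evaluates it for an illustrative effective cavity index (the value is an assumption, not taken from the paper).

    ```python
    import numpy as np

    # Standard angular blue-shift of a thin-film Fabry-Perot filter:
    #   lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)^2)
    # n_eff is the effective refractive index of the cavity; the value below is
    # illustrative, not taken from the paper.
    lambda_0 = 850.0          # nm, peak wavelength at normal incidence
    n_eff = 1.7               # assumed effective cavity index

    theta = np.deg2rad(np.array([0, 10, 20, 30]))   # angles of incidence
    lam = lambda_0 * np.sqrt(1.0 - (np.sin(theta) / n_eff) ** 2)

    for a, l in zip(np.rad2deg(theta), lam):
        print(f"{a:4.0f} deg -> peak at {l:6.1f} nm (shift {l - lambda_0:+5.1f} nm)")
    ```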

  3. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

    We present a novel method to optimize the discrimination ability and noise robustness of composite filters. This method is based on iterative preprocessing of the training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio for authentic faces and making the filter immune to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve any complicated mathematical analysis or computation, which are often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method to compute the true positive and false positive rates, in which the difference between the PCE and the threshold is involved.
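
    The PCE metric mentioned above can be computed from a correlation output plane; one common definition is the squared peak value divided by the plane's energy. The sketch below uses a plain FFT-based correlation filter as a stand-in for the paper's composite design, with synthetic patches; definitions and data are assumptions for illustration.

    ```python
    import numpy as np

    def pce(corr_plane):
        """Peak-to-correlation energy of a correlation output plane.

        One common definition: squared peak value divided by the total energy of
        the plane. (Several variants exist; this is an illustrative choice.)
        """
        peak = np.max(np.abs(corr_plane))
        energy = np.sum(np.abs(corr_plane) ** 2)
        return peak ** 2 / energy

    def correlate(a, b):
        """Circular cross-correlation via the FFT (a plain correlation filter)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    # Toy example: reference patch vs. a same-class noisy patch and an impostor
    rng = np.random.default_rng(1)
    ref = rng.standard_normal((32, 32))
    test_match = ref + 0.3 * rng.standard_normal((32, 32))     # same class + noise
    test_impostor = rng.standard_normal((32, 32))               # different class

    print("PCE authentic:", pce(correlate(test_match, ref)))
    print("PCE impostor :", pce(correlate(test_impostor, ref)))
    ```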

  4. Quantum demolition filtering and optimal control of unstable systems.

    PubMed

    Belavkin, V P

    2012-11-28

    A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one.

  5. Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.

    PubMed

    Sil Kar, Sudeshna; Maity, Santi P

    2016-09-01

    Extraction of blood vessels on retinal images plays a significant role for screening of different opthalmologic diseases. However, accurate extraction of the entire and individual type of vessel silhouette from the noisy images with poorly illuminated background is a complicated task. To this aim, an integrated system design platform is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on matched filter response. At first noise is eliminated from the image under consideration through curvelet based denoising. To include the fine details and the relatively less thick vessel structures, the image is passed through a bank of sequential bandpass filter structure optimized for contrast enhancement. Fuzzy conditional entropy on matched filter response is then maximized to find the set of multiple optimal thresholds to extract the different types of vessel silhouettes from the background. Differential Evolution algorithm is used to determine the optimal gain in bandpass filter and the combination of the fuzzy parameters. Using the multiple thresholds, retinal image is classified as the thick, the medium and the thin vessels including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels. Proposed method is also efficient in extracting the abnormal and the thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate and accuracy offered by the method is 76.32%, 1.99% and 96.28%, respectively for the DRIVE database and 72.82%, 2.6% and 96.16%, respectively for the STARE database. Simulation results demonstrate that the proposed method outperforms the existing methods in detecting the various types of vessels and the neovascularization structures. The combination of curvelet transform and tunable bandpass filter is found to be very much effective in edge enhancement whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. An estimator-predictor approach to PLL loop filter design

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Hurd, W. J.

    1986-01-01

    An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.

  7. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.

  8. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  9. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  10. TU-G-BRB-03: Iterative Optimization of Normalized Transmission Maps for IMRT Using Arbitrary Beam Profiles.

    PubMed

    Choi, K; Suh, T; Xing, L

    2012-06-01

    Newly available flattening filter free (FFF) beams increase the dose rate by 3-6 times at the central axis. In reality, even a flattening-filtered beam is not perfectly flat. In addition, the beam profiles across different fields may not have the same amplitude. The existing inverse planning formalism based on the total variation of the intensity (or fluence) map cannot account for these properties of the beam profiles. The purpose of this work is to develop a novel dose optimization scheme that incorporates the inherent beam profiles to maximally utilize the efficacy of arbitrary beam profiles while preserving the convexity of the optimization problem. To increase the accuracy of the problem formalism, we decompose the fluence map as an elementwise multiplication of the inherent beam profile and a normalized transmission map (NTM). Instead of attempting to optimize the fluence maps directly, we optimize the NTMs and beam profiles separately. A least-squares problem constrained by the total variation of the NTMs is developed to derive the optimal fluence maps that balance dose conformality and FFF beam delivery efficiency. With the resultant NTMs, we find beam profiles to renormalize the NTMs. The proposed method iteratively optimizes and renormalizes the NTMs in a closed-loop manner. The advantage of the proposed method is demonstrated using a head-neck case with flat beam profiles and a prostate case with non-flat beam profiles. The obtained NTMs achieve a more conformal dose distribution while preserving piecewise constancy compared to the existing solution. The proposed formalism has two major advantages over conventional inverse planning schemes: (1) it provides a unified framework for inverse planning with beams of arbitrary fluence profiles, including treatment with beams of mixed fluence profiles; (2) the use of total-variation constraints on NTMs allows us to optimally balance dose conformality and deliverability for a given beam configuration. This project was supported in part by grants from the National Science Foundation (0854492), National Cancer Institute (1R01 CA104205), and the Leading Foreign Research Institute Recruitment Program of the Korean Ministry of Education, Science and Technology (K20901000001-09E0100-00110). To the authors' best knowledge, there is no conflict of interest. © 2012 American Association of Physicists in Medicine.

  11. Filtrage Lineaire par Morceaux Avec Petit Bruit d’Observation (Piecewise Linear Filtering with Small Observation Noise)

    DTIC Science & Technology

    1990-11-19

    … on various examples, the behaviour of the proposed filters compared with that of the estimated process and of the optimal filter obtained approximately … Piecewise monotone filtering with small observation noise, SIAM J. Control Optim. 20, 261-285, 1989. … [10] W.H. Fleming and R.W. Rishel … Milheiro de Oliveira: Approximate filters for a discrete nonlinear filtering problem with small observation noise, INRIA report 1142, 1989.

  12. EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN.

    PubMed

    Nguyen, Dang-Manh; Peters, Jörg

    2017-01-01

    Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute.

  13. EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN*

    PubMed Central

    Nguyen, Dang-Manh; Peters, Jörg

    2017-01-01

    Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute. PMID:29081643

  14. Seeing the unseen: Complete volcano deformation fields by recursive filtering of satellite radar interferograms

    NASA Astrophysics Data System (ADS)

    Gonzalez, Pablo J.

    2017-04-01

    Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the prescribed or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods, but the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter basically needs two parameters: the size of the filter window and a parameter that sets the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter by deriving it from the local interferometric coherence level, but still requires the dimension of the filtering window to be specified. Optimal filtered phase quality usually requires careful selection of those parameters. Therefore, there is a strong need to develop automatic filtering methods suited to automatic processing, while maximizing filtered phase quality. Here, in this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (~100-200 m) and variable temporal baselines of 70 and 190 days, over variably vegetated volcanoes (Mt. Etna, Hawaii, and Nyiragongo-Nyamulagira). The differential phase of those examples shows intense localized volcano deformation as well as vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and effectively suppressing phase noise in regions of smooth phase variation. Finally, this method also has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P. (2003) A modification to the Goldstein radar interferogram filter, IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R.M., Werner, C.L. (1998) Radar interferogram filtering for geophysical applications, Geophysical Research Letters, vol. 25, no. 21, 4035-4038, doi:10.1029/1998GL900033.
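
    The recursive, cascade-kernel variant described above is specific to the abstract and is not reproduced here; the sketch below only illustrates the classical Goldstein patch filter it builds on (spectral weighting by the smoothed spectrum raised to a power alpha), with the modified-Goldstein idea of tying alpha to coherence. The toy interferogram, patch size, and coherence value are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def goldstein_patch(phase_patch, alpha):
        """Classical Goldstein filtering of one interferogram patch: weight the
        patch spectrum by the smoothed spectrum magnitude raised to alpha
        (Goldstein & Werner, 1998)."""
        spec = np.fft.fft2(np.exp(1j * phase_patch))
        weight = uniform_filter(np.abs(spec), size=3, mode="wrap")  # smoothed |spectrum|
        weight /= weight.max()
        filtered = np.fft.ifft2(spec * weight ** alpha)
        return np.angle(filtered)

    # Toy interferogram patch: smooth fringes plus phase noise
    rng = np.random.default_rng(0)
    x = np.linspace(0, 6 * np.pi, 64)
    noisy_phase = np.angle(np.exp(1j * (np.add.outer(x, 0.5 * x)
                                        + 0.8 * rng.standard_normal((64, 64)))))

    # Modified-Goldstein idea: tie the exponent to coherence, alpha = 1 - coherence
    coherence = 0.4                       # illustrative single value for the patch
    filtered = goldstein_patch(noisy_phase, alpha=1.0 - coherence)
    print(filtered.shape)                 # (64, 64) filtered phase patch
    ```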

  15. Chlorine residuals and haloacetic acid reduction in rapid sand filtration.

    PubMed

    Chuang, Yi-Hsueh; Wang, Gen-Shuch; Tung, Hsin-hsin

    2011-11-01

    It is quite rare to find biodegradation in rapid sand filtration for drinking water treatment. This might be due to frequent backwashes and low substrate levels. High chlorine concentrations may inhibit biofilm development, especially for plants with pre-chlorination. However, in tropical or subtropical regions, bioactivity on the sand surface may be quite significant due to high biofilm development, a result of year-round high temperature. The objective of this study is to explore the correlation between biodegradation and chlorine concentration in rapid sand filters, especially for water treatment plants that practise pre-chlorination. In this study, haloacetic acid (HAA) biodegradation was found in conventional rapid sand filters practising pre-chlorination. Laboratory column studies and field investigations were conducted to explore the association between the biodegradation of HAAs and chlorine concentrations. The results showed that chlorine residual was an important factor that alters bioactivity development. A model based on filter influent and effluent chlorine was developed for determining the threshold chlorine for biodegradation. From the model, a temperature-independent chlorine concentration threshold (Cl_threshold) for biodegradation was estimated at 0.46-0.5 mg L⁻¹. The results imply that conventional filters with adequate control could be conducive to bioactivity, resulting in lower HAA concentrations. Optimizing biodegradable disinfection by-product removal in a conventional rapid sand filter could be achieved with minor variation and a lower-than-Cl_threshold influent chlorine concentration. Bacteria isolation was also carried out, successfully identifying several HAA degraders. These degraders are very commonly seen in drinking water systems and can be speculated to be the main contributors of HAA loss. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  17. Silicon oxide nanoparticles doped PQ-PMMA for volume holographic imaging filters.

    PubMed

    Luo, Yuan; Russo, Juan M; Kostuk, Raymond K; Barbastathis, George

    2010-04-15

    Holographic imaging filters are required to have high Bragg selectivity, namely, narrow angular and spectral bandwidth, to obtain spatial-spectral information within a three-dimensional object. In this Letter, we present the design of holographic imaging filters formed using silicon oxide nanoparticles (nano-SiO(2)) in phenanthrenequinone-poly(methyl methacrylate) (PQ-PMMA) polymer recording material. This combination offers greater Bragg selectivity and increases the diffraction efficiency of holographic filters. The holographic filters with an optimized ratio of nano-SiO(2) in PQ-PMMA can significantly improve the Bragg selectivity and diffraction efficiency by 53% and 16%, respectively. We present experimental results and data analysis demonstrating this technique in use for holographic spatial-spectral imaging filters.

  18. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
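
    The report's exact formulation is not given in this record; the sketch below only illustrates the general idea of casting preset-equalizer design as a linear program in the time domain: choose FIR taps so that the cascade of a known channel response and the equalizer matches a desired sampled waveform with minimal worst-case (Chebyshev) error, solved here with scipy.optimize.linprog. The channel, target waveform, and sizes are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import linprog

    # Design an FIR "preset equalizer" h for a known channel impulse response c
    # so that the cascade c*h approximates a desired target response d, while
    # minimizing the worst-case time-domain error. This is a linear program in
    # the taps h and the error bound t. All values are illustrative.
    c = np.array([1.0, 0.6, 0.3, 0.1])          # assumed known channel impulse response
    N = 12                                      # number of equalizer taps
    M = len(c) + N - 1                          # length of the cascade response
    d = np.zeros(M); d[3] = 1.0                 # target: a delayed unit pulse

    # Convolution matrix A such that (c * h)[k] = (A @ h)[k]
    A = toeplitz(np.r_[c, np.zeros(N - 1)], np.zeros(N))

    # Variables x = [h (N taps), t (error bound)]; objective: minimize t
    c_obj = np.r_[np.zeros(N), 1.0]
    A_ub = np.block([[A, -np.ones((M, 1))],     #  (A h - d) <= t
                     [-A, -np.ones((M, 1))]])   # -(A h - d) <= t
    b_ub = np.r_[d, -d]
    bounds = [(None, None)] * N + [(0, None)]

    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    h = res.x[:N]
    print("max |error| =", res.x[-1])
    print("equalizer taps:", np.round(h, 4))
    ```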

  19. Skeletonization of Gridded Potential-Field Images

    NASA Astrophysics Data System (ADS)

    Gao, L.; Morozov, I. B.

    2012-12-01

    A new approach to skeletonization was developed for gridded potential-field data. Generally, skeletonization is a pattern-recognition technique allowing automatic recognition of near-linear features in images, measurement of their parameters, and analysis of their similarities. Our approach decomposes the images into arbitrarily oriented "wavelets" characterized by positive or negative amplitudes, orientation angles, spatial dimensions, polarities, and other attributes. Orientations of the wavelets are obtained by scanning the azimuths to detect the strike direction of each anomaly. The wavelets are connected according to the similarities of these attributes, which leads to a "skeleton" map of the potential-field data. In addition, 2-D filtering is conducted concurrently with the wavelet-identification process, which allows extracting parameters of background trends and reduces the adverse effects of low-frequency background (which is often strong in potential-field maps) on skeletonization. By correlating the neighboring wavelets, linear anomalies are identified and characterized. The advantages of this algorithm are the generality and isotropy of feature detection, as well as being specifically designed for gridded data. With several options for background-trend extraction, the stability of lineament identification is improved and optimized. The algorithm is also integrated in a powerful processing system which allows combining it with numerous other tools, such as filtering, computation of the analytical signal, empirical mode decomposition, and various types of plotting. The method is applied to potential-field data for the Western Canada Sedimentary Basin, in a study area which extends from southern Saskatchewan into southwestern Manitoba. The target is the structure of the crystalline basement beneath Phanerozoic sediments. The examples illustrate that skeletonization aids in the interpretation of complex structures at different scale lengths. The results indicate that this method is useful for identifying structures in complex geophysical images and for automatic extraction of their attributes, as well as for quantitative characterization and analysis of potential-field images. Skeletonized potential-field images should also be useful for inversion.

  20. Optimal digital filtering for tremor suppression.

    PubMed

    Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R

    2000-05-01

    Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor on a computer screen). When human movements are distorted, for instance by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com.
