Sample records for filtering techniques applied

  1. Technical Report Series on Global Modeling and Data Assimilation. Volume 16; Filtering Techniques on a Stretched Grid General Circulation Model

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.

    1999-01-01

    This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transform) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to filter effectively without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made of the impact of applying the Shapiro filter on the stretched grid.
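
    The standard polar filtering step that the report critiques can be sketched as follows: decompose one latitude circle with an FFT and suppress the zonal wavenumbers above a cutoff. This is a minimal illustrative version (the function name and the hard cutoff are assumptions; operational filters taper the damping rather than truncating it).

```python
import numpy as np

def fft_latitude_filter(field_row, k_cut):
    """Zero all zonal wavenumbers above k_cut on one latitude circle.

    Minimal sketch of a standard high-latitude Fourier filter; a hard
    cutoff is used here for clarity, where real filters taper.
    """
    spec = np.fft.rfft(field_row)
    k = np.arange(spec.size)
    spec[k > k_cut] = 0.0          # discard unstable short waves
    return np.fft.irfft(spec, n=field_row.size)
```

Applied to a field containing a long wave plus a short unstable wave, only the long wave survives.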

  2. CCD filter and transform techniques for interference excision

    NASA Technical Reports Server (NTRS)

    Borsuk, G. M.; Dewitt, R. N.

    1976-01-01

    Theoretical and some experimental results are presented from a study aimed at applying CCD filter and transform techniques to the problem of interference excision within communications channels. Adaptive noise (interference) suppression was achieved by modifying received signals so that they were orthogonal to the recently measured noise field. CCD techniques examined for real-time noise excision processing included recursive filters, circulating filter banks, transversal filter banks, an optical implementation of the chirp Z transform, and a CCD analog FFT.

  3. Techniques for noise removal and registration of TIMS data

    USGS Publications Warehouse

    Hummer-Miller, S.

    1990-01-01

    Extracting subtle differences from highly correlated thermal infrared aircraft data is possible with appropriate noise filters, constructed and applied in the spatial frequency domain. This paper discusses a heuristic approach to designing noise filters for removing high- and low-spatial frequency striping and banding. Techniques for registering thermal infrared aircraft data to a topographic base using Thematic Mapper data are presented. The noise removal and registration techniques are applied to TIMS thermal infrared aircraft data. -Author
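
    The heuristic spatial-frequency-domain destriping described here can be sketched as a notch applied along the stripe axis of the 2-D spectrum. The function below is an illustrative version of the general approach, not the author's actual filter design:

```python
import numpy as np

def destripe(image, width=1):
    """Remove horizontal striping by notching the zero-horizontal-frequency
    column of the centred 2-D spectrum, where row-to-row stripe energy
    concentrates, while sparing the DC bin so mean brightness survives."""
    Fs = np.fft.fftshift(np.fft.fft2(image))
    r, c = image.shape
    cr, cc = r // 2, c // 2
    mask = np.ones_like(Fs)
    mask[:, cc - width:cc + width + 1] = 0.0   # notch the stripe column
    mask[cr, cc] = 1.0                          # keep the DC component
    return np.real(np.fft.ifft2(np.fft.ifftshift(Fs * mask)))
```

On a synthetic image of a constant scene plus horizontal banding, the banding is removed and the background level is preserved.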

  4. Spectral analysis and filtering techniques in digital spatial data processing

    USGS Publications Warehouse

    Pan, Jeng-Jong

    1989-01-01

    A filter toolbox has been developed at the EROS Data Center, US Geological Survey, for retrieving or removing specified frequency information from two-dimensional digital spatial data. This filter toolbox provides capabilities to compute the power spectrum of a given data set and to design various filters in the frequency domain. Three types of filters are available in the toolbox: point filters, line filters, and area filters. Both the point and line filters employ Gaussian-type notch filters, and the area filter includes capabilities to perform high-pass, band-pass, low-pass, and wedge filtering techniques. These filters are applied to the analysis of satellite multispectral scanner data, airborne visible and infrared imaging spectrometer (AVIRIS) data, gravity data, and digital elevation model (DEM) data. -from Author
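
    A Gaussian-type notch filter of the kind the toolbox's point filter employs can be sketched as follows; the parameterisation (u0, v0, sigma) is an assumption, since the abstract does not give the toolbox's exact formulation:

```python
import numpy as np

def gaussian_notch(shape, u0, v0, sigma):
    """Gaussian notch: attenuate a Gaussian neighbourhood around
    (+/-u0, +/-v0) in the centred 2-D spectrum (both conjugate-symmetric
    twins must be notched for a real-valued output image)."""
    r, c = shape
    v, u = np.meshgrid(np.arange(c) - c // 2, np.arange(r) - r // 2)
    d1 = (u - u0) ** 2 + (v - v0) ** 2
    d2 = (u + u0) ** 2 + (v + v0) ** 2
    return (1 - np.exp(-d1 / (2 * sigma**2))) * (1 - np.exp(-d2 / (2 * sigma**2)))

def apply_filter(image, H):
    """Multiply the centred spectrum by transfer function H."""
    F = np.fft.fftshift(np.fft.fft2(image))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```

Notching the known frequency of a periodic interference pattern removes it while leaving the background essentially untouched.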

  5. Application of filtering techniques in preprocessing magnetic data

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Yi, Yongping; Yang, Hongxia; Hu, Guochuang; Liu, Guoming

    2010-08-01

    High precision magnetic exploration is a popular geophysical technique owing to its simplicity and effectiveness. Interpretation in high precision magnetic exploration is always difficult because of noise and other disturbance factors, so an effective preprocessing method is needed to remove the effects of interference before further processing. This is commonly done by filtering, and many filtering methods exist. In this paper we describe in detail three popular filtering techniques: the regularized filtering technique, the sliding averages filtering technique, and the compensation smoothing filtering technique. We then designed the workflow of a filtering program based on these techniques and implemented it in DELPHI. To validate the program, we applied it to preprocess magnetic data from a site in China. Comparing the initial contour map with the filtered contour map clearly shows the effect of the program: the filtered contour map is smooth, and the high-frequency components of the data have been removed. After filtering, we separated useful signals from noise, minor anomalies from major anomalies, and local anomalies from regional anomalies, making it easy to focus on the useful information. Our program can be used to preprocess magnetic data, and the results demonstrate its effectiveness.
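
    Of the three techniques named, the sliding averages filter is the simplest to illustrate. The sketch below is a generic windowed mean over gridded magnetic data, under an assumed window size and edge handling (the paper's specifics are not given in the abstract):

```python
import numpy as np

def sliding_average(grid, win=3):
    """Sliding-averages filter: replace each sample by the mean of a
    win x win neighbourhood; edges are handled by reflection."""
    pad = win // 2
    padded = np.pad(grid, pad, mode="reflect")
    out = np.zeros_like(grid, dtype=float)
    for di in range(win):          # accumulate shifted copies
        for dj in range(win):
            out += padded[di:di + grid.shape[0], dj:dj + grid.shape[1]]
    return out / (win * win)
```

A constant field passes through unchanged, while high-frequency noise is attenuated.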

  6. Applications of Kalman filtering to real-time trace gas concentration measurements

    NASA Technical Reports Server (NTRS)

    Leleux, D. P.; Claps, R.; Chen, W.; Tittel, F. K.; Harman, T. L.

    2002-01-01

    A Kalman filtering technique is applied to the simultaneous detection of NH3 and CO2 with a diode-laser-based sensor operating at 1.53 micrometers. This technique is developed for improving the sensitivity and precision of trace gas concentration levels based on direct overtone laser absorption spectroscopy in the presence of various sensor noise sources. Filter performance is demonstrated to be adaptive to real-time noise and data statistics. Additionally, filter operation is successfully performed with dynamic ranges differing by three orders of magnitude. Details of Kalman filter theory applied to the acquired spectroscopic data are discussed. The effectiveness of this technique is evaluated by performing NH3 and CO2 concentration measurements and utilizing it to monitor varying ammonia and carbon dioxide levels in a bioreactor for water reprocessing, located at the NASA-Johnson Space Center. Results indicate a sensitivity enhancement of six times, in terms of improved minimum detectable absorption by the gas sensor.
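
    A minimal scalar Kalman filter of the kind applied here, treating the concentration as a random-walk state observed through white measurement noise, can be sketched as follows. This is a generic textbook filter, not the authors' implementation; q, r, x0, and p0 are illustrative tuning values:

```python
import numpy as np

def kalman_track(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: state = gas concentration,
    measurement = concentration + white noise with variance r;
    q is the assumed process-noise variance per step."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                      # predict (random-walk model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the innovation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)
```

On a noisy constant concentration the filtered estimate converges to the true level with far lower variance than the raw readings.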

  7. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
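
    The matched-filtering step, cross-correlating the received signal with the known chirp and reading transit times off the strongest correlation peaks, can be sketched like this. It is a generic two-path illustration; the peak-picking rule (largest magnitudes with an exclusion radius) is an assumption:

```python
import numpy as np

def matched_filter_delays(chirp, received, n_peaks=2):
    """Return the n_peaks strongest correlation lags (in samples) as
    candidate transit times, enforcing a minimum separation so that
    sidelobes of one peak are not reported as a second arrival."""
    corr = np.correlate(received, chirp, mode="full")
    lags = np.arange(corr.size) - (chirp.size - 1)
    order = np.argsort(np.abs(corr))[::-1]
    peaks = []
    for i in order:
        if all(abs(lags[i] - p) > chirp.size // 4 for p in peaks):
            peaks.append(lags[i])
        if len(peaks) == n_peaks:
            break
    return sorted(peaks)
```

With a direct and a weaker reflected copy of the chirp overlapping in the record, both transit times are recovered.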

  8. SU-E-J-261: The Importance of Appropriate Image Preprocessing to Augment the Information of Radiomics Image Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Fried, D; Fave, X

    Purpose: To investigate how different image preprocessing techniques, their parameters, and different boundary handling techniques can augment the information of features and improve the features' differentiating capability. Methods: Twenty-seven NSCLC patients with a solid tumor volume and no visually obvious necrotic regions in the simulation CT images were identified. Fourteen of these patients had a necrotic region visible in their pre-treatment PET images (necrosis group), and thirteen had no visible necrotic region in the pre-treatment PET images (non-necrosis group). We investigated how image preprocessing can impact the ability of radiomics image features extracted from the CT to differentiate between the two groups. It is expected that the histogram in the necrosis group is more negatively skewed and that the uniformity in the necrosis group is lower. Therefore, we analyzed two first-order features, skewness and uniformity, on the image inside the GTV in the intensity range [−20 HU, 180 HU] under combinations of several image preprocessing techniques: (1) applying the isotropic Gaussian or anisotropic diffusion smoothing filter with a range of parameters (Gaussian smoothing: size=11, sigma=0:0.1:2.3; anisotropic smoothing: iteration=4, kappa=0:10:110); (2) applying the boundary-adapted Laplacian filter; and (3) applying the adaptive upper threshold for the intensity range. A 2-tailed t-test was used to evaluate the differentiating capability of CT features on pre-treatment PET necrosis. Results: Without any preprocessing, no differences in either skewness or uniformity were observed between the two groups. After applying appropriate Gaussian filters (sigma >= 1.3) or anisotropic filters (kappa >= 60) with the adaptive upper threshold, skewness was significantly more negative in the necrosis group (p < 0.05). By applying the boundary-adapted Laplacian filter after appropriate Gaussian filters (0.5 <= sigma <= 1.1) or anisotropic filters (20 <= kappa <= 50), uniformity was significantly lower in the necrosis group (p < 0.05). Conclusion: Appropriate selection of image preprocessing techniques allows radiomics features to extract more useful information and thereby improve prediction models based on these features.

  9. Nonlinear filtering techniques for noisy geophysical data: Using big data to predict the future

    NASA Astrophysics Data System (ADS)

    Moore, J. M.

    2014-12-01

    Chaos is ubiquitous in physical systems. Within the Earth sciences it is readily evident in seismology, groundwater flows and drilling data. Models and workflows have been applied successfully to understand and even to predict chaotic systems in other scientific fields, including electrical engineering, neurology and oceanography. Unfortunately, the high levels of noise characteristic of our planet's chaotic processes often render these frameworks ineffective. This contribution presents techniques for the reduction of noise associated with measurements of nonlinear systems. Our ultimate aim is to develop data assimilation techniques for forward models that describe chaotic observations, such as episodic tremor and slip (ETS) events in fault zones. A series of nonlinear filters are presented and evaluated using classical chaotic systems. To investigate whether the filters can successfully mitigate the effect of noise typical of Earth science, they are applied to sunspot data. The filtered data can be used successfully to forecast sunspot evolution for up to eight years (see figure).

  10. SU-E-I-37: Low-Dose Real-Time Region-Of-Interest X-Ray Fluoroscopic Imaging with a GPU-Accelerated Spatially Different Bilateral Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, H; Lee, J; Pua, R

    2014-06-01

    Purpose: The purpose of our study is to reduce imaging radiation dose while maintaining image quality of region of interest (ROI) in X-ray fluoroscopy. A low-dose real-time ROI fluoroscopic imaging technique which includes graphics-processing-unit- (GPU-) accelerated image processing for brightness compensation and noise filtering was developed in this study. Methods: In our ROI fluoroscopic imaging, a copper filter is placed in front of the X-ray tube. The filter contains a round aperture to reduce radiation dose to outside of the aperture. To equalize the brightness difference between inner and outer ROI regions, brightness compensation was performed by use of a simple weighting method that applies selectively to the inner ROI, the outer ROI, and the boundary zone. A bilateral filtering was applied to the images to reduce relatively high noise in the outer ROI images. To speed up the calculation of our technique for real-time application, the GPU-acceleration was applied to the image processing algorithm. We performed a dosimetric measurement using an ion-chamber dosimeter to evaluate the amount of radiation dose reduction. The reduction of calculation time compared to a CPU-only computation was also measured, and the assessment of image quality in terms of image noise and spatial resolution was conducted. Results: More than 80% of dose was reduced by use of the ROI filter. The reduction rate depended on the thickness of the filter and the size of ROI aperture. The image noise outside the ROI was remarkably reduced by the bilateral filtering technique. The computation time for processing each frame image was reduced from 3.43 seconds with single CPU to 9.85 milliseconds with GPU-acceleration. Conclusion: The proposed technique for X-ray fluoroscopy can substantially reduce imaging radiation dose to the patient while maintaining image quality particularly in the ROI region in real-time.
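
    The bilateral filtering applied to the outer-ROI images can be illustrated with a brute-force CPU version: each pixel becomes a neighbourhood mean weighted by both spatial and intensity distance, which smooths noise while keeping edges. This is a generic sketch, not the authors' GPU kernel, and the parameters are illustrative:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Brute-force bilateral filter: weights combine a spatial Gaussian
    (distance from centre pixel) with a range Gaussian (intensity
    difference), so flat regions are averaged but edges are preserved."""
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - radius), min(rows, i + radius + 1)
            j0, j1 = max(0, j - radius), min(cols, j + radius + 1)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s**2)
                       - (patch - img[i, j]) ** 2 / (2 * sigma_r**2))
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```

On a noisy step edge, the noise level drops while the edge contrast is retained.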

  11. Theatre Ballistic Missile Defense-Multisensor Fusion, Targeting and Tracking Techniques

    DTIC Science & Technology

    1998-03-01

    Washington, D.C., 1994. 8. Brown, R., and Hwang, P., Introduction to Random Signals and Applied Kalman Filtering, Third Edition, John Wiley and Sons... C. ADDING MEASUREMENT NOISE 15 III. EXTENDED KALMAN FILTER 19 A. DISCRETE TIME KALMAN FILTER 19 B. EXTENDED KALMAN FILTER 21 C. EKF IN TARGET... tracking algorithms. III. EXTENDED KALMAN FILTER This chapter provides background information on the development of a tracking algorithm

  12. Multi-filter spectrophotometry of quasar environments

    NASA Technical Reports Server (NTRS)

    Craven, Sally E.; Hickson, Paul; Yee, Howard K. C.

    1993-01-01

    A many-filter photometric technique for determining redshifts and morphological types, by fitting spectral templates to spectral energy distributions, has good potential for application in surveys. Despite success in studies performed on simulated data, the results have not been fully reliable when applied to real, low signal-to-noise data. We are investigating techniques to improve the fitting process.

  13. Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models

    NASA Astrophysics Data System (ADS)

    Arroabarren, Ixone; Carlosena, Alfonso

    2004-12-01

    The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.

  14. Guenter Tulip Filter Retrieval Experience: Predictors of Successful Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turba, Ulku Cenk, E-mail: uct5d@virginia.edu; Arslan, Bulent, E-mail: ba6e@virginia.edu; Meuse, Michael, E-mail: mm5tz@virginia.edu

    We report our experience with Guenter Tulip filter placement indications, retrievals, and procedural problems, with emphasis on alternative retrieval techniques. We have identified 92 consecutive patients in whom a Guenter Tulip filter was placed and filter removal attempted. We recorded patient demographic information, filter placement and retrieval indications, procedures, standard and nonstandard filter retrieval techniques, complications, and clinical outcomes. The mean time to retrieval for those who experienced filter strut penetration was statistically significant [F(1,90) = 8.55, p = 0.004]. Filter strut(s) IVC penetration and successful retrieval were found to be statistically significant (p = 0.043). The filter hook-IVC relationship correlated with successful retrieval. A modified guidewire loop technique was applied in 8 of 10 cases where the hook appeared to penetrate the IVC wall and could not be engaged with a loop snare catheter, providing additional technical success in 6 of 8 (75%). Therefore, the total filter retrieval success increased from 88 to 95%. In conclusion, the Guenter Tulip filter has high successful retrieval rates with low rates of complication. Additional maneuvers such as a guidewire loop method can be used to improve retrieval success rates when the filter hook is endothelialized.

  15. Speckle noise reduction of 1-look SAR imagery

    NASA Technical Reports Server (NTRS)

    Nathan, Krishna S.; Curlander, John C.

    1987-01-01

    Speckle noise is inherent to synthetic aperture radar (SAR) imagery. Since the degradation of the image due to this noise results in uncertainties in the interpretation of the scene and in a loss of apparent resolution, it is desirable to filter the image to reduce this noise. In this paper, an adaptive algorithm based on the calculation of the local statistics around a pixel is applied to 1-look SAR imagery. The filter adapts to the nonstationarity of the image statistics since the size of the blocks is very small compared to that of the image. The performance of the filter is measured in terms of the equivalent number of looks (ENL) of the filtered image and the resulting resolution degradation. The results are compared to those obtained from different techniques applied to similar data. The local adaptive filter (LAF) significantly increases the ENL of the final image. The associated loss of resolution is also lower than that for other commonly used speckle reduction techniques.
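
    A local-statistics adaptive filter in this spirit (a Lee-type filter) blends each pixel toward its windowed mean according to how much of the local variance is attributable to noise. The sketch below is a standard formulation of this class of filter, not the paper's exact algorithm; noise_var is an assumed noise variance:

```python
import numpy as np

def local_adaptive_filter(img, win=3, noise_var=0.05):
    """Lee-type adaptive filter: in flat areas (local variance close to
    the noise variance) the output approaches the local mean; where the
    local variance is large (edges, texture) the pixel is kept."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(win) for j in range(win)])
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)
```

On a flat noisy region the filter strongly reduces variance while preserving the mean level.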

  16. Discrete filtering techniques applied to sequential GPS range measurements

    NASA Technical Reports Server (NTRS)

    Vangraas, Frank

    1987-01-01

    The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
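
    The second-order (position and velocity) Kalman filter described for one satellite channel can be sketched with constant-velocity dynamics and a range-only measurement. The noise settings below are illustrative, not the report's:

```python
import numpy as np

def smooth_range(ranges, dt=1.0, q=0.01, r=4.0):
    """Two-state (range, range-rate) Kalman filter for one satellite:
    constant-velocity dynamics driven by white acceleration noise q,
    with range measurements of variance r."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # measure range only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process noise
    x = np.array([ranges[0], 0.0])
    P = np.diag([r, 100.0])
    est = []
    for z in ranges:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        s = (H @ P @ H.T)[0, 0] + r               # innovation variance
        K = (P @ H.T / s).ravel()                 # Kalman gain
        x = x + K * (z - x[0])                    # update
        P = (np.eye(2) - np.outer(K, H)) @ P
        est.append(x[0])
    return np.array(est)
```

On a simulated range track with additive white noise, the smoothed estimates have noticeably lower error than the raw measurements once the filter has converged.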

  17. Photographic film image enhancement

    NASA Technical Reports Server (NTRS)

    Horner, J. L.

    1975-01-01

    A series of experiments were undertaken to assess the feasibility of defogging color film by the techniques of optical spatial filtering. A coherent optical processor was built using red, blue, and green laser light input and specially designed Fourier transformation lenses. An array of spatial filters was fabricated on black and white emulsion slides using the coherent optical processor. The technique was first applied to laboratory white light fogged film, and the results were successful. However, when the same technique was applied to some original Apollo X radiation fogged color negatives, the results showed no similar restoration. Examples of each experiment are presented and possible reasons for the lack of restoration in the Apollo films are discussed.

  18. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  19. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  20. Discrete square root filtering - A survey of current techniques.

    NASA Technical Reports Server (NTRS)

    Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.

    1971-01-01

    Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested, and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed the conventional by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
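
    One of the classic square-root implementations surveyed in this literature is Potter's measurement update, which carries a factor S with P = SSᵀ instead of the covariance P itself, preserving symmetry and positive-definiteness in finite precision. A textbook sketch for a scalar measurement:

```python
import numpy as np

def potter_update(S, H, R, x, z):
    """Potter square-root update for a scalar measurement z = H x + v,
    var(v) = R.  Updates the square root S (P = S S^T) directly, so the
    implied covariance stays symmetric positive-definite."""
    phi = S.T @ H                            # n-vector
    a = 1.0 / (phi @ phi + R)
    K = a * (S @ phi)                        # Kalman gain
    beta = a / (1.0 + np.sqrt(a * R))
    S_new = S - beta * np.outer(S @ phi, phi)
    x_new = x + K * (z - H @ x)
    return S_new, x_new
```

The reconstructed covariance S⁺S⁺ᵀ agrees with the conventional update (I − KH)P, as the survey's duality between the forms implies.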

  1. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
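
    The core idea behind line search filter methods is that a trial point is accepted if it sufficiently improves either the objective f or the constraint violation theta relative to every pair stored in the filter; the non-monotone variant in this paper relaxes that test further. A conceptual sketch of the basic (monotone) acceptance rule, with an illustrative margin gamma:

```python
def filter_acceptable(filter_set, f, theta, gamma=1e-5):
    """A trial point (f, theta) is acceptable to the filter if, for every
    stored pair, it sufficiently reduces the objective or the violation."""
    return all(f <= fi - gamma * ti or theta <= (1 - gamma) * ti
               for fi, ti in filter_set)

def filter_add(filter_set, f, theta):
    """Add (f, theta) to the filter, discarding entries it dominates
    (entries no better in either objective or violation)."""
    kept = [(fi, ti) for fi, ti in filter_set if fi < f or ti < theta]
    kept.append((f, theta))
    return kept
```

A point dominated in both measures is rejected, while one that improves the violation is accepted even if its objective is worse, which is what frees filter methods from a penalty-parameter choice.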

  2. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

    This paper undertakes a study of restored Gaussian-blurred images using four deblurring techniques, viz. the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given knowledge of the Point Spread Function (PSF) that corrupted the blurred image. These are applied to a scanned image of a seven-month baby in the womb and compared with one another, so as to choose the best technique for restoring the deblurred image. The paper also studies restoration of the blurred image using a Regular Filter (RF) with no information about the PSF, applying the same four techniques after estimating a guess of the PSF. The number of iterations and the weight threshold needed to choose the best PSF guesses for image restoration are determined for these techniques.
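
    The first of the four techniques, Wiener filtering, is compact enough to sketch: deconvolve in the frequency domain, regularised by an assumed noise-to-signal ratio. This is a generic formulation (not MATLAB's deconvwnr), and the parameter names are illustrative:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution: W = conj(H)/(|H|^2 + nsr),
    where H is the blur transfer function and nsr the assumed
    noise-to-signal ratio regularising the division."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

With a known box-blur PSF and low noise, the restored image is much closer to the original than the blurred input.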

  3. Principal Component Noise Filtering for NAST-I Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L., Sr.

    2011-01-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed- Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and thereby further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: apply PC filtering to both dependent and independent datasets, apply PC filtering to dependent calibration data only, apply PC filtering to independent data only, and no PC filtering. The independent blackbody radiances are predicted for each case and comparisons are made. The results show a significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
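
    A principal-component noise filter of this kind can be sketched as projecting the spectra onto their leading eigenvectors and reconstructing, with the number of retained components chosen (as in the paper) by minimising RMS error on held-out samples. A generic SVD-based sketch, not the NAST-I code:

```python
import numpy as np

def pc_filter(spectra, n_pc):
    """PC noise filter: keep only the leading n_pc principal components
    of the (observations x channels) matrix; the discarded trailing
    components carry mostly uncorrelated noise."""
    mean = spectra.mean(axis=0)
    X = spectra - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return mean + (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc]
```

For synthetic spectra spanned by two underlying shapes plus noise, retaining two PCs reduces the error against the clean spectra.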

  4. Multiscale morphological filtering for analysis of noisy and complex images

    NASA Astrophysics Data System (ADS)

    Kher, A.; Mitra, S.

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. 
The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.

  5. Multiscale Morphological Filtering for Analysis of Noisy and Complex Images

    NASA Technical Reports Server (NTRS)

    Kher, A.; Mitra, S.

    1993-01-01

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. 
The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
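
    As a rough sketch of the four-orientation filtering described above (the structuring-element length, the choice of orientations, and the max-over-openings combination rule are illustrative assumptions, not the authors' exact design), a grayscale opening with one-dimensional line elements might look like:

```python
import numpy as np

def line_min(img, dy, dx, half):
    """Grayscale erosion by a 1-D line of length 2*half+1 along (dy, dx)."""
    p = np.pad(img, half, mode='edge')
    H, W = img.shape
    views = [p[half + dy * k: half + dy * k + H,
               half + dx * k: half + dx * k + W]
             for k in range(-half, half + 1)]
    return np.min(views, axis=0)

def line_max(img, dy, dx, half):
    """Grayscale dilation by the same 1-D line."""
    p = np.pad(img, half, mode='edge')
    H, W = img.shape
    views = [p[half + dy * k: half + dy * k + H,
               half + dx * k: half + dx * k + W]
             for k in range(-half, half + 1)]
    return np.max(views, axis=0)

def oriented_opening(img, length=5):
    """Opening (erosion then dilation) with 1-D structuring elements in
    four orientations (0, 45, 90, 135 degrees); the pointwise maximum
    preserves thin bright structures aligned with any one orientation
    while isolated speckle, shorter than the line, is removed."""
    half = length // 2
    opens = []
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        opens.append(line_max(line_min(img, dy, dx, half), dy, dx, half))
    return np.max(opens, axis=0)
```

    A thin bright line longer than the structuring element survives the opening in its own orientation, while a single speckle pixel is eroded away in every orientation.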

  6. High-latitude filtering in a global grid-point model using model normal modes. [Fourier filters for synoptic weather forecasting]

    NASA Technical Reports Server (NTRS)

    Takacs, L. L.; Kalnay, E.; Navon, I. M.

    1985-01-01

    A normal modes expansion technique is applied to perform high latitude filtering in the GLAS fourth order global shallow water model with orography. The maximum permissible time step in the solution code is controlled by the frequency of the fastest propagating mode, which can be a gravity wave. Numerical methods are defined for filtering the data to identify the number of gravity modes to be included in the computations in order to obtain the appropriate zonal wavenumbers. The performances of the model with and without the filter, and with a time tendency and a prognostic field filter are tested with simulations of the Northern Hemisphere winter. The normal modes expansion technique is shown to leave the Rossby modes intact and permit 3-5 day predictions, a range not possible with the other high-latitude filters.

  7. Discrete square root smoothing.

    NASA Technical Reports Server (NTRS)

    Kaminski, P. G.; Bryson, A. E., Jr.

    1972-01-01

    The basic techniques applied in the square root least squares and square root filtering solutions are applied to the smoothing problem. Both conventional and square root solutions are obtained by computing the filtered solutions, then modifying the results to include the effect of all measurements. A comparison of computation requirements indicates that the square root information smoother (SRIS) is more efficient than conventional solutions in a large class of fixed interval smoothing problems.

  8. A Filter Feature Selection Method Based on MFA Score and Redundancy Excluding and Its Application to Tumor Gene Expression Data Analysis.

    PubMed

    Li, Jiangeng; Su, Lei; Pang, Zenan

    2015-12-01

    Feature selection techniques have been widely applied to tumor gene expression data analysis in recent years. A filter feature selection method named marginal Fisher analysis score (MFA score), which is based on graph embedding, has been proposed, and it has been widely used mainly because it is superior to the Fisher score. Considering the heavy redundancy in gene expression data, we proposed a new filter feature selection technique in this paper. It is named MFA score+ and is based on MFA score and redundancy excluding. We applied it to an artificial dataset and eight tumor gene expression datasets to select important features and then used a support vector machine as the classifier to classify the samples. Compared with MFA score, the t-test, and Fisher score, it achieved higher classification accuracy.
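
    The redundancy-excluding idea can be illustrated with a simple stand-in: the sketch below ranks features by the classical Fisher score (not the MFA score, whose graph-embedding details are not given here) and then greedily skips any feature highly correlated with one already kept. The correlation threshold is an assumed parameter.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class scatter of class means
    over pooled within-class variance (larger = more discriminative)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_features(X, y, k, max_corr=0.9):
    """Greedy filter selection: walk features in decreasing score order,
    skipping any feature whose absolute correlation with an already
    selected feature exceeds max_corr (the redundancy-excluding step)."""
    order = np.argsort(fisher_score(X, y))[::-1]
    kept = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) < max_corr
               for i in kept):
            kept.append(int(j))
        if len(kept) == k:
            break
    return kept
```

    With a duplicated informative feature, only one of the pair is kept and the slot goes to the next non-redundant feature instead.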

  9. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not yield the best noise reduction performance. Conversely, an image fusion method applying SRAD-original conditions preserved the key information in the original image while the speckle noise was removed. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance on the ultrasound images. From this study, the proposed denoising technique was confirmed to have high potential for clinical application.
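
    A minimal illustration of DWT-based image fusion: a one-level Haar transform, approximation subbands fused by averaging, and detail subbands fused by taking the larger-magnitude coefficient. These fusion rules are common textbook choices, not necessarily the ones used in the study.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2      # row lowpass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2      # row highpass
    LL = (lo[0::2] + lo[1::2]) / 2
    LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Invert haar_dwt2 (exact up to floating-point round-off)."""
    H2, W2 = LL.shape
    lo = np.empty((H2 * 2, W2)); hi = np.empty((H2 * 2, W2))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    x = np.empty((H2 * 2, W2 * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def fuse(img_a, img_b):
    """DWT image fusion: average the approximation subbands, keep the
    larger-magnitude detail coefficient from either input image."""
    A, B = haar_dwt2(img_a), haar_dwt2(img_b)
    LL = (A[0] + B[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(A[1:], B[1:])]
    return haar_idwt2(LL, *details)
```

    Fusing an image with itself reproduces the original, which is a quick sanity check that the transform pair is consistent.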

  10. Antarctic Atmospheric Infrasound.

    DTIC Science & Technology

    1981-11-30

    auroral infrasonic waves and the atmospheric test of a nuclear weapon in China were all recorded and analyzed in real-time by the new system as... Detection Enhancement by a Pure State Filter, 16 February 1981. The great success of the polarization filter technique with infrasonic data led to our... Project chronology) 2. Summary of data collected 3. Antarctic infrasonic signals 4. Noise suppression using data-adaptive polarization filters: appli

  11. Adaptive Filtering to Enhance Noise Immunity of Impedance and Admittance Spectroscopy: Comparison with Fourier Transformation

    NASA Astrophysics Data System (ADS)

    Stupin, Daniil D.; Koniakhin, Sergei V.; Verlov, Nikolay A.; Dubina, Michael V.

    2017-05-01

    The time-domain technique for impedance spectroscopy consists of computing the excitation voltage and current response Fourier images by fast or discrete Fourier transformation and calculating their relation. Here we propose an alternative method for excitation voltage and current response processing for deriving a system impedance spectrum based on a fast and flexible adaptive filtering method. We show the equivalence between the problem of adaptive filter learning and deriving the system impedance spectrum. To be specific, we express the impedance via the adaptive filter weight coefficients. The noise-canceling property of adaptive filtering is also justified. Using the RLC circuit as a model system, we experimentally show that adaptive filtering yields correct admittance spectra and elements ratings in the high-noise conditions when the Fourier-transform technique fails. Providing the additional sensitivity of impedance spectroscopy, adaptive filtering can be applied to otherwise impossible-to-interpret time-domain impedance data. The advantages of adaptive filtering are justified with practical living-cell impedance measurements.
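
    The idea of expressing a system response through adaptive filter weights can be sketched with a basic LMS system-identification loop; the tap count, step size, and test signal below are illustrative, not the paper's experimental configuration.

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.01):
    """LMS adaptive filter: learn FIR weights w so that w applied to the
    most recent input samples tracks the desired response d. The learned
    weights characterize the unknown system's impulse response."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ u                    # a-priori error
        w += 2 * mu * e * u                 # stochastic-gradient update
    return w
```

    Driving a known 3-tap system with white noise and a small additive disturbance, the weights converge to the true impulse response, which is the noise-canceling property the abstract exploits.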

  12. Lunar surface chemistry: A new imaging technique

    USGS Publications Warehouse

    Andre, C.G.; Bielefeld, M.J.; Eliason, E.; Soderblom, L.A.; Adler, I.; Philpotts, J.A.

    1977-01-01

    Detailed chemical maps of the lunar surface have been constructed by applying a new weighted-filter imaging technique to Apollo 15 and Apollo 16 x-ray fluorescence data. The data quality improvement is amply demonstrated by (i) modes in the frequency distribution, representing highland and mare soil suites, which are not evident before data filtering and (ii) numerous examples of chemical variations which are correlated with small-scale (about 15 kilometer) lunar topographic features.

  13. Lunar surface chemistry - A new imaging technique

    NASA Technical Reports Server (NTRS)

    Andre, C. G.; Adler, I.; Bielefeld, M. J.; Eliason, E.; Soderblom, L. A.; Philpotts, J. A.

    1977-01-01

    Detailed chemical maps of the lunar surface have been constructed by applying a new weighted-filter imaging technique to Apollo 15 and Apollo 16 X-ray fluorescence data. The data quality improvement is amply demonstrated by (1) modes in the frequency distribution, representing highland and mare soil suites, which are not evident before data filtering, and (2) numerous examples of chemical variations which are correlated with small-scale (about 15 kilometer) lunar topographic features.

  14. Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics

    NASA Technical Reports Server (NTRS)

    Zhu, Yanqui; Cohn, Stephen E.; Todling, Ricardo

    1999-01-01

    The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing the high-order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions. Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz model as well as more realistic models of the oceans and atmosphere. A few relevant issues in this context are related to the necessary number of ensemble members to properly represent the error statistics and the necessary modifications in the usual filter equations to allow for correct update of the ensemble members. The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that the resulting state estimates are worse than for their filter analogue. In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.

  15. Collaborative filtering to improve navigation of large radiology knowledge resources.

    PubMed

    Kahn, Charles E

    2005-06-01

    Collaborative filtering is a knowledge-discovery technique that can help guide readers to items of potential interest based on the experience of prior users. This study sought to determine the impact of collaborative filtering on navigation of a large, Web-based radiology knowledge resource. Collaborative filtering was applied to a collection of 1,168 radiology hypertext documents available via the Internet. An item-based collaborative filtering algorithm identified each document's six most closely related documents based on 248,304 page views in an 18-day period. Documents were amended to include links to their related documents, and use was analyzed over the next 5 days. The mean number of documents viewed per visit increased from 1.57 to 1.74 (P < 0.0001). Collaborative filtering can increase a radiology information resource's utilization and can improve its usefulness and ease of navigation. The technique holds promise for improving navigation of large Internet-based radiology knowledge resources.
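
    An item-based scheme of the kind described (the study's exact algorithm is not reproduced here) can be sketched as cosine similarity over a binary session-by-document incidence matrix, with the top-k most similar documents linked from each page:

```python
import numpy as np

def related_items(sessions, n_items, top_k=2):
    """Item-based collaborative filtering: two documents are 'related'
    when the same visits tend to view both. Similarity is the cosine
    between the items' columns of the session-by-item incidence matrix."""
    M = np.zeros((len(sessions), n_items))
    for s, items in enumerate(sessions):
        M[s, list(items)] = 1.0
    norms = np.linalg.norm(M, axis=0)
    norms[norms == 0] = 1.0               # avoid divide-by-zero
    sim = (M.T @ M) / np.outer(norms, norms)
    np.fill_diagonal(sim, -1.0)           # never recommend the item itself
    return {i: [int(j) for j in np.argsort(sim[i])[::-1][:top_k]]
            for i in range(n_items)}
```

    In the study this kind of table, truncated to six neighbors per document, supplied the "related documents" links appended to each page.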

  16. Nonlinear tuning techniques of plasmonic nano-filters

    NASA Astrophysics Data System (ADS)

    Kotb, Rehab; Ismail, Yehea; Swillam, Mohamed A.

    2015-02-01

    In this paper, a fitting model for the propagation constant and losses of a Metal-Insulator-Metal (MIM) plasmonic waveguide is proposed. Using this model, the modal characteristics of the MIM plasmonic waveguide can be obtained directly without solving Maxwell's equations from scratch. As a consequence, the simulation time and the computational cost needed to predict the response of different plasmonic structures can be reduced significantly. This fitting model is used to develop a closed-form model that describes the behavior of a plasmonic nano-filter. Easy and accurate mechanisms to tune the filter are investigated and analyzed. The filter tunability is based on using a nonlinear dielectric material with Pockels or Kerr effect. The tunability is achieved by applying an external voltage or through controlling the input light intensity. The proposed nano-filter supports both red and blue shifts in the resonance response depending on the type of non-linear material used. A new approach to control the input light intensity by applying an external voltage to a previous stage is investigated. Therefore, the filter tunability of a stage that has Kerr material can be achieved by applying voltage to a previous stage that has Pockels material. Using this method, the Kerr effect can be achieved electrically instead of varying the intensity of the input source. This technique enhances the ability of the device integration for on-chip applications. Tuning the resonance wavelength with high accuracy, minimum insertion loss, and high quality factor is obtained using these approaches.

  17. Study of different filtering techniques applied to spectra from airborne gamma spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilhelm, Emilien; Gutierrez, Sebastien; Reboli, Anne

    2015-07-01

    One of the features of spectra obtained by airborne gamma spectrometry is low counting statistics due to the short acquisition time (1 s) and the large source-detector distance (40 m). It leads to considerable uncertainty in radionuclide identification and determination of their respective activities from the windows method recommended by the IAEA, especially for low-level radioactivity. The present work compares the results obtained with filters in terms of errors of the filtered spectra with the window method and over the whole gamma energy range. The results are used to determine which filtering technique is the most suitable in combination with some method for total stripping of the spectrum. (authors)

  18. Optimum constrained image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1974-01-01

    The filter was developed in Hilbert space by minimizing the radius of gyration of the overall or composite system point-spread function subject to constraints on the radius of gyration of the restoration filter point-spread function, the total noise power in the restored image, and the shape of the composite system frequency spectrum. An iterative technique is introduced which alters the shape of the optimum composite system point-spread function, producing a suboptimal restoration filter which suppresses undesirable secondary oscillations. Finally this technique is applied to multispectral scanner data obtained from the Earth Resources Technology Satellite to provide resolution enhancement. An experimental approach to the problems involving estimation of the effective scanner aperture and matching the ERTS data to available restoration functions is presented.

  19. Navigation in Difficult Environments: Multi-Sensor Fusion Techniques

    DTIC Science & Technology

    2010-03-01

    Hwang, Introduction to Random Signals and Applied Kalman Filtering, 3rd ed., John Wiley & Sons, Inc., New York, 1997. [17] J. L. Farrell, "GPS/INS... nav solution Navigation outputs Estimation of inertial errors (Kalman filter) Error estimates Core sensor Incoming signal INS Estimates of signal... the INS drift terms is performed using the mechanism of a complementary Kalman filter. The idea is that a signal parameter can be generally

  20. Multireference adaptive noise canceling applied to the EEG.

    PubMed

    James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J

    1997-08-01

    The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments and its performance on documented nonstationarities recorded. The results show that the neural network (nonlinear) implementation gives an improvement in performance (i.e., signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC. In both cases an improvement in the SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse autoregressive filtering of the EEG, a purely temporal filter.

  1. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. 
The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
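
    One of the extensions mentioned, the recursive least-squares (RLS) adaptive filter, has a compact standard form, sketched below for system identification; the forgetting factor and initialization constant are illustrative choices, not values from this work.

```python
import numpy as np

def rls_identify(x, d, n_taps=3, lam=0.999, delta=100.0):
    """Recursive least-squares adaptive filter: the exact exponentially
    weighted least-squares weight estimate, updated one sample at a time.
    P tracks an estimate of the inverse input-correlation matrix."""
    w = np.zeros(n_taps)
    P = np.eye(n_taps) * delta
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        k = P @ u / (lam + u @ P @ u)       # gain vector
        w += k * (d[n] - w @ u)             # a-priori error update
        P = (P - np.outer(k, u @ P)) / lam  # Riccati-type update
    return w
```

    Compared with LMS, RLS converges in far fewer samples at the cost of O(n_taps^2) work per update, which is why it is a natural candidate extension.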

  2. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses of Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques can prevent overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE), evaluated with respect to observed satellite SST. The lowest RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and a posteriori spatial filtering of the least-squares solution.
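
    The multi-linear regression step at the core of the super-ensemble can be sketched as an ordinary least-squares fit of the member fields to observations over a training period; this is a bare-bones stand-in for the operational algorithm, without the EOF filtering or the a posteriori spatial filter.

```python
import numpy as np

def super_ensemble_weights(members, obs):
    """Least-squares weights that best combine ensemble member fields to
    reproduce the observed field over the training period. Each member
    field is flattened into one regression column."""
    A = np.column_stack([m.ravel() for m in members])
    w, *_ = np.linalg.lstsq(A, obs.ravel(), rcond=None)
    return w
```

    The learned weights are then applied to the members at analysis time; when the observations are an exact linear blend of the members, the weights are recovered exactly.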

  3. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
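
    Treating an unknown model parameter as a state to be estimated by a Kalman filter, as done above, can be illustrated in its simplest scalar form: a constant gain observed through noisy measurements. The pilot-model transfer function and time delay are not reproduced here; the noise variances are assumed values.

```python
import numpy as np

def estimate_gain(u, y, q=1e-8, r=0.01):
    """Scalar Kalman filter treating an unknown gain a as a slowly
    varying (random-walk) state, observed through y[n] = a*u[n] + noise."""
    a, P = 0.0, 1.0                  # initial estimate and its variance
    for un, yn in zip(u, y):
        P += q                       # time update: random-walk growth
        S = un * P * un + r          # innovation variance
        K = P * un / S               # Kalman gain
        a += K * (yn - a * un)       # measurement update
        P -= K * un * P
    return a
```

    Monitoring the innovations yn - a*un is also how divergence of the kind described in the abstract is detected in practice: under a correct model they stay consistent with the predicted variance S.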

  4. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS data are assimilated into W3RA over the entire Australian continent. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with Systematic Resampling decreases the model estimation error by 23%.
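
    Of the two resampling approaches tested, Systematic Resampling has a particularly compact standard implementation, sketched below; this illustrates the generic algorithm, not the study's assimilation code.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform offset generates N evenly
    spaced pointers into the cumulative weight distribution, so each
    particle is drawn a number of times that deviates from N*w_i by
    less than one (lower variance than multinomial resampling)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0              # guard against round-off
    return np.searchsorted(cumulative, positions, side='right')
```

    Uniform weights leave the ensemble unchanged, while concentrated weights duplicate the heavy particles deterministically up to the single random offset.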

  5. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  6. The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics

    NASA Technical Reports Server (NTRS)

    Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo

    1999-01-01

    The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Filter extension to nonlinear dynamics is nontrivial in the sense of appropriately representing the high-order moments of the statistics. Monte Carlo, ensemble-based, methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics such as the strongly nonlinear Lorenz (1963) model as well as more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are related to the necessary number of ensemble members to properly represent the error statistics and the necessary modifications in the usual filter equations to allow for correct update of the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that the resulting state estimates are worse than for their filter analogue (Evensen 1997). In this study, we use concepts in probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.

  7. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, the regional growth rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is implemented to generate the final output. Extensive experiments show that the proposed method achieves competitive performance compared with several traditional approaches.

  8. Detection of micro gap weld joint by using magneto-optical imaging and Kalman filtering compensated with RBF neural network

    NASA Astrophysics Data System (ADS)

    Gao, Xiangdong; Chen, Yuquan; You, Deyong; Xiao, Zhenlin; Chen, Xiaohui

    2017-02-01

    An approach for seam tracking of a micro gap weld whose width is less than 0.1 mm, based on the magneto-optical (MO) imaging technique during butt-joint laser welding of steel plates, is investigated. Kalman filtering (KF) with a radial basis function (RBF) neural network was applied to weld detection by an MO sensor to track the weld center position. Because the laser welding system process noises and the MO sensor measurement noises were colored noises, the estimation accuracy of traditional KF for seam tracking was degraded by the system model with extreme nonlinearities and could not be described by a linear state-space model. Also, the statistical characteristics of the noises could not be accurately obtained in actual welding. Thus, an RBF neural network was applied to the KF technique to compensate for the weld tracking errors. The neural network can restrain filter divergence and improve system robustness. Compared with the traditional KF algorithm, the RBF-compensated KF not only improved the weld tracking accuracy more effectively but also reduced noise disturbance. Experimental results showed that the magneto-optical imaging technique can detect a micro gap weld accurately, which provides a novel approach for micro gap seam tracking.

  9. An image filtering technique for SPIDER visible tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonnesu, N., E-mail: nicola.fonnesu@igi.cnr.it; Agostini, M.; Brombin, M.

    2014-02-15

    The tomographic diagnostic developed for the beam generated in the SPIDER facility (100 keV, 50 A prototype negative ion source of ITER neutral beam injector) will characterize the two-dimensional particle density distribution of the beam. The simulations described in the paper show that instrumental noise has a large influence on the maximum achievable resolution of the diagnostic. To reduce its impact on beam pattern reconstruction, a filtering technique has been adapted and implemented in the tomography code. This technique is applied to the simulated tomographic reconstruction of the SPIDER beam, and the main results are reported.

  10. A robust spatial filtering technique for multisource localization and geoacoustic inversion.

    PubMed

    Stotts, S A

    2005-07-01

    Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
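
    The beam-space loop described above (element data to beams, bearing mask, back to element level) can be sketched for a uniform line array with a unitary DFT beamformer; the masking and pseudo-inverse reconstruction below are the illustrative core only, not the paper's stabilized broadband filter for nonuniformly spaced elements.

```python
import numpy as np

def bearing_filter(x, keep_beams):
    """Beam-space spatial filter: transform an element-level snapshot to
    beams with a unitary DFT steering matrix, zero every beam outside
    keep_beams (the bearing filter), and map back to element level with
    the pseudo-inverse of the beamforming matrix."""
    n = len(x)
    B = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT beamformer
    beams = B @ x
    mask = np.zeros(n)
    mask[list(keep_beams)] = 1.0
    return np.linalg.pinv(B) @ (mask * beams)
```

    A plane wave falling in a kept beam passes through unchanged, while one falling in a rejected beam is removed; restricting the kept set plays the role of limiting the inverse transform to the array's effective degrees of freedom.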

  11. Teaching learning based optimization-functional link artificial neural network filter for mixed noise reduction from magnetic resonance image.

    PubMed

    Kumar, M; Mishra, S K

    2017-01-01

    Clinical magnetic resonance imaging (MRI) images may be corrupted by a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most available filtering algorithms are noise-specific, linear, and non-adaptive. There is a need for a nonlinear adaptive filter that adapts itself to the requirement and can be applied effectively to suppress mixed noise in different MRI images. In view of this, a novel nonlinear neural-network-based adaptive filter, the functional link artificial neural network (FLANN), whose weights are trained by a recently developed derivative-free meta-heuristic technique, teaching learning based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with that of five other adaptive filters and analyzed using quantitative metrics and a nonparametric statistical test. The convergence curve and computational time are also included to assess the efficiency of the proposed and competing filters. In simulations, the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and used to remove different noise and artifacts from other medical images more competently.

  12. Mapping accuracy via spectrally and structurally based filtering techniques: comparisons through visual observations

    NASA Astrophysics Data System (ADS)

    Chockalingam, Letchumanan

    2005-01-01

    LANDSAT data of the Gunung Ledang region of Malaysia are used to map certain hydrogeological features. To map these features, image-processing tools such as contrast enhancement and edge detection techniques are employed, and their advantages over other methods are evaluated in terms of their validity in properly isolating features of hydrogeological interest. Because these techniques exploit only the spectral aspects of the images, they have several limitations in meeting the objectives. To address these limitations, a morphological transformation, which considers structural rather than spectral aspects of the image, is applied to provide comparisons between the results derived from spectrally based and structurally based filtering techniques.

  13. Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul

    2007-06-01

    Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.
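As a much simpler stand-in for MP-FIR, the idea of filtering out a large on-resonance nuisance component before quantitation can be illustrated with a two-tap FIR notch applied to a synthetic FID; all signal parameters here are invented:

```python
import cmath

def fir_filter(x, h):
    # Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k].
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# Synthetic complex FID sampled at 1 kHz: a large on-resonance "water"
# component at 0 Hz plus a small metabolite resonance at 50 Hz.
fs = 1000.0
n_pts = 256
water = [10.0 + 0j] * n_pts
metab = [cmath.exp(2j * cmath.pi * 50.0 * n / fs) for n in range(n_pts)]
fid = [w + m for w, m in zip(water, metab)]

# Two-tap difference filter: a crude notch at 0 Hz. It cancels the water
# term exactly while passing the 50 Hz line with gain |1 - exp(-j*pi/10)|.
filtered = fir_filter(fid, [1.0, -1.0])
```

A real MP-FIR design would additionally control the passband shape and phase; the sketch only shows the frequency-selective suppression itself.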

  14. Optimum filter-based discrimination of neutrons and gamma rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek

    2015-07-01

    An optimum filter-based method for discriminating neutrons and gamma rays in a mixed radiation field is presented. Existing filter-based implementations of discriminators require sample pulse responses in advance of the experiment run to build the filter coefficients, which makes them less practical. Our novel technique creates the coefficients during the experiment and improves their quality gradually. Applied to several sets of mixed neutron and photon signals obtained through different digitizers using a stilbene scintillator, this approach is analyzed and its discrimination quality is measured.

  15. Micromachined Tunable Fabry-Perot Filters for Infrared Astronomy

    NASA Technical Reports Server (NTRS)

    Barclay, Richard; Bier, Alexander; Chen, Tina; DiCamillo, Barbara; Deming, Drake; Greenhouse, Matthew; Henry, Ross; Hewagama, Tilak; Jacobson, Mindy; Loughlin, James

    2002-01-01

    Micromachined Fabry-Perot tunable filters with a large clear aperture (12.5 to 40 mm) are being developed as an optical component for wide-field imaging 1:1 spectroscopy. This program applies silicon micromachining fabrication techniques to miniaturize Fabry-Perot filters for astronomical science instruments. The filter assembly consists of a stationary etalon plate mated to a plate in which the etalon is free to move along the optical axis on silicon springs attached to a stiff silicon support ring. The moving etalon is actuated electrostatically by electrode pairs on the fixed and moving etalons. To reduce mass, both etalons are fabricated by applying optical coatings to a thin freestanding silicon nitride film held flat in drumhead tension rather than to a thick optical substrate. The design, electro-mechanical modeling, fabrication, and initial results will be discussed. The potential application of the miniature Fabry-Perot filters will be briefly discussed with emphasis on the detection of extra-solar planets.

  16. Noise reduction with complex bilateral filter.

    PubMed

    Matsumoto, Mitsuharu

    2017-12-01

    This study introduces a noise reduction technique that uses a complex bilateral filter. A bilateral filter is a nonlinear filter originally developed for images that can reduce noise while preserving edge information. It is an attractive filter and has been used in many applications in image processing. When it is applied to an acoustical signal, small-amplitude noise is reduced while the speech signal is preserved. However, a bilateral filter cannot handle noise with relatively large amplitudes owing to its innate characteristics. In this study, the noisy signal is transformed into the time-frequency domain and the filter is improved to handle complex spectra. The high-amplitude noise is reduced in the time-frequency domain via the proposed filter. The features and the potential of the proposed filter are also confirmed through experiments.
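The core bilateral idea, weighting neighbors by both spatial closeness and value similarity so that edges survive smoothing, can be sketched in one dimension (the paper extends this to complex time-frequency spectra; the parameters below are illustrative):

```python
import math

def bilateral_1d(x, radius, sigma_s, sigma_r):
    # Bilateral filter: each output sample is a weighted mean of its
    # neighbors, with weights that fall off both with distance (sigma_s)
    # and with value difference (sigma_r). Small fluctuations are
    # averaged away, but samples across an edge get near-zero weight.
    y = []
    for i in range(len(x)):
        wsum, vsum = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * x[j]
        y.append(vsum / wsum)
    return y

# A step edge with small additive noise: the filter smooths the noise
# but, unlike a plain moving average, does not blur the step.
noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 1.1, 0.95, 1.05]
smooth = bilateral_1d(noisy, radius=2, sigma_s=2.0, sigma_r=0.3)
```

The complex extension in the paper applies the same weighting to short-time spectra so that larger-amplitude noise can be handled.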

  17. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.

  18. Combined adaptive multiple subtraction based on optimized event tracing and extended wiener filtering

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo

    2017-06-01

    The surface-related multiple elimination (SRME) method is based on the feedback formulation and has become one of the most widely preferred multiple suppression methods. However, differences between the predicted multiples and those in the source seismic records may leave conventional adaptive multiple subtraction methods barely able to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and a short-time window FK filtering method. After the extended Wiener filtering is applied, residual multiples in the approximate primary record can be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is well suited to suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
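In its simplest single-channel form, Wiener-filter adaptive subtraction reduces to a least-squares matching filter that absorbs scale and time-shift differences between the predicted multiple and the recording. A minimal sketch with an invented two-trace example:

```python
def matching_filter(data, multiple, ntaps):
    # Least-squares (Wiener) matching filter f minimizing
    # || data - f * multiple ||^2, found from the normal equations R f = g.
    N = len(data)
    cols = [[multiple[n - k] if n - k >= 0 else 0.0 for n in range(N)]
            for k in range(ntaps)]
    R = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(ntaps)]
         for i in range(ntaps)]
    g = [sum(a * b for a, b in zip(cols[i], data)) for i in range(ntaps)]
    # Solve the tiny system by Gaussian elimination.
    for i in range(ntaps):
        for j in range(i + 1, ntaps):
            fac = R[j][i] / R[i][i]
            R[j] = [rj - fac * ri for rj, ri in zip(R[j], R[i])]
            g[j] -= fac * g[i]
    f = [0.0] * ntaps
    for i in reversed(range(ntaps)):
        f[i] = (g[i] - sum(R[i][j] * f[j] for j in range(i + 1, ntaps))) / R[i][i]
    return f

# "Recorded" trace = primary + a scaled, one-sample-shifted copy of the
# predicted multiple; the adaptive filter absorbs the scale and the shift.
primary = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
m_pred  = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
recorded = [p + 0.6 * (m_pred[i - 1] if i >= 1 else 0.0)
            for i, p in enumerate(primary)]

f = matching_filter(recorded, m_pred, ntaps=2)
subtracted = [recorded[n] - sum(f[k] * (m_pred[n - k] if n - k >= 0 else 0.0)
                                for k in range(2))
              for n in range(len(recorded))]
```

The paper's extended, multi-window variant builds on this basic least-squares step.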

  19. A nowcasting technique based on application of the particle filter blending algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed using the radar mosaic at an altitude of 2.5 km obtained from the images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm was used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. The particle filter blending method is therefore superior to the traditional forecasting methods and can be used to enhance nowcasting ability in operational weather forecasts.
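The final semi-Lagrangian extrapolation step can be sketched in one dimension: each grid point takes the value found upstream along the motion vector. The grid, motion field, and echo below are invented:

```python
def semi_lagrangian_step(field, u, dt):
    # 1-D semi-Lagrangian advection: each grid point takes the value found
    # upstream at i - u*dt, linearly interpolated and clamped at the edges.
    n = len(field)
    def at(j):
        return field[min(max(j, 0), n - 1)]
    out = []
    for i in range(n):
        src = i - u * dt
        i0 = int(src // 1)
        frac = src - i0
        out.append((1 - frac) * at(i0) + frac * at(i0 + 1))
    return out

# An echo "blob" advected one grid cell per step by a uniform motion field.
echo = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
forecast = semi_lagrangian_step(echo, u=1.0, dt=1.0)
```

In the paper the motion field comes from the particle-filter-blended optical flow vectors rather than a uniform velocity.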

  20. Adaptive texture filtering for defect inspection in ultrasound images

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Segal, Andrew C.; Lovewell, Brian; Nash, Charles

    1993-05-01

    The use of ultrasonic imaging to analyze defects and characterize materials is critical in the development of non-destructive testing and non-destructive evaluation (NDT/NDE) tools for manufacturing. To develop better quality control and reliability in the manufacturing environment, advanced image processing techniques are useful. For example, through the use of texture filtering on ultrasound images, we have been able to filter characteristic textures from highly textured C-scan images of materials. The materials have highly regular characteristic textures which are of the same resolution and dynamic range as other important features within the image. By applying texture filters and adaptively modifying their filter response, we have examined a family of filters for removing these textures.

  1. Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.

    PubMed

    Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M

    2015-03-01

    Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133 keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations.
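The calculation behind such filter design reduces, in its simplest form, to narrow-beam exponential attenuation through the filter components in series. A minimal sketch with invented (not tabulated) cross-section values:

```python
import math

def transmission(components):
    # Narrow-beam transmission through filter components in series:
    # T = exp(-sum_i Sigma_i * t_i), with Sigma the macroscopic
    # cross-section [1/cm] and t the component thickness [cm].
    return math.exp(-sum(sigma * t for sigma, t in components))

# Illustrative values only: the main filter material is nearly transparent
# at its interference minimum and opaque off-band; a thin additive
# component cleans up secondary transmission windows.
T_window  = transmission([(0.05, 20.0), (0.8, 0.5)])  # at the quasi-monoenergetic line
T_offband = transmission([(1.5, 20.0), (0.8, 0.5)])   # away from the window
```

A thick main filter thus passes a usable fraction of the beam in its window while suppressing off-band energies by many orders of magnitude, which is what produces the quasi-monoenergetic line.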

  2. An efficient incremental learning mechanism for tracking concept drift in spam filtering

    PubMed Central

    Sheu, Jyh-Jian; Chu, Ko-Tsung; Li, Nien-Feng; Lee, Cheng-Chi

    2017-01-01

    This research conducts an in-depth analysis of spam and proposes an efficient spam filtering method able to adapt to a dynamic environment. We focus on analyzing the email's header and apply a decision tree data mining technique to find association rules about spam. We then propose an efficient systematic filtering method based on these association rules. Our systematic method has the following major advantages: (1) it checks only the header sections of emails, unlike current spam filtering methods that must fully analyze an email's content, while the email filtering accuracy is expected to be enhanced; (2) to address the problem of concept drift, we propose a window-based technique to estimate the condition of concept drift for each unknown email, which helps our filtering method recognize the occurrence of spam; and (3) we propose an incremental learning mechanism for our filtering method to strengthen its ability to adapt to the dynamic environment. PMID:28182691
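A window-based drift check of the general kind described above can be sketched by comparing the error rate over a recent window with the long-run rate; the window size, threshold, and error stream below are invented and are not the paper's estimator:

```python
from collections import deque

class DriftWindow:
    # Sliding window over recent classification outcomes (1 = error).
    # A windowed error rate far above the long-run rate is taken as
    # evidence that the concept (spammer behaviour) has drifted.
    def __init__(self, size):
        self.recent = deque(maxlen=size)
        self.total = 0
        self.errors = 0

    def add(self, is_error):
        self.recent.append(is_error)
        self.total += 1
        self.errors += is_error

    def drift_suspected(self, gap=0.2):
        if not self.recent:
            return False
        window_rate = sum(self.recent) / len(self.recent)
        overall_rate = self.errors / self.total
        return (window_rate - overall_rate) > gap

drifting = DriftWindow(size=10)
for _ in range(90):
    drifting.add(0)      # filter doing well on the old mail distribution
for _ in range(10):
    drifting.add(1)      # sudden burst of misclassified mail

stable = DriftWindow(size=10)
for _ in range(100):
    stable.add(0)
```

When drift is suspected, an incremental learner would retrain or update its rules on the recent window.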

  3. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have weak points: applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition, which occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.

  4. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using a median filter to estimate background illumination, showed the lowest coefficient of variation in the red component; the quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using a median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
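The dividing method with a median-filter background estimate can be sketched directly: estimate the background with a median filter, divide the image by it, and rescale. A minimal sketch on an invented synthetic image:

```python
import statistics

def median_background(img, radius):
    # Median filter with clamped borders: estimates the slowly varying
    # background illumination of the image.
    h, w = len(img), len(img[0])
    return [[statistics.median(
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1))
             for x in range(w)] for y in range(h)]

def divide_correct(img, radius=1, eps=1e-6):
    # "Dividing method": pixel / estimated background, rescaled by the
    # mean background so the output stays in the original intensity range.
    bg = median_background(img, radius)
    mean_bg = sum(map(sum, bg)) / (len(img) * len(img[0]))
    return [[img[y][x] / (bg[y][x] + eps) * mean_bg
             for x in range(len(img[0]))] for y in range(len(img))]

# A uniform scene under a left-to-right illumination gradient: after
# correction the image should be nearly flat.
img = [[10.0 * (1.0 + 0.1 * x) for x in range(5)] for y in range(5)]
corrected = divide_correct(img)
```

On real fundus images the median window must be much larger than any vessel so that vasculature is not absorbed into the background estimate.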

  5. Noise reduction techniques for Bayer-matrix images

    NASA Astrophysics Data System (ADS)

    Kalevo, Ossi; Rantanen, Henry

    2002-04-01

    In this paper, arrangements for applying noise reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data, which requires that the raw Bayer-matrix image data available from the image sensor first be interpolated using a Color Filter Array Interpolation (CFAI) method. Alternatively, the raw Bayer-matrix image data can be processed directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid, and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements, and the amount of memory needed. A solution that improves the preservation of details in NR filtering before the CFAI is also proposed.

  6. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  7. The constrained discrete-time state-dependent Riccati equation technique for uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Chang, Insu

    The objective of the thesis is to introduce a relatively general nonlinear controller/estimator synthesis framework using a special type of state-dependent Riccati equation technique. The continuous-time state-dependent Riccati equation (SDRE) technique is extended to discrete time under input and state constraints, yielding the constrained (C) discrete-time (D) SDRE, referred to as CD-SDRE. For the latter, stability analysis and calculation of a region of attraction are carried out. The derivation of the D-SDRE under state-dependent weights is provided. Stability of the D-SDRE feedback system is established using a Lyapunov stability approach. A receding horizon strategy is used to take into account the constraints on the D-SDRE controller. The stability condition of the CD-SDRE controller is analyzed by using a switched system. The use of the CD-SDRE scheme in the presence of constraints is then systematically demonstrated by applying it to problems of spacecraft formation orbit reconfiguration under limited thruster performance. Simulation results demonstrate the efficacy and reliability of the proposed CD-SDRE. The CD-SDRE technique is further investigated for cases with uncertainties in the nonlinear systems to be controlled. First, system stability under each of the controllers in the robust CD-SDRE technique is separately established. The stability of the closed-loop system under the robust CD-SDRE controller is then proven based on the stability of each control system comprising the switching configuration. A high-fidelity dynamical model of spacecraft attitude motion in three-dimensional space is derived with a partially filled fuel tank, assumed to exhibit the first fuel slosh mode. The proposed robust CD-SDRE controller is then applied to the spacecraft attitude control system to stabilize its motion in the presence of uncertainties characterized by the first fuel slosh mode. The performance of the robust CD-SDRE technique is discussed.
Subsequently, filtering techniques are investigated using the D-SDRE technique. A detailed derivation of the D-SDRE-based filter (D-SDREF) is provided under the assumption of Gaussian noise, and the error between the measured and estimated signals is proven to be input-to-state stable. For non-Gaussian noise, we propose a filter combining the D-SDREF and the particle filter (PF), named the combined D-SDRE/PF. Algorithms for both filtering techniques are provided. Several filtering techniques are compared on challenging numerical examples to show the reliability and efficacy of the proposed D-SDREF and the combined D-SDRE/PF.

  8. Polarization and Color Filtering Applied to Enhance Photogrammetric Measurements of Reflective Surfaces

    NASA Technical Reports Server (NTRS)

    Wells, Jeffrey M.; Jones, Thomas W.; Danehy, Paul M.

    2005-01-01

    Techniques for enhancing photogrammetric measurement of reflective surfaces by reducing noise were developed utilizing principles of light polarization. Signal selectivity with polarized light was also compared to signal selectivity using chromatic filters. Combining the principles of linear cross polarization and color selectivity enhanced signal-to-noise ratios by as much as 800-fold; more typical improvements from combining polarization and color selectivity were about 100-fold. We review polarization-based techniques and present experimental results comparing the performance of traditional retroreflective targeting materials, corner-cube targets returning depolarized light, and color selectivity.

  9. New method for propagating the square root covariance matrix in triangular form. [using Kalman-Bucy filter

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
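The square-root representation at the heart of such methods can be sketched as follows; note that this toy version re-factors P at each step, whereas triangular algorithms of the kind described in the paper update the factor directly without ever forming P. The system matrices are invented:

```python
def cholesky_lower(A):
    # Lower-triangular factor S with S S^T = A.
    n = len(A)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(S[i][k] * S[j][k] for k in range(j))
            if i == j:
                S[i][j] = (A[i][i] - s) ** 0.5
            else:
                S[i][j] = (A[i][j] - s) / S[j][j]
    return S

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def propagate_sqrt(S, Phi, Q):
    # Time update of the covariance square root: P+ = Phi P Phi^T + Q,
    # re-factored into lower-triangular form. Working with S rather than
    # P keeps the represented covariance symmetric positive semidefinite
    # by construction.
    P = matmul(Phi, matmul(matmul(S, transpose(S)), transpose(Phi)))
    Pplus = [[P[i][j] + Q[i][j] for j in range(len(P))] for i in range(len(P))]
    return cholesky_lower(Pplus)

S0 = cholesky_lower([[4.0, 1.0], [1.0, 2.0]])
Phi = [[1.0, 1.0], [0.0, 1.0]]   # constant-velocity transition
Q = [[0.1, 0.0], [0.0, 0.1]]
S1 = propagate_sqrt(S0, Phi, Q)
```

The propagated factor stays lower triangular, which is the convention that makes it compatible with triangular measurement updates such as Carlson's.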

  10. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  11. Effect of different thickness of material filter on Tc-99m spectra and performance parameters of gamma camera

    NASA Astrophysics Data System (ADS)

    Nazifah, A.; Norhanna, S.; Shah, S. I.; Zakaria, A.

    2014-11-01

    This study investigated the effects of the material filter technique on Tc-99m spectra and on the performance parameters of a Philips ADAC Forte dual-head gamma camera. The thickness of the material filter was selected on the basis of the percentage attenuation of various gamma-ray energies by different thicknesses of zinc. A cylindrical source tank of the NEMA single photon emission computed tomography (SPECT) Triple Line Source Phantom, filled with water injected with the Tc-99m radionuclide, was used for spectra, uniformity and sensitivity measurements. A vinyl plastic tube was used as a line source for spatial resolution. Images for uniformity were reconstructed by the filtered back projection method, using a Butterworth filter of order 5 and cutoff frequency 0.35 cycles/cm. Chang's attenuation correction method was applied with a linear attenuation coefficient of 0.13/cm. With the material filter, the count rate was decreased in both the Compton region and the photopeak region of the Tc-99m energy spectrum. Spatial resolution was improved; however, the uniformity of the tomographic image was equivocal, and system volume sensitivity was reduced. Since the material filter improved the system's spatial resolution, the technique may be used in phantom studies to improve image quality.

  12. Gabor filter for the segmentation of skin lesions from ultrasonographic images

    NASA Astrophysics Data System (ADS)

    Petrella, Lorena I.; Gómez, W.; Alvarenga, André V.; Pereira, Wagner C. A.

    2012-05-01

    The present work applies a Gabor filter bank for texture analysis of skin lesion images obtained by ultrasound biomicroscopy. The regions affected by lesions were differentiated from surrounding tissue in all analyzed cases; however, the accuracy of the traced borders showed some limitations in part of the images. Future steps are being contemplated to enhance the technique's performance.
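A real-valued Gabor kernel and its orientation selectivity can be sketched as follows; the kernel size, wavelength, and test texture are invented and far smaller than anything used on real ultrasound images:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    # Real (even) Gabor kernel: a cosine carrier at orientation theta
    # under a Gaussian envelope; it responds strongly to texture with
    # the matching period and orientation.
    half = size // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kern.append(row)
    return kern

def filter_response(img, kern):
    # Peak magnitude of the valid-mode correlation with the kernel.
    kh = len(kern)
    resp = 0.0
    for y in range(len(img) - kh + 1):
        for x in range(len(img[0]) - kh + 1):
            v = sum(kern[j][i] * img[y + j][x + i]
                    for j in range(kh) for i in range(kh))
            resp = max(resp, abs(v))
    return resp

# Vertical stripes of period 4 excite the 0-degree kernel of matching
# wavelength far more strongly than the 90-degree one.
stripes = [[math.cos(2 * math.pi * x / 4) for x in range(9)] for y in range(9)]
k0  = gabor_kernel(7, wavelength=4, theta=0.0, sigma=2.0)
k90 = gabor_kernel(7, wavelength=4, theta=math.pi / 2, sigma=2.0)
```

A filter bank sweeps theta and wavelength; the per-filter response magnitudes then form the texture features used to separate lesion from surrounding tissue.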

  13. Aimulet: a multilingual spatial optical voice card terminal for location and direction based information services

    NASA Astrophysics Data System (ADS)

    Itoh, Hideo; Lin, Xin; Kaji, Ryosaku; Niwa, Tatsuya; Nakamura, Yoshiyuki; Nishimura, Takuichi

    2006-01-01

    The National Institute of Advanced Industrial Science and Technology (AIST) in Japan has been developing Aimulet, a compact, low-power information terminal for personal information services. The original Aimulet, called Aimulet ver. 1 or CoBIT, provides location- and direction-sensitive information services without batteries. However, Aimulet ver. 1 has two open issues: multiplexing and demultiplexing of multiple contents, and operation under sunlight. The former is solved by a wavelength-multiplexing technique using LED emitters of different wavelengths together with dielectric optical filters. The latter is solved by new micro-spherical solar cells with a visible-light-eliminating optical filter and a new light irradiation design. These techniques were applied at EXPO 2005 Aichi, Japan, and introduced to the public. The former technique is applied in Aimulet GH, used in the Orange Hall of the Global House, a science museum exhibiting a fossil of a frozen mammoth. The latter technique is applied in Aimulet LA, used in Laurie Anderson's WALK project in the Japanese Garden.

  14. Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering

    NASA Astrophysics Data System (ADS)

    Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.

    2018-04-01

    Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
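The matched-filtering idea, correlating data against a unit-norm template so a weak buried line stands out above the noise, can be sketched in one dimension (the paper applies the filter to visibilities in the Fourier domain; the template, amplitudes, and noise level below are invented):

```python
import math
import random

def matched_filter(data, template):
    # Correlate the data with the unit-norm template; the output peaks
    # where the data matches the template shape.
    T = len(template)
    norm = math.sqrt(sum(v * v for v in template))
    return [sum(template[k] * data[n + k] for k in range(T)) / norm
            for n in range(len(data) - T + 1)]

random.seed(1)
template = [math.exp(-((k - 10) ** 2) / 8.0) for k in range(21)]  # line profile
clean = [0.0] * 200
for k, v in enumerate(template):
    clean[90 + k] += 0.5 * v            # weak line buried at sample 90
noisy = [s + random.gauss(0.0, 0.1) for s in clean]

out = matched_filter(noisy, template)
peak = max(range(len(out)), key=lambda n: abs(out[n]))
```

The filter output concentrates the line's energy into one sample, which is why the detection significance improves over summing a fixed aperture.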

  15. Comparison of edge detection techniques for M7 subtype Leukemic cell in terms of noise filters and threshold value

    NASA Astrophysics Data System (ADS)

    Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande

    2017-11-01

    This paper focuses on identifying suitable threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The aim is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, since feature extraction of the cell depends greatly on image segmentation (edge detection). First, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because diagnostic studies of this subtype are lacking. Next, noise filters are applied to enhance image quality; by comparing images with no filter, a median filter, and an average filter, useful information can be acquired. Threshold values of 0, 0.25, and 0.5 are tested. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
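
    A minimal sketch of such a pipeline, with a synthetic image and SciPy's generic median and Sobel filters standing in for the paper's exact implementation (the image and threshold are illustrative, not the M7 AML data):

```python
import numpy as np
from scipy import ndimage

# Hypothetical edge-detection comparison pipeline: median noise filter, then
# a thresholded Sobel gradient magnitude.
rng = np.random.default_rng(1)

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                        # bright "cell" on dark background
noisy = image + rng.normal(0.0, 0.2, image.shape)

denoised = ndimage.median_filter(noisy, size=3)  # noise-filter stage

gx = ndimage.sobel(denoised, axis=0)             # Sobel gradients
gy = ndimage.sobel(denoised, axis=1)
magnitude = np.hypot(gx, gy)
magnitude /= magnitude.max()                     # normalize to [0, 1]

threshold = 0.5                                  # one of the tested values
edges = magnitude > threshold
```

    Swapping `ndimage.median_filter` for a uniform (average) filter, or varying `threshold` over 0, 0.25, and 0.5, reproduces the kind of grid comparison the paper describes.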

  16. Multi-band transmission color filters for multi-color white LEDs based visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Qixia; Zhu, Zhendong; Gu, Huarong; Chen, Mengzhu; Tan, Qiaofeng

    2017-11-01

    Light-emitting diode (LED) based visible light communication (VLC) can provide license-free bands, high data rates, and high security levels, making it a promising technique expected to be widely applied in the future. Multi-band transmission color filters with sufficient peak transmittance and suitable bandwidth play a pivotal role in boosting the signal-to-noise ratio in VLC systems. In this paper, multi-band transmission color filters with bandwidths of tens of nanometers are designed by a simple analytical method. Experimental results for one-dimensional (1D) and two-dimensional (2D) tri-band color filters demonstrate the effectiveness of the multi-band transmission color filters and the corresponding analytical method.

  17. Material characterization and defect inspection in ultrasound images

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Segal, Andrew C.; Lovewell, Brian; Mahdavieh, Jacob; Ross, Joseph; Nash, Charles

    1992-08-01

    The use of ultrasonic imaging to analyze defects and characterize materials is critical in the development of non-destructive testing and non-destructive evaluation (NDT/NDE) tools for manufacturing. To develop better quality control and reliability in the manufacturing environment, advanced image processing techniques are useful. For example, through the use of texture filtering on ultrasound images, we have been able to filter characteristic textures from highly textured C-scan images of materials. The materials have highly regular characteristic textures of the same resolution and dynamic range as other important features within the image. By applying texture filters and adaptively modifying their filter response, we have examined a family of filters for removing these textures.

  18. Encapsulation of the UV filters ethylhexyl methoxycinnamate and butyl methoxydibenzoylmethane in lipid microparticles: effect on in vivo human skin permeation.

    PubMed

    Scalia, S; Mezzena, M; Ramaccini, D

    2011-01-01

    Lipid microparticles loaded with the UVB filter ethylhexyl methoxycinnamate (EHMC) and the UVA filter butyl methoxydibenzoylmethane (BMDBM) were evaluated for their effect on the sunscreen agents' percutaneous penetration. Microparticles loaded with EHMC or BMDBM were prepared by the melt emulsification technique using stearic acid or glyceryl behenate, respectively, as the lipidic material, and hydrogenated phosphatidylcholine as the surfactant. Nonencapsulated BMDBM and EHMC in conjunction with blank microparticles, or equivalent amounts of the two UV filters loaded in the lipid microparticles, were introduced into oil-in-water emulsions and applied to human volunteers. Skin penetration was investigated in vivo by the tape-stripping technique. For the cream with the nonencapsulated sunscreen agents, the percentages of the applied dose diffused into the stratum corneum were 32.4 ± 4.1% and 30.3 ± 3.3% for EHMC and BMDBM, respectively. A statistically significant reduction in the in vivo skin penetration, to 25.3 ± 5.5% for EHMC and 22.7 ± 5.4% for BMDBM, was achieved by the cream containing the microencapsulated UV filters. The inhibiting effect on permeation attained by the lipid microparticles was more marked (45-56.3% reduction) in the deeper stratum corneum layers. The reduced percutaneous penetration of BMDBM and EHMC achieved by the lipid microparticles should preserve the UV filter efficacy and limit potential toxicological risks. Copyright © 2011 S. Karger AG, Basel.

  19. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    PubMed

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.

  20. Kalman Filter Techniques for Accelerated Cartesian Dynamic Cardiac Imaging

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2012-01-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories, because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and SNR. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. PMID:22926804

  1. Anomaly and signature filtering improve classifier performance for detection of suspicious access to EHRs.

    PubMed

    Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila

    2011-01-01

    Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P=1.6×10−6. Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful.

  2. Anomaly and Signature Filtering Improve Classifier Performance For Detection Of Suspicious Access To EHRs

    PubMed Central

    Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila

    2011-01-01

    Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P=1.6×10−6. Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful. PMID:22195129

  3. Describing litho-constrained layout by a high-resolution model filter

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Chun

    2008-05-01

    A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts. Litho-constrained layouts are layouts that are difficult to pattern, or are highly sensitive to process fluctuations, under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly to the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components that are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. This method successfully captured all the hot-spots, with less than 15% overshoot, on a realistic 80 mm2 full-chip M1 layout at the 65 nm technology node. A step-by-step derivation of this HRM technique is presented in this paper.

  4. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component; the quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
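
    As a concrete illustration of the transfer-function idea behind these enhancement techniques, here is plain global histogram equalization in NumPy; CLAHE applies the same mapping per tile with a clip limit on the histogram. The synthetic low-contrast image is an assumption, not the study's fundus data:

```python
import numpy as np

# Global histogram equalization: map each gray level through the normalized
# cumulative histogram, stretching a narrow intensity band to the full range.
rng = np.random.default_rng(2)

# Low-contrast 8-bit patch: values squeezed into a narrow band around 120.
image = rng.normal(120, 10, (64, 64)).clip(0, 255).astype(np.uint8)

hist = np.bincount(image.ravel(), minlength=256)
cdf = np.cumsum(hist).astype(float)
lo = cdf[cdf > 0].min()                          # count of the lowest occupied bin
cdf = (cdf - lo) / (cdf[-1] - lo)                # normalize to [0, 1]
equalized = (cdf[image] * 255).astype(np.uint8)
```

    The equalized patch spans the full 0-255 range with a much larger standard deviation than the input, which is the contrast gain that CLAHE then localizes tile by tile.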

  5. Mobile indoor localization using Kalman filter and trilateration technique

    NASA Astrophysics Data System (ADS)

    Wahid, Abdul; Kim, Su Mi; Choi, Jaeho

    2015-12-01

    In this paper, an indoor localization method based on Kalman-filtered RSSI is presented. The indoor communications environment is rather harsh for mobile devices, since a substantial number of objects distort the RSSI signals; fading and interference are the main sources of the distortion. A Kalman filter is adopted to filter the RSSI signals, and the trilateration method is applied to obtain robust and accurate coordinates of the mobile station. From indoor experiments using WiFi stations, we found that the proposed algorithm can provide higher accuracy with relatively lower power consumption than a conventional method.
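
    The two stages can be sketched as follows; the filter noise values, anchor layout, and distance inputs are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Sketch of the two stages: a scalar Kalman filter smooths noisy RSSI, and
# linearized least-squares trilateration recovers position from distances
# to three anchors.

def kalman_smooth(z, q=1e-3, r=4.0):
    """Scalar Kalman filter assuming a (nearly) constant underlying RSSI."""
    x, p = z[0], 1.0
    out = []
    for zi in z:
        p += q                    # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        x += k * (zi - x)         # update with measurement zi
        p *= 1 - k
        out.append(x)
    return np.array(out)

def trilaterate(anchors, d):
    """Position (x, y) from distances d to anchors, by linearizing the circle
    equations against the first anchor."""
    x1, y1 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d[0]**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

rng = np.random.default_rng(3)
rssi = -60.0 + rng.normal(0.0, 2.0, 200)        # noisy readings of a -60 dBm signal
smoothed = kalman_smooth(rssi)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
dists = [float(np.hypot(*(true_pos - np.array(a)))) for a in anchors]
position = trilaterate(anchors, dists)
```

    In practice the smoothed RSSI would first be converted to distances via a path-loss model before trilateration; that conversion is omitted here.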

  6. Fabrication of dense wavelength division multiplexing filters with large useful area

    NASA Astrophysics Data System (ADS)

    Lee, Cheng-Chung; Chen, Sheng-Hui; Hsu, Jin-Cherng; Kuo, Chien-Cheng

    2006-08-01

    Dense Wavelength Division Multiplexers (DWDM), a kind of narrow band-pass filter, are extremely sensitive to the optical thickness error in each composite layer. A large useful coating area is therefore extremely difficult to achieve because of the uniformity problem. To enlarge the useful coating area it is necessary to improve both the design and the fabrication. In this study, we discuss how the tooling factors at different positions and for different materials are related to the optical performance of the design. 100 GHz DWDM filters were fabricated by E-gun evaporation with ion-assisted deposition (IAD). To improve the coating uniformity, an analysis technique called the shaping tooling factor (STF) was used to analyze the deviation of the optical thickness in different materials so as to enlarge the useful coating area. A technique of etching the deposited layers with oxygen ions was also introduced. When the above techniques were applied in the fabrication of 100 GHz DWDM filters, the uniformity was better than +/-0.002% over an area 72 mm in diameter and better than +/-0.0006% over 20 mm in diameter.

  7. An Integrated Approach for Gear Health Prognostics

    NASA Technical Reports Server (NTRS)

    He, David; Bechhoefer, Eric; Dempsey, Paula; Ma, Jinghua

    2012-01-01

    In this paper, an integrated approach for gear health prognostics using particle filters is presented. The presented method effectively addresses the issues in applying particle filters to gear health prognostics by integrating several new components into the particle filter: (1) data-mining-based techniques to effectively define the degradation state transition and measurement functions, using a one-dimensional health index obtained by a whitening transform; and (2) an unbiased l-step-ahead remaining useful life (RUL) estimator updated with measurement errors. The feasibility of the presented prognostics method is validated using data from a spiral bevel gear case study.

  8. Towards Polarised Antiprotons: Machine Developments for Spin-Filtering Studies

    NASA Astrophysics Data System (ADS)

    Lenisa, Paolo

    2016-02-01

    We address the commissioning of the experimental equipment and the machine studies required for the first spin-filtering experiment with protons at the COSY ring in Jülich (Germany) at a beam kinetic energy of 49.3 MeV. The implementation of a low-beta insertion made it possible to achieve beam lifetimes of 8000 s in the presence of a dense polarized hydrogen storage cell target. The developed techniques can be directly applied to antiproton machines and allow for the determination of the spin-dependent pbar-p cross sections via spin-filtering.

  9. Simultaneous Determination of Octinoxate, Oxybenzone, and Octocrylene in a Sunscreen Formulation Using Validated Spectrophotometric and Chemometric Methods.

    PubMed

    Abdel-Ghany, Maha F; Abdel-Aziz, Omar; Ayad, Miriam F; Mikawy, Neven N

    2015-01-01

    Accurate, reliable, and sensitive spectrophotometric and chemometric methods were developed for simultaneous determination of octinoxate (OMC), oxybenzone (OXY), and octocrylene (OCR) in a sunscreen formulation without prior separation steps, including derivative ratio spectra zero crossing (DRSZ), double divisor ratio spectra derivative (DDRD), mean centering ratio spectra (MCR), and partial least squares (PLS-2). With the DRSZ technique, the UV filters could be determined in the ranges of 0.5-13.0, 0.3-9.0, and 0.5-9.0 μg/mL at 265.2, 246.6, and 261.8 nm, respectively. By utilizing the DDRD technique, UV filters could be determined in the above ranges at 237.8, 241.0, and 254.2 nm, respectively. With the MCR technique, the UV filters could be determined in the above ranges at 381.7, 383.2, and 355.6 nm, respectively. The PLS-2 technique successfully quantified the examined UV filters in the ranges of 0.5-9.3, 0.3-7.1, and 0.5-6.9 μg/mL, respectively. All the methods were validated according to the International Conference on Harmonization guidelines and successfully applied to determine the UV filters in pure form, laboratory-prepared mixtures, and a sunscreen formulation. The obtained results were statistically compared with reference and reported methods of analysis for OXY, OMC, and OCR, and there were no significant differences with respect to accuracy and precision of the adopted techniques.

  10. ASCAT soil moisture data assimilation through the Ensemble Kalman Filter for improving streamflow simulation in Mediterranean catchments

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2016-04-01

    Assimilation of surface soil moisture (SSM) observations obtained from remote sensing has been shown to improve streamflow prediction at different time scales of hydrological modeling. Different sensors and methods have been tested for SSM estimation, especially in the microwave region of the electromagnetic spectrum. Available passive microwave sensors include the Advanced Microwave Scanning Radiometer - Earth Observation System (AMSR-E) onboard the Aqua satellite and the Soil Moisture and Ocean Salinity (SMOS) mission, while active microwave systems include the scatterometers (SCAT) onboard the European Remote Sensing satellites (ERS-1/2) and the Advanced Scatterometer (ASCAT) onboard the MetOp-A satellite. Data assimilation (DA) comprises different techniques that have been applied in hydrology and other fields for decades, including, among others, Kalman filtering (KF), variational assimilation, and particle filtering. From the initial KF method, different techniques were developed to suit different systems. The Ensemble Kalman Filter (EnKF), extensively applied in hydrological modeling, has as its main advantage the capability to deal with nonlinear model dynamics without linearizing the model equations. The objective of this study was to investigate whether data assimilation of ASCAT SSM observations through the EnKF method could improve streamflow simulation of Mediterranean catchments with the complex hydrological model TOPLATS. The DA technique was programmed in FORTRAN and applied to hourly simulations of the TOPLATS catchment model. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) was applied in its lumped version to two Mediterranean catchments of similar size, located in northern Spain (Arga, 741 km2) and central Italy (Nestore, 720 km2). The model performs separate computations of the energy and water balances.
In those balances, the soil is divided into two layers, the upper surface zone (SZ) and the deeper transmission zone (TZ). In this study, the SZ depth was fixed at 5 cm for adequate assimilation of the observed data. The available data were distributed as follows: first, the model was calibrated for the 2001-2007 period; then the 2007-2010 period was used for satellite data rescaling purposes; finally, data assimilation was applied during the validation (2010-2013) period. Application of the EnKF required the following steps: 1) rescaling of satellite data; 2) transformation of the rescaled data into the Soil Water Index (SWI) through a moving average filter, with a calibrated value of T = 9; 3) generation of a 50-member ensemble through perturbation of inputs (rainfall and temperature) and three selected parameters; 4) validation of the ensemble through compliance with two criteria based on the ensemble's spread, mean square error, and skill; and 5) Kalman gain calculation. A comparison of three satellite data rescaling techniques was also performed: 1) cumulative distribution function (CDF) matching, 2) variance matching, and 3) linear least-squares regression. The results of this study showed slight improvements of the hourly Nash-Sutcliffe Efficiency (NSE) in both catchments with the different rescaling methods evaluated, and larger improvements in terms of seasonal simulated volume error reduction.
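
The Kalman-gain step of the EnKF reduces to a few lines for a scalar soil-moisture state. The ensemble size (50) matches the study, but the truth, spread, and observation-error values below are invented for illustration; this is a generic perturbed-observation EnKF update, not the TOPLATS implementation:

```python
import numpy as np

# Perturbed-observation EnKF update for a scalar surface-soil-moisture state.
rng = np.random.default_rng(4)

n_ens = 50
truth = 0.30                                   # "true" SSM (volumetric fraction)
ensemble = rng.normal(0.20, 0.05, n_ens)       # biased, spread-out forecast ensemble

obs = truth + rng.normal(0.0, 0.02)            # one rescaled satellite SSM value
obs_err = 0.02

p_f = np.var(ensemble, ddof=1)                 # forecast error variance from ensemble
gain = p_f / (p_f + obs_err**2)                # Kalman gain (scalar case)
perturbed = obs + rng.normal(0.0, obs_err, n_ens)
analysis = ensemble + gain * (perturbed - ensemble)
```

The analysis ensemble moves toward the observation and tightens its spread, which is exactly the behavior the ensemble-validation criteria (spread, mean square error, skill) are meant to check.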

  11. A motion-compensated image filter for low-dose fluoroscopy in a real-time tumor-tracking radiotherapy system

    PubMed Central

    Miyamoto, Naoki; Ishikawa, Masayori; Sutherland, Kenneth; Suzuki, Ryusuke; Matsuura, Taeko; Toramatsu, Chie; Takao, Seishin; Nihongi, Hideaki; Shimizu, Shinichi; Umegaki, Kikuo; Shirato, Hiroki

    2015-01-01

    In the real-time tumor-tracking radiotherapy system, a surrogate fiducial marker inserted in or near the tumor is detected by fluoroscopy to realize respiratory-gated radiotherapy. The imaging dose caused by fluoroscopy should be minimized. In this work, an image processing technique is proposed for tracing a moving marker in low-dose imaging. The proposed tracking technique is a combination of a motion-compensated recursive filter and template pattern matching. The proposed image filter can reduce motion artifacts resulting from the recursive process by determining the region of interest for the next frame according to the current marker position in the fluoroscopic images. The effectiveness of the proposed technique and the expected clinical benefit were examined by phantom experimental studies with actual tumor trajectories generated from clinical patient data. It was demonstrated that the marker motion could be traced in low-dose imaging by applying the proposed algorithm, with acceptable registration error and a high pattern recognition score in all trajectories, although some trajectories could not be tracked with conventional spatial filters or without image filtering. The positional accuracy is expected to be kept within ±2 mm. The total computation time required to determine the marker position is a few milliseconds. The proposed image processing technique is applicable for imaging dose reduction. PMID:25129556
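
    The two ingredients, template matching to localize the marker and a position-aware recursive temporal filter, can be illustrated with a toy NumPy example. The frame sizes, marker motion, noise level, and recursive weight are all invented; this is a conceptual sketch, not the clinical implementation:

```python
import numpy as np

# Toy marker tracking: brute-force SSD template matching plus a
# motion-compensated recursive (temporal) filter.
rng = np.random.default_rng(6)

def locate(frame, template):
    """Top-left corner of the best match (brute-force sum of squared differences)."""
    th, tw = template.shape
    best, pos = np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            ssd = np.sum((frame[i:i+th, j:j+tw] - template) ** 2)
            if ssd < best:
                best, pos = ssd, (i, j)
    return pos

template = np.ones((3, 3))                      # bright 3x3 fiducial marker

# Synthetic low-dose frames: marker moves 2 px per frame in x, heavy noise.
frames = []
for t in range(4):
    f = rng.normal(0.0, 0.3, (32, 32))
    f[10:13, 5 + 2 * t:8 + 2 * t] += 1.0
    frames.append(f)

# Motion-compensated recursive filter: shift the running average to the newly
# detected marker position before blending, so the marker is not smeared.
prev_pos = locate(frames[0], template)
acc = frames[0].copy()
for f in frames[1:]:
    pos = locate(f, template)
    shift = (pos[0] - prev_pos[0], pos[1] - prev_pos[1])
    acc = 0.5 * f + 0.5 * np.roll(acc, shift, axis=(0, 1))
    prev_pos = pos
```

    Without the shift before blending, the recursive average would leave a motion-blurred trail behind the marker, which is the artifact the proposed filter is designed to suppress.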

  12. Real-time emergency forecasting technique for situation management systems

    NASA Astrophysics Data System (ADS)

    Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.

    2018-05-01

    The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasts produced by any emergency computational model used for decision making in situation management systems. The computational models are improved by the Improved Brown's method, which applies fractal dimension to forecast short time series received from sensors and control systems. The reliability of the emergency forecasting results is ensured by filtering invalid sensor data using correlation analysis.
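
    For reference, classical Brown's double exponential smoothing, the base of the improved method, takes only a few lines; the fractal-dimension refinement of the smoothing constant described in the article is not reproduced here (alpha is simply assumed fixed):

```python
import numpy as np

# Brown's double exponential smoothing: two smoothing passes yield a level and
# a trend, from which an h-step-ahead forecast is formed.

def brown_forecast(series, alpha=0.4, horizon=1):
    """Return the horizon-step-ahead forecast for a short time series."""
    s1 = s2 = series[0]
    for y in series[1:]:
        s1 = alpha * y + (1 - alpha) * s1      # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + horizon * trend

# A short, linearly rising "sensor" series: the forecast should extend the trend.
series = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
next_value = brown_forecast(series, alpha=0.5, horizon=1)
```

    On this rising series the forecast continues upward past the last observation, lagging the true trend slightly because of the smoothing.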

  13. Volterra model of the parametric array loudspeaker operating at ultrasonic frequencies.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2016-11-01

    The parametric array loudspeaker (PAL) is an application of the parametric acoustic array in air, which can be applied to transmit a narrow audio beam from an ultrasonic emitter. However, nonlinear distortion is very perceptible in the audio beam. Modulation methods to reduce the nonlinear distortion are available for on-axis far-field applications. For other applications, preprocessing techniques are wanting. In order to develop a preprocessing technique with general applicability to a wide range of operating conditions, the Volterra filter is investigated as a nonlinear model of the PAL in this paper. Limitations of the standard audio-to-audio Volterra filter are elaborated. An improved ultrasound-to-ultrasound Volterra filter is proposed and empirically demonstrated to be a more generic Volterra model of the PAL.

  14. Ultra-compact UHF Band-pass Filter Designed by Archimedes Spiral Capacitor and Shorted-loaded Stubs

    NASA Astrophysics Data System (ADS)

    Peng, Lin; Jiang, Xing

    2015-01-01

    UHF microstrip band-pass filters (BPFs) that are much smaller than previously reported BPFs are proposed in this communication. To achieve compactness, an Archimedes spiral capacitor and ground-loaded stubs are utilized to enhance the capacitances and inductance of a filter. Two compact BPFs, denoted BPF 1 and BPF 2, are designed by applying these techniques. The sizes of BPF 1 and BPF 2 are 0.062 λg × 0.056 λg and 0.047 λg × 0.043 λg, respectively, where λg is the guided wavelength at the centre frequency of the corresponding filter. The proposed filters were constructed and measured, and the measured results are in good agreement with the simulated ones.

  15. Coronagraph Focal-Plane Phase Masks Based on Photonic Crystal Technology: Recent Progress and Observational Strategy

    NASA Technical Reports Server (NTRS)

    Murakami, Naoshi; Nishikawa, Jun; Sakamoto, Moritsugu; Ise, Akitoshi; Oka, Kazuhiko; Baba, Naoshi; Murakami, Hiroshi; Tamura, Motohide; Traub, Wesley A.; Mawet, Dimitri

    2012-01-01

    A photonic crystal, an artificial periodic nanostructure of refractive indices, is one of the attractive technologies for coronagraph focal-plane masks aimed at direct imaging and characterization of terrestrial extrasolar planets. We manufactured an eight-octant phase mask (8OPM) and a vector vortex mask (VVM) very precisely using photonic crystal technology. Fully achromatic phase-mask coronagraphs can be realized by applying appropriate polarization filters to the masks. We carried out laboratory experiments on the polarization-filtered 8OPM coronagraph using the High-Contrast Imaging Testbed (HCIT), a state-of-the-art coronagraph simulator at the Jet Propulsion Laboratory (JPL). We report experimental results demonstrating 10⁻⁸-level contrast at several wavelengths over a 10% bandwidth around 800 nm. In addition, we present future prospects and an observational strategy for the photonic-crystal mask coronagraphs combined with differential imaging techniques to reach higher contrast. We propose applying a polarization-differential imaging (PDI) technique to the VVM coronagraph, for which we built a two-channel coronagraph using polarizing beam splitters to avoid the loss of intensity due to the polarization filters. We also propose applying an angular-differential imaging (ADI) technique to the 8OPM coronagraph; the 8OPM/ADI mode avoids the intensity loss due to a phase transition of the mask and provides a full field of view around central stars. We present results of preliminary laboratory demonstrations of the PDI and ADI observational modes with the phase-mask coronagraphs.

  16. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  17. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    NASA Astrophysics Data System (ADS)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for the different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applicable frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  18. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data as long as the underlying statistical assumptions remain valid.
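
    In the same spirit (a generic sketch, not the authors' Arecibo processing chain), a running-median detector can flag spectrum bins that deviate strongly from their local median and replace them; the window length and threshold below are illustrative choices:

```python
import numpy as np

def median_despike(x, win=5, k=5.0):
    """Flag samples deviating from the running median by more than
    k robust standard deviations (MAD-based) and replace them with it."""
    x = np.asarray(x, dtype=float)
    pad = win // 2
    xp = np.pad(x, pad, mode="edge")
    med = np.array([np.median(xp[i:i + win]) for i in range(len(x))])
    resid = x - med
    mad = np.median(np.abs(resid)) + 1e-12      # robust scale estimate
    bad = np.abs(resid) > k * 1.4826 * mad      # 1.4826 ~ MAD-to-sigma for Gaussian noise
    out = x.copy()
    out[bad] = med[bad]
    return out, bad

# Smooth spectrum plus one interference spike
spec = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)
spec[30] += 5.0                                  # simulated interference
clean, flags = median_despike(spec)
```

    The spike is detected and replaced by the local median, while the smooth underlying spectrum passes through nearly unchanged.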

  19. Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) for spaceborne measurements of CO

    NASA Astrophysics Data System (ADS)

    Johnson, Brian R.; Kampe, Thomas U.; Cook, William B.; Miecznik, Grzegorz; Novelli, Paul C.; Snell, Hilary E.; Turner-Valle, Jennifer A.

    2003-11-01

    An instrument concept for an Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) has been developed for measuring tropospheric carbon monoxide (CO) from space. The concept is based upon a correlation technique similar in nature to multi-order Fabry-Perot (FP) interferometer or gas filter radiometer techniques, which simultaneously measure atmospheric emission from several infrared vibration-rotation lines of CO. Correlation techniques provide a multiplex advantage for increased throughput, together with the high spectral resolution and selectivity necessary for profiling tropospheric CO. Use of unconventional multilayer interference filter designs improves CO spectral line correlation compared with the traditional FP multi-order technique, approaching the theoretical performance of gas filter correlation radiometry. In this implementation, however, the gas cell is replaced with a simple, robust solid interference filter. In addition to measuring CO, the correlation filter technique can be applied to measurements of other important gases such as carbon dioxide, nitrous oxide and methane. Imaging the scene onto a 2-D detector array enables a limited range of spectral sampling owing to the field-angle dependence of the filter transmission function. An innovative anamorphic optical system provides a relatively large instrument field-of-view for imaging along the orthogonal direction across the detector array. An important advantage of the IMOFPS concept is that it is a small, low-mass, high-spectral-resolution spectrometer with no moving parts. A small correlation spectrometer like IMOFPS would be well suited for global observations of CO2, CO, and CH4 from low Earth orbit, or for regional observations from geostationary orbit. A prototype instrument is under development for flight demonstration on an airborne platform, with potential applications to atmospheric chemistry, wildfire and biomass burning, and chemical dispersion monitoring.

  20. Applying traditional signal processing techniques to social media exploitation for situational understanding

    NASA Astrophysics Data System (ADS)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  1. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon and in tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often carried out, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green-vegetation filters) affect the pixel-level prediction accuracy of tree-species classification from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in central Florida, United States. Species classification using the fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green-vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy for six species classes is about 75%.

  2. Self-Powered Electrostatic Filter with Enhanced Photocatalytic Degradation of Formaldehyde Based on Built-in Triboelectric Nanogenerators.

    PubMed

    Feng, Yawei; Ling, Lili; Nie, Jinhui; Han, Kai; Chen, Xiangyu; Bian, Zhenfeng; Li, Hexing; Wang, Zhong Lin

    2017-12-26

    Recently, atmospheric pollution caused by particulate matter or volatile organic compounds (VOCs) has become a serious threat to human health. Consequently, it is highly desirable to develop an efficient purifying technique with a simple structure and low cost. In this study, by combining a triboelectric nanogenerator (TENG) and a photocatalysis technique, we demonstrate a concept of a self-powered filtering method for removing pollutants from the indoor atmosphere. The photocatalyst P25 or Pt/P25 was embedded on the surface of polymer-coated stainless steel wires, and the wires were woven into a filtering network. A strong electric field can be induced on this filtering network by the TENG, so that both an electrostatic adsorption effect and a TENG-enhanced photocatalytic effect can be achieved. Rhodamine B (RhB) steam was selected as the pollutant for demonstration. The amount of RhB adsorbed on the filter network in 1 min with the TENG was almost the same as that adsorbed in 15 min without it. Meanwhile, the degradation of RhB was increased by over 50% under the drive of the TENG. Furthermore, the device was applied to the degradation of formaldehyde, where the degradation efficiency was doubled under the drive of the TENG. This work extends the application of the TENG to self-powered electrochemistry; its design and concept can potentially be applied to haze governance, indoor air cleaning, and photocatalytic pollutant removal for environmental protection.

  3. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
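
    For orientation, the classical single-image bilateral filter combines a spatial weight with a photometric (range) weight so that averaging does not cross strong edges; the paper's generalization replaces the scalar range distance with a Mahalanobis distance under the full noise covariance of the aligned images. A minimal single-image sketch, with illustrative parameter values:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Edge-preserving smoothing: weights combine spatial closeness and
    photometric similarity, so averaging does not cross strong edges."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_range = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = w_spatial * w_range
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

# Noisy step edge: smoothing should reduce noise but keep the step
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((16, 8)), np.ones((16, 8))]) + 0.05 * rng.standard_normal((16, 16))
den = bilateral_filter(img)
```

    The range weight suppresses contributions from across the step, which is the edge-preservation property the generalized multi-image filter retains while additionally accounting for inter-image and spatial noise correlations.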

  4. Improving Estimates Of Phase Parameters When Amplitude Fluctuates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1989-01-01

    Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.

  5. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of the system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window-size selection and guaranteed positive definiteness of the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms, the extended Kalman filter and the cubature Kalman filter, for attitude estimation of a low Earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated through a comprehensive sensitivity analysis of system and environmental parameters, using extensive independent Monte Carlo simulations.
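
    The innovation-based adaptation that the paper improves upon can be sketched for a scalar system: the measurement-noise variance is estimated from a sliding window of innovations. This toy version (not the paper's entropy-based rule) makes both cited drawbacks visible: `win` must be chosen by hand, and the estimate must be floored to stay positive:

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r_true = 0.95, 0.01, 0.5          # scalar dynamics, process/measurement noise
n, win = 400, 30                        # window size: the hand-tuned knob

# Simulate the system
x = np.zeros(n); z = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    z[k] = x[k] + np.sqrt(r_true) * rng.standard_normal()

# Kalman filter that adapts r from windowed innovations
xh, p, r_hat = 0.0, 1.0, 1.0
innovations = []
for k in range(1, n):
    xh_pred, p_pred = a * xh, a * p * a + q
    nu = z[k] - xh_pred                          # innovation
    innovations.append(nu)
    if len(innovations) >= win:
        # E[nu^2] = p_pred + r  =>  r_hat = mean(nu^2) - p_pred, floored to stay positive
        r_hat = max(np.mean(np.square(innovations[-win:])) - p_pred, 1e-6)
    s = p_pred + r_hat
    kk = p_pred / s
    xh = xh_pred + kk * nu
    p = (1 - kk) * p_pred
```

    The `max(..., 1e-6)` floor is the crude stand-in for the positive-definiteness guarantee that the paper's adaptation rules are designed to provide.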

  6. A Novel AMARS Technique for Baseline Wander Removal Applied to Photoplethysmogram.

    PubMed

    Timimi, Ammar A K; Ali, M A Mohd; Chellappan, K

    2017-06-01

    A new digital filter, AMARS (aligning minima of alternating random signal), has been derived using trigonometry to align the pulsations of a signal in line. The pulses occur randomly in continuous signals whose baseline occupies a frequency band below the signal's mean pulse rate. Frequency-selective filters are conventionally employed to reject frequencies undesired by specific applications. However, these conventional filters only reduce the effects of the rejected range, producing a signal superimposed on some baseline wander (BW). In this work, filters of different ranges and techniques were independently configured to preprocess a photoplethysmogram, an optical biosignal of blood volume dynamics, producing wave shapes with several BWs. Applying AMARS effectively removed the encountered BWs, yielding similarly aligned trends. The removal was found to be repeatable in both ear and finger photoplethysmograms, emphasizing the importance of BW removal in biosignal processing for retaining structural, functional and physiological properties. We also believe that AMARS may be relevant to other biological and continuous signals modulated by similar types of baseline volatility.

  7. Tracking with time-delayed data in multisensor systems

    NASA Astrophysics Data System (ADS)

    Hilton, Richard D.; Martin, David A.; Blair, William D.

    1993-08-01

    When techniques for target tracking are expanded to make use of multiple sensors in a multiplatform system, the possibility of time-delayed data becomes a reality. When a discrete-time Kalman filter is applied and some of the data entering the filter are delayed, proper processing of these late data is a necessity for obtaining an optimal estimate of a target's state. If this problem is not given special care, the quality of the state estimates can be degraded relative to that provided by a single sensor. A negative-time update technique is developed using the criterion of minimum mean-square error (MMSE) under the constraint that only the results of the most recent update are saved. The performance of the MMSE technique is compared to that of the ad hoc approach employed in the Cooperative Engagement Capability (CEC) system for processing data from multiple platforms. The MMSE technique proved to be a stable solution to the negative-time update problem, while the CEC technique was found to be less than desirable when used with filters designed for tracking highly maneuvering targets at relatively low data rates. The MMSE negative-time update technique was found to be a superior alternative to the existing CEC negative-time update technique.

  8. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design method for digital finite impulse response (FIR) filters, based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual and ideal responses in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) are applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study against well-proven swarm optimisation methods, particle swarm optimisation and the artificial bee colony algorithm, is made. The merit of the proposed method is evaluated using several important filter attributes, and the comparative study evidences its effectiveness for FIR filter design.
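
    The mean-square-error objective can be illustrated without the swarm optimizer: for a linear-phase FIR filter, the frequency-domain error against an ideal response is quadratic in the taps, so a dense frequency grid admits a closed-form least-squares design (plain least squares here, not the paper's polyphase/FDC formulation; length and cutoff are illustrative):

```python
import numpy as np

M = 31                                   # filter length (odd, linear phase)
wc = 0.3 * np.pi                         # lowpass cutoff
w = np.linspace(0, np.pi, 512)           # dense frequency grid
Hd = (w <= wc).astype(float)             # ideal lowpass magnitude

# Zero-phase form: H(w) = h0 + 2*sum_k hk*cos(k*w); solve for the half taps by least squares
K = (M - 1) // 2
C = np.cos(np.outer(w, np.arange(K + 1)))
C[:, 1:] *= 2.0
half, *_ = np.linalg.lstsq(C, Hd, rcond=None)
h = np.concatenate([half[::-1], half[1:]])   # full symmetric impulse response

# Realized magnitude response on the grid
Hw = C @ half
```

    Swarm optimizers become useful when constraints such as the FDCs make the objective non-quadratic; for the plain MSE objective the least-squares solution above is already optimal.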

  9. Study of slow sand filtration with backwash and the influence of the filter media on the filter recovery and cleaning.

    PubMed

    de Souza, Fernando Hymnô; Pizzolatti, Bruno Segalla; Schöntag, Juliana Marques; Sens, Maurício Luiz

    2016-01-01

    Slow sand filters are considered a great alternative for supplying drinking water in rural and/or isolated areas where raw water treatable with this technique is available. Some studies have used backwashing as an alternative for cleaning the slow sand filter, with the goal of applying the technology in small communities, since filters that supply water to a small number of people do not require much space. In this study, the influence of the effective diameter of the filter media on water quality and on the cleaning system was evaluated. A pilot system with six filters was built: three filters were conventionally cleaned by scraping and the other three were cleaned by backwashing, each with a different effective diameter of filter medium. Most filters had an average turbidity of less than 1.0 NTU, the output turbidity required by the Brazilian Ministry of Health Ordinance. The filters cleaned by scraping with smaller-diameter filter beds produced better filtered-water quality but lower effective production; the opposite occurred for the backwashed filters.

  10. Signal-Noise Identification of Magnetotelluric Signals Using Fractal-Entropy and Clustering Algorithm for Targeted De-Noising

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Xian; Gong, Jinzhe; Tang, Jingtian; Ren, Zhengyong; Li, Guang; Deng, Yanli; Cai, Jin

    A new technique is proposed for signal-noise identification and targeted de-noising of magnetotelluric (MT) signals. The method is based on fractal entropy and a clustering algorithm, which automatically identifies signal sections corrupted by common interference (square, triangle and pulse waves), enabling targeted de-noising and preventing the loss of useful information in filtering. To implement the technique, four characteristic parameters — fractal box dimension (FBD), Higuchi fractal dimension (HFD), fuzzy entropy (FuEn) and approximate entropy (ApEn) — are extracted from the MT time-series. The fuzzy c-means (FCM) clustering technique is used to analyze the characteristic parameters and automatically distinguish signal sections with strong interference from the rest. The wavelet threshold (WT) de-noising method is used only to suppress the identified strong interference in the selected signal sections. The technique is validated on signal samples with known interference before being applied to a set of field-measured MT/audio-magnetotelluric (AMT) data. Compared with the conventional de-noising strategy that blindly applies the filter to the overall dataset, the proposed method can automatically identify and purposefully suppress intermittent interference in the MT/AMT signal. The resulting apparent resistivity-phase curve is more continuous and smooth, and the slow-change trend in the low-frequency range is more precisely preserved. Moreover, the characteristics of the target-filtered MT/AMT signal are close to those of the natural field, and the result more accurately reflects the inherent electrical structure of the measured site.
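
    Of the four characteristic parameters, the Higuchi fractal dimension is the simplest to sketch; below is the standard Higuchi curve-length estimator (not the authors' full feature-extraction/FCM pipeline), with an arbitrarily chosen kmax:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: slope of log mean curve length vs log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled series starting at offset m
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k)
            lk.append(lm / k)
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

ramp = np.arange(1000, dtype=float)                      # smooth signal
noise = np.random.default_rng(0).standard_normal(1000)   # noise-like signal
```

    A smooth ramp gives a dimension near 1, while white noise gives a dimension near 2, which is what makes this feature useful for separating smooth signal sections from noise-like interference.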

  11. Methodological Considerations When Quantifying High-Intensity Efforts in Team Sport Using Global Positioning System Technology.

    PubMed

    Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J

    2017-09-01

    Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology can affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when the minimum duration was <0.5 s, and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when the minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as the minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS, and changes to how high-intensity efforts are defined affect the reported data. Therefore, consistency in data processing is advised.
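
    The sensitivity to filtering and minimum duration can be reproduced on a toy trace; the high-speed-running threshold follows the abstract (4.17 m/s), but the velocity signal, window length and glitch are synthetic illustrations:

```python
import numpy as np

FS = 10.0                                 # 10 Hz GPS sampling
THRESH = 4.17                             # high-speed-running threshold, m/s

def count_efforts(vel, thresh, min_dur_s, fs=FS):
    """Count runs of consecutive samples above threshold lasting >= min_dur_s."""
    above = np.asarray(vel) >= thresh
    edges = np.diff(np.concatenate([[0], above.astype(int), [0]]))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    durations = (ends - starts) / fs
    return int(np.sum(durations >= min_dur_s))

def median_filter(vel, win=5):
    pad = win // 2
    vp = np.pad(vel, pad, mode="edge")
    return np.array([np.median(vp[i:i + win]) for i in range(len(vel))])

# Synthetic velocity: two genuine 1 s efforts plus one single-sample GPS glitch
vel = np.full(200, 2.0)
vel[50:60] = 6.0                          # 1.0 s effort
vel[120:130] = 6.0                        # 1.0 s effort
vel[90] = 8.0                             # 0.1 s glitch

raw_n = count_efforts(vel, THRESH, 0.1)                   # raw: glitch counted too
filt_n = count_efforts(median_filter(vel), THRESH, 0.1)   # median filter removes it
```

    On this trace the raw signal yields three efforts (the glitch is counted), the median-filtered signal yields two, and raising the minimum duration to 0.5 s also removes the glitch without filtering.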

  12. 40 CFR 141.700 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium General... drinking water regulations. The regulations in this subpart establish or extend treatment technique... requirements of this subpart for filtered systems apply to systems required by National Primary Drinking Water...

  13. 3D-FFT for Signature Detection in LWIR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medvick, Patricia A.; Lind, Michael A.; Mackey, Patrick S.

    Improvements in detection and exploitation analysis are possible by applying whitened matched filtering within the Fourier domain to hyperspectral data cubes. We describe an implementation of a Three-Dimensional Fast Fourier Transform Whitened Matched Filter (3DFFTMF) approach and, using several example sets of Long Wave Infra Red (LWIR) data cubes, compare the results with those from standard Whitened Matched Filter (WMF) techniques. Since the variability in shape of gaseous plumes precludes the use of spatial conformation in the matched filtering, the 3DFFTMF results were similar to those of two other WMF methods. Including a spatial low-pass filter within the Fourier space can improve signal-to-noise ratios and therefore improve the detection limit by facilitating the mitigation of high-frequency clutter. The improvement only occurs if the low-pass filter diameter is smaller than the plume diameter.

  14. Preconditioner-free Wiener filtering with a dense noise matrix

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.

    2018-05-01

    This work extends the Elsner & Wandelt (2013) iterative method for efficient, preconditioner-free Wiener filtering to cases in which the noise covariance matrix is dense, but can be decomposed into a sum whose parts are sparse in convenient bases. The new method, which uses multiple messenger fields, reproduces Wiener-filter solutions for test problems, and we apply it to a case beyond the reach of the Elsner & Wandelt (2013) method. We compute the Wiener-filter solution for a simulated Cosmic Microwave Background (CMB) map that contains spatially varying, uncorrelated noise, isotropic 1/f noise, and large-scale horizontal stripes (like those caused by atmospheric noise). We discuss simple extensions that can filter contaminated modes or inverse-noise-filter the data. These techniques help to address complications in the noise properties of maps from current and future generations of ground-based Microwave Background experiments, like Advanced ACTPol, Simons Observatory, and CMB-S4.
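
    A single-messenger iteration in the style of Elsner & Wandelt (2013) can be sketched in 1D, with a diagonal Fourier-space signal covariance and a diagonal (non-uniform) pixel-space noise covariance; the sizes and spectra are arbitrary test values, and a brute-force Wiener solve serves as the check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
F = np.fft.fft(np.eye(n), norm="ortho")      # unitary (and symmetric) DFT matrix

Sk = 1.0 + 9.0 * rng.random(n)               # signal power spectrum, Fourier-diagonal
Sk = 0.5 * (Sk + np.roll(Sk[::-1], 1))       # enforce Sk[k] = Sk[n-k] -> real pixel covariance
Nd = 0.5 + 1.5 * rng.random(n)               # non-uniform pixel-space noise variances
d = rng.standard_normal(n)                   # data vector

# Messenger iteration: split the noise as N = Nbar + tau*I with tau = min(N),
# so the tau part is diagonal in *any* basis
tau = Nd.min()
Nbar = Nd - tau
s = np.zeros(n, dtype=complex)
for _ in range(300):
    t = (tau * d + Nbar * s) / (Nbar + tau)              # messenger field (pixel space)
    sk = np.fft.fft(t, norm="ortho") * Sk / (Sk + tau)   # Wiener step (Fourier space)
    s = np.fft.ifft(sk, norm="ortho")
s_messenger = s.real

# Brute-force check: s_wf = S (S + N)^{-1} d with explicit covariance matrices
S_pix = F.conj().T @ np.diag(Sk) @ F
s_direct = (S_pix @ np.linalg.solve(S_pix + np.diag(Nd), d)).real
```

    Because the messenger covariance tau*I is diagonal in both bases, every update is elementwise in its natural basis and no preconditioner is needed; the multiple-messenger extension of the paper handles noise that is a sum of several such sparse pieces.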

  15. Optimum design of hybrid phase locked loops

    NASA Technical Reports Server (NTRS)

    Lee, P.; Yan, T.

    1981-01-01

    The design procedure of phase locked loops is described in which the analog loop filter is replaced by a digital computer. Specific design curves are given for the step and ramp input changes in phase. It is shown that the designed digital filter depends explicitly on the product of the sampling time and the noise bandwidth of the phase locked loop. This technique of optimization can be applied to the design of digital analog loops for other applications.

  16. Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.

    PubMed

    Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S

    2016-01-01

    Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range of 1.5-7.5 keV at the accelerator port, using the neutron spectrum generated by the 7Li(p,n)7Be reaction. A simulation study was performed to characterize the filter components and transmitted beam lines. The features of the filtered beams are detailed in terms of the optimal thicknesses of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal neutrons, fast neutrons and γ-rays. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Bayes filter modification for drivability map estimation with observations from stereo vision

    NASA Astrophysics Data System (ADS)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians and general tall or highly saturated objects (e.g. road cones). For creating a robust mapping module we use a modification of Bayes filtering that introduces some novel techniques for the occupancy-map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time 15 FPS computation on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
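
    The occupancy update being modified is, in its standard form, a per-cell log-odds Bayes filter; the sketch below is the textbook version with illustrative sensor probabilities (the paper's contribution is robustifying exactly this step against false positives and occlusion):

```python
import numpy as np

L_OCC = np.log(0.7 / 0.3)     # log-odds increment for "sensor says occupied"
L_FREE = np.log(0.3 / 0.7)    # log-odds increment for "sensor says free"

def update(logodds, hits, frees):
    """Per-cell Bayes update: add the measurement log-odds to the prior."""
    out = logodds.copy()
    out[hits] += L_OCC
    out[frees] += L_FREE
    return out

def prob(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))

grid = np.zeros((4, 4))                               # prior p = 0.5 everywhere
hits = np.zeros((4, 4), bool); hits[1, 2] = True      # obstacle observed here
frees = np.zeros((4, 4), bool); frees[0, :] = True    # rays passed through row 0
for _ in range(3):                                    # three consistent observations
    grid = update(grid, hits, frees)
```

    Repeated consistent observations drive cell probabilities toward 0 or 1, while unobserved cells stay at 0.5; a false-positive-robust variant would weight or gate these increments, which is where the paper's modification enters.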

  18. Probabilistic retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  19. Self-Normalized Photoacoustic Technique for the Quantitative Analysis of Paper Pigments

    NASA Astrophysics Data System (ADS)

    Balderas-López, J. A.; Gómez y Gómez, Y. M.; Bautista-Ramírez, M. E.; Pescador-Rojas, J. A.; Martínez-Pérez, L.; Lomelí-Mejía, P. A.

    2018-03-01

    A self-normalized photoacoustic technique was applied for quantitative analysis of pigments embedded in solids. Paper samples (Whatman No. 1 filter paper) dyed with the pigment Direct Fast Turquoise Blue GL were used for this study. This pigment is a blue dye commonly used in industry to dye paper and other fabrics. The optical absorption coefficient at a wavelength of 660 nm was measured for this pigment at various concentrations in the paper substrate. It was shown that the Beer-Lambert model for light absorption applies well to pigments in solid substrates, and that optical absorption coefficients as large as 220 cm^{-1} can be measured with this photoacoustic technique.
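
    The Beer-Lambert relation invoked above is easy to operationalize: absorbance grows linearly with pigment concentration, so the absorptivity follows from a linear fit of the recovered absorption coefficients. The numbers below are synthetic, chosen only so the largest coefficient matches the 220 cm^{-1} scale quoted in the abstract:

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-alpha * L), with alpha = eps * c
eps = 44.0            # hypothetical absorptivity per unit concentration, cm^-1
L = 0.1               # optical path length, cm
c = np.array([0.5, 1.0, 2.0, 3.0, 5.0])   # relative pigment concentrations
I0 = 1.0
I = I0 * np.exp(-eps * c * L)             # transmitted intensities

# Recover alpha for each concentration from transmitted intensity,
# then the absorptivity from the slope of alpha vs concentration
alpha = -np.log(I / I0) / L
slope, intercept = np.polyfit(c, alpha, 1)
```

    A straight line through the origin in the alpha-vs-concentration plot is precisely the "Beer-Lambert applies well" claim of the abstract; here the largest recovered alpha is 220 cm^-1.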

  20. An ultrasensitive bio-surrogate for nanoporous filter membrane performance metrology directed towards contamination control in microlithography applications

    NASA Astrophysics Data System (ADS)

    Ahmad, Farhan; Mish, Barbara; Qiu, Jian; Singh, Amarnauth; Varanasi, Rao; Bedford, Eilidh; Smith, Martin

    2016-03-01

    Contamination tolerances in semiconductor manufacturing processes have changed dramatically in the past two decades, reaching below 20 nm according to the guidelines of the International Technology Roadmap for Semiconductors. The move to narrower line widths drives the need for innovative filtration technologies that can achieve higher particle/contaminant removal performance, resulting in cleaner process fluids. Nanoporous filter membrane metrology tools that have been the workhorse over the past decade are also now reaching limits. For example, nanoparticle (NP) challenge testing is commonly applied for assessing the particle retention performance of filter membranes. Factors such as high NP size dispersity, low NP detection sensitivity, and high NP particle-filter affinity impose challenges in characterizing the next generation of nanoporous filter membranes. We report a novel bio-surrogate, a 5 nm DNA-dendrimer conjugate, for evaluating the particle retention performance of nanoporous filter membranes. A technique capable of single-molecule detection is employed to detect sparse concentrations of the conjugate in filter permeate, providing >1000-fold higher detection sensitivity than any existing 5 nm-sized particle enumeration technique. This bio-surrogate also offers narrow size distribution, high stability and chemical tunability, and can discriminate various sub-15 nm pore-rated nanoporous filter membranes based on their particle retention performance. Due to the high detection sensitivity, a lower challenge concentration of bio-surrogate (as compared to other NPs of this size) can be used for filter testing, providing a better representation of customer applications. This new method should provide a better understanding of next-generation filter membranes for removing defect-causing contaminants from lithography processes.

  1. Stability Study of Sunscreens with Free and Encapsulated UV Filters Contained in Plastic Packaging

    PubMed Central

    Briasco, Benedetta; Capra, Priscilla; Mannucci, Barbara; Perugini, Paola

    2017-01-01

    Sunscreens play a fundamental role in skin cancer prevention and in protection against photo-aging. UV filters are often photo-unstable, especially in relation to their vehicles, and, being lipophilic substances, they are able to interact with plastic packaging. Finally, UV filter stability can be significantly affected by the routine use of the product at high temperatures. This work aims to study the stability of sunscreen formulations in polyethylene packaging. Butyl methoxydibenzoylmethane and octocrylene, both in free form and as encapsulated filters, were chosen as UV filters. Stability evaluations were performed both on the packaging and on the formulations. Moreover, two further non-destructive techniques, near-infrared (NIR) spectroscopy and a multiple light scattering technique, were also used to evaluate the stability of the formulation. Results demonstrated clearly that all of the packaging underwent significant changes in its elastic/plastic behavior and in external color after solar irradiation. Evaluation of the extractable profile of untreated and treated packaging material showed absorption of 2-phenoxyethanol and octocrylene. In conclusion, the results highlighted clearly that a reduction of the UV filter content in a formulation packed in high-density polyethylene/low-density polyethylene (HDPE/LDPE) material can occur over time, reducing the protective effect of the product when applied to the skin. PMID:28561775

  2. Stability Study of Sunscreens with Free and Encapsulated UV Filters Contained in Plastic Packaging.

    PubMed

    Briasco, Benedetta; Capra, Priscilla; Mannucci, Barbara; Perugini, Paola

    2017-05-31

    Sunscreens play a fundamental role in skin cancer prevention and in protection against photo-aging. UV filters are often photo-unstable, especially in relation to their vehicles, and, being lipophilic substances, they are able to interact with plastic packaging. Finally, UV filter stability can be significantly affected by the routine use of the product at high temperatures. This work aims to study the stability of sunscreen formulations in polyethylene packaging. Butyl methoxydibenzoylmethane and octocrylene, both in free form and encapsulated, were chosen as UV filters. Stability evaluations were performed both on the packaging and on the formulations. Moreover, two further non-destructive techniques, near-infrared (NIR) spectroscopy and multiple light scattering, were also used to evaluate the stability of the formulation. Results demonstrated clearly that all of the packaging underwent significant changes in its elastic/plastic behavior and in external color after solar irradiation. Evaluation of the extractable profiles of untreated and treated packaging material showed absorption of 2-phenoxyethanol and octocrylene. In conclusion, the results highlighted clearly that a reduction of the UV filter in the formulation packed in high-density polyethylene/low-density polyethylene (HDPE/LDPE) material can occur over time, reducing the protective effect of the product when applied to the skin.

  3. Ground roll attenuation using polarization analysis in the t-f-k domain

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, Y.

    2017-07-01

    S waves travel slower than P waves and have a lower dominant frequency. Therefore, applying common techniques such as time-frequency filtering and f-k filtering to separate S waves from ground roll is difficult because ground roll is also characterized by slow velocity and low frequency. In this study, we present a method for attenuating ground roll using a polarization filtering method based on the t-f-k transform. We describe the particle motion of the waves by complex vector signals. Each pair of frequency components of the complex signal, whose frequencies have the same absolute value but different signs, indicates an elliptical or linear motion. The polarization parameters of the elliptical or linear motion are explicitly related to the two Fourier coefficients. We then extend these concepts to the t-f-k domain and propose a polarization filtering method for ground roll attenuation based on the t-f-k transform. The proposed approach can automatically define the time-varying reject zones on the f-k panel at different times as a function of the reciprocal ellipticity. Four attributes (time, frequency, apparent velocity, and polarization) are used to identify and extract the ground roll simultaneously. Thus, the ground roll and body waves can be separated as long as they are dissimilar in one of these attributes. We compare our method with commonly used filtering techniques by applying the methods to synthetic and real seismic data. The results indicate that our method can attenuate ground roll while preserving body waves more effectively than the other methods.

  4. Seismic imaging of the Waltham Canyon fault, California: comparison of ray‐theoretical and Fresnel volume prestack depth migration

    USGS Publications Warehouse

    Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan

    2013-01-01

    Near‐vertical faults can be imaged using reflected refractions identified in controlled‐source seismic data. Often these phases are observed on only a few neighboring shot or receiver gathers, resulting in a low‐fold data set. Imaging can be carried out with Kirchhoff prestack depth migration, in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low‐fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target‐oriented filters to separate reflected refractions from steep‐dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line‐drawing migration method, which can be considered a proxy for an infinite-frequency approximation of the Fresnel volume migration. The line‐drawing migration does not consider waveform information but requires significantly shorter computational time. Target‐oriented filters were extended by dip filters in the line‐drawing migration method. The migration methods were tested on synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are best applied in combination, to design filters and to generate complementary images of steep faults.

  5. The design and implementation of radar clutter modelling and adaptive target detection techniques

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed Hussain

    The analysis and reduction of radar clutter is investigated. Clutter is the term applied to unwanted radar reflections from land, sea, precipitation, and/or man-made objects. A great deal of useful information regarding the characteristics of clutter can be obtained by the application of frequency domain analytical methods. Thus, considerable time was spent assessing the various techniques available and their possible application to radar clutter. In order to better understand clutter, use of a clutter model was considered desirable. There are many techniques which will enable a target to be detected in the presence of clutter. One of the most flexible of these is adaptive filtering. This technique was thoroughly investigated and a method for improving its efficacy was devised. The modified adaptive filter employed differential adaptation times to enhance detectability. Adaptation time as a factor relating to target detectability is a new concept and was investigated in some detail. It was considered desirable to implement the theoretical work in dedicated hardware to confirm that the modified clutter model and the adaptive filter technique actually performed as predicted. The equipment produced is capable of operation in real time and provides an insight into real-time DSP applications. It is sufficiently rapid to produce a real-time display on the actual PPI system. Finally, a software package was also produced to simulate the operation of a PPI display and thus ease the interpretation of the filter outputs.
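
    The adaptive-filtering idea described above can be illustrated with a textbook least-mean-squares (LMS) noise canceller (not the thesis's differential-adaptation-time hardware): a reference channel correlated with the clutter is adaptively weighted to predict and subtract the clutter, leaving the target in the residual. All signal models and parameter values below are invented for illustration.

```python
import math
import random

def lms_cancel(primary, reference, n_taps=4, mu=0.05):
    """Minimal LMS adaptive noise canceller: adapt FIR weights so the
    reference-channel prediction tracks the clutter in the primary channel;
    the residual (primary - prediction) is the clutter-suppressed output."""
    w = [0.0] * n_taps
    residual = []
    for i in range(len(primary)):
        x = [reference[i - k] if i - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # clutter estimate
        e = primary[i] - y                          # residual = target + misfit
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]  # LMS weight update
        residual.append(e)
    return residual

random.seed(0)
n = 2000
clutter = [math.sin(0.1 * i) for i in range(n)]            # correlated clutter
target = [1.0 if i == 1500 else 0.0 for i in range(n)]     # lone target blip
primary = [c + t for c, t in zip(clutter, target)]
residual = lms_cancel(primary, clutter)

# after convergence the clutter is largely removed, the blip survives
tail_power = sum(e * e for e in residual[1000:1400]) / 400
```

The residual power drops by orders of magnitude once the weights converge, while the target blip at sample 1500 passes through nearly unattenuated.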

  6. Filtering observations without the initial guess

    NASA Astrophysics Data System (ADS)

    Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.

    2017-12-01

    Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom fully known in practice, and best-guess mean values (e.g., "climatology" or "background" data, if available) accompanied by arbitrarily set covariance values are often used in their place. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter without being forced to provide a prior distribution (i.e., an initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefed in the presentation.
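
    The zero-initial-information idea can be sketched in the scalar case (this is a toy illustration, not the TRF implementation): the information filter carries Y = 1/P and y = x/P, so Y = 0 expresses a completely uninformative prior, something the covariance form cannot represent. All numerical values are invented.

```python
import random

# Information filter for a constant scalar state x, observed as z = x + v,
# v ~ N(0, R) with R the measurement variance.  Starting from Y = 0 means
# "no prior": the first measurement alone determines the first estimate.
random.seed(1)
x_true, R = 3.7, 0.25
Y, y = 0.0, 0.0               # zero information: no initial guess required
for _ in range(500):
    z = x_true + random.gauss(0.0, R ** 0.5)
    Y += 1.0 / R              # information update: Y <- Y + H' R^-1 H  (H = 1)
    y += z / R                #                     y <- y + H' R^-1 z
x_hat = y / Y                 # recover the state estimate once Y > 0
```

For this constant-state case the estimate reduces to the sample mean, but the same two update lines generalize to the vector form used for sequential estimation.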

  7. Absorption/transmission measurements of PSAP particle-laden filters from the Biomass Burning Observation Project (BBOP) field campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Presser, Cary; Nazarian, Ashot; Conny, Joseph M.

    Absorptivity measurements with a laser-heating approach, referred to as the laser-driven thermal reactor (LDTR), were carried out in the infrared and applied at ambient (laboratory) nonreacting conditions to particle-laden filters from a three-wavelength (visible) particle/soot absorption photometer (PSAP). Here, the particles were obtained during the Biomass Burning Observation Project (BBOP) field campaign. The focus of this study was to determine the particle absorption coefficient from field-campaign filter samples using the LDTR approach, and compare results with other commercially available instrumentation (in this case with the PSAP, which has been compared with numerous other optical techniques).

  8. Absorption/transmission measurements of PSAP particle-laden filters from the Biomass Burning Observation Project (BBOP) field campaign

    DOE PAGES

    Presser, Cary; Nazarian, Ashot; Conny, Joseph M.; ...

    2016-12-02

    Absorptivity measurements with a laser-heating approach, referred to as the laser-driven thermal reactor (LDTR), were carried out in the infrared and applied at ambient (laboratory) nonreacting conditions to particle-laden filters from a three-wavelength (visible) particle/soot absorption photometer (PSAP). Here, the particles were obtained during the Biomass Burning Observation Project (BBOP) field campaign. The focus of this study was to determine the particle absorption coefficient from field-campaign filter samples using the LDTR approach, and compare results with other commercially available instrumentation (in this case with the PSAP, which has been compared with numerous other optical techniques).

  9. Toward polarized antiprotons: Machine development for spin-filtering experiments

    NASA Astrophysics Data System (ADS)

    Weidemann, C.; Rathmann, F.; Stein, H. J.; Lorentz, B.; Bagdasarian, Z.; Barion, L.; Barsov, S.; Bechstedt, U.; Bertelli, S.; Chiladze, D.; Ciullo, G.; Contalbrigo, M.; Dymov, S.; Engels, R.; Gaisser, M.; Gebel, R.; Goslawski, P.; Grigoriev, K.; Guidoboni, G.; Kacharava, A.; Kamerdzhiev, V.; Khoukaz, A.; Kulikov, A.; Lehrach, A.; Lenisa, P.; Lomidze, N.; Macharashvili, G.; Maier, R.; Martin, S.; Mchedlishvili, D.; Meyer, H. O.; Merzliakov, S.; Mielke, M.; Mikirtychiants, M.; Mikirtychiants, S.; Nass, A.; Nikolaev, N. N.; Oellers, D.; Papenbrock, M.; Pesce, A.; Prasuhn, D.; Retzlaff, M.; Schleichert, R.; Schröer, D.; Seyfarth, H.; Soltner, H.; Statera, M.; Steffens, E.; Stockhorst, H.; Ströher, H.; Tabidze, M.; Tagliente, G.; Engblom, P. Thörngren; Trusov, S.; Valdau, Yu.; Vasiliev, A.; Wüstner, P.

    2015-02-01

    The paper describes the commissioning of the experimental equipment and the machine studies required for the first spin-filtering experiment with protons at a beam kinetic energy of 49.3 MeV in COSY. The implementation of a low-β insertion made it possible to achieve beam lifetimes of τ_b = 8000 s in the presence of a dense polarized hydrogen storage-cell target of areal density d_t = (5.5 ± 0.2) × 10^13 atoms/cm^2. The developed techniques can be directly applied to antiproton machines and allow the determination of the spin-dependent p̄p cross sections via spin filtering.

  10. Handling of uncertainty due to interference fringe in FT-NIR transmittance spectroscopy - Performance comparison of interference elimination techniques using glucose-water system

    NASA Astrophysics Data System (ADS)

    Beganović, Anel; Beć, Krzysztof B.; Henn, Raphael; Huck, Christian W.

    2018-05-01

    The applicability of two elimination techniques for interferences occurring in measurements with cells of short pathlength using Fourier transform near-infrared (FT-NIR) spectroscopy was evaluated. Due to the growing interest in the field of vibrational spectroscopy in aqueous biological fluids (e.g. glucose in blood), aqueous solutions of D-(+)-glucose were prepared and split into a calibration set and an independent validation set. All samples were measured with two FT-NIR spectrometers at various spectral resolutions. Moving average smoothing (MAS) and fast Fourier transform filter (FFT filter) were applied to the interference affected FT-NIR spectra in order to eliminate the interference pattern. After data pre-treatment, partial least squares regression (PLSR) models using different NIR regions were constructed using untreated (interference affected) spectra and spectra treated with MAS and FFT filter. The prediction of the independent validation set revealed information about the performance of the utilized interference elimination techniques, as well as the different NIR regions. The results showed that the combination band of water at approx. 5200 cm-1 is of great importance since its performance was superior to the one of the so-called first overtone of water at approx. 6800 cm-1. Furthermore, this work demonstrated that MAS and FFT filter are fast and easy-to-use techniques for the elimination of interference fringes in FT-NIR transmittance spectroscopy.
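
    The moving average smoothing (MAS) technique named above can be sketched on a toy spectrum (a broad absorption band plus a sinusoidal fringe, both invented for illustration): averaging over one full fringe period cancels the fringe while barely distorting the slowly varying band.

```python
import math

def moving_average(y, window):
    """Centered moving average; a window spanning one fringe period
    averages the oscillation to (near) zero."""
    h = window // 2
    out = []
    for i in range(len(y)):
        seg = y[max(0, i - h): i + h + 1]
        out.append(sum(seg) / len(seg))
    return out

n, period = 400, 20
baseline = [math.exp(-((i - 200) / 60.0) ** 2) for i in range(n)]  # broad band
fringe = [0.2 * math.sin(2 * math.pi * i / period) for i in range(n)]
spectrum = [b + f for b, f in zip(baseline, fringe)]

smoothed = moving_average(spectrum, period)
# deviation from the fringe-free baseline, before and after smoothing
err_raw = max(abs(s - b) for s, b in zip(spectrum, baseline))
err_smoothed = max(abs(s - b) for s, b in zip(smoothed[30:-30], baseline[30:-30]))
```

The smoothed spectrum sits an order of magnitude closer to the fringe-free baseline; the edges are excluded from the comparison because the window is truncated there.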

  11. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piece-wise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended (periodized) ambiguity, compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from the filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.
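
    PUMA itself is a 2-D energy-minimization unwrapper, but the basic unwrapping step it generalizes can be illustrated in 1-D with Itoh's algorithm, which succeeds whenever neighbouring true-phase increments stay below π in magnitude. The smooth phase ramp below is hypothetical.

```python
import math

def unwrap_1d(wrapped):
    """Itoh's 1-D phase unwrapping: assume neighbouring true-phase
    differences lie in (-pi, pi), and re-wrap each increment accordingly."""
    out = [wrapped[0]]
    for k in range(1, len(wrapped)):
        d = wrapped[k] - wrapped[k - 1]
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap increment to [-pi, pi)
        out.append(out[-1] + d)
    return out

true_phase = [0.005 * k * k for k in range(200)]              # smooth ramp
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
unwrapped = unwrap_1d(wrapped)
max_err = max(abs(u - t) for u, t in zip(unwrapped, true_phase))
```

Because every true increment here stays below π, the recovery is exact up to floating-point error; multifrequency acquisition exists precisely to relax that increment condition.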

  12. High speed and high resolution interrogation of a fiber Bragg grating sensor based on microwave photonic filtering and chirped microwave pulse compression.

    PubMed

    Xu, Ou; Zhang, Jiejun; Yao, Jianping

    2016-11-01

    High speed and high resolution interrogation of a fiber Bragg grating (FBG) sensor based on microwave photonic filtering and chirped microwave pulse compression is proposed and experimentally demonstrated. In the proposed sensor, a broadband linearly chirped microwave waveform (LCMW) is applied to a single-passband microwave photonic filter (MPF) which is implemented based on phase modulation and phase modulation to intensity modulation conversion using a phase modulator (PM) and a phase-shifted FBG (PS-FBG). Since the center frequency of the MPF is a function of the central wavelength of the PS-FBG, when the PS-FBG experiences a strain or temperature change, the wavelength is shifted, which leads to the change in the center frequency of the MPF. At the output of the MPF, a filtered chirped waveform with the center frequency corresponding to the applied strain or temperature is obtained. By compressing the filtered LCMW in a digital signal processor, the resolution is improved. The proposed interrogation technique is experimentally demonstrated. The experimental results show that interrogation sensitivity and resolution as high as 1.25 ns/με and 0.8 με are achieved.
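
    The pulse-compression step can be sketched numerically: correlating a linear chirp with a matched (time-reversed, conjugated) copy of itself compresses the long waveform into a narrow peak whose lag encodes delay. The chirp rate and length below are arbitrary illustrative values, not the experimental LCMW parameters.

```python
import cmath
import math

N = 200
RATE = 0.002                      # chirp rate (illustrative)
chirp = [cmath.exp(1j * math.pi * RATE * n * n) for n in range(N)]

def correlate(sig, ref):
    """Magnitude of the full cross-correlation of sig with ref
    (matched filtering = correlation with the conjugated reference)."""
    out = []
    for lag in range(-len(ref) + 1, len(sig)):
        acc = 0j
        for n in range(len(ref)):
            m = n + lag
            if 0 <= m < len(sig):
                acc += sig[m] * ref[n].conjugate()
        out.append(abs(acc))
    return out

corr = correlate(chirp, chirp)            # matched-filter output
peak = max(corr)
peak_lag = corr.index(peak) - (N - 1)     # lag of the compressed peak
# largest response more than 5 lags away from the peak
far_sidelobe = max(corr[: N - 1 - 5] + corr[N - 1 + 6:])
```

The 200-sample chirp compresses to a peak a few samples wide, which is why compression improves the delay (and hence wavelength-shift) resolution.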

  13. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
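
    The high-pass FIR baseline the paper compares against can be sketched in its simplest form, a first-difference filter along slow time: stationary (near-DC) clutter cancels between frames while a moving-scatterer component survives. The amplitudes and frequency below are invented.

```python
import math

# Slow-time ensemble for one pixel: strong stationary clutter plus a weak
# moving-scatterer component.  The FIR filter [1, -1] along slow time
# removes the DC clutter and passes the motion.
frames = 32
clutter = 10.0                                        # stationary, strong
signal = [0.5 * math.sin(2 * math.pi * 0.3 * t) for t in range(frames)]
slow_time = [clutter + s for s in signal]

filtered = [slow_time[t] - slow_time[t - 1] for t in range(1, frames)]

clutter_power_in = sum(v * v for v in slow_time) / frames   # dominated by clutter
residual_dc = sum(filtered) / len(filtered)                 # mean after filtering
```

The unfiltered ensemble is dominated by clutter power, while the filtered sequence has essentially zero mean yet retains the oscillating component; SVF and MCA aim for the same separation with less distortion of slowly moving tissue.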

  14. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search (OHS) has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, the opposite solutions are also considered, and the fitter of each pair is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  15. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search (OHS) has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, the opposite solutions are also considered, and the fitter of each pair is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
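
    A heavily simplified harmony search (no opposition-based initialization or generation jumping, small memory, least-squares fitness over a coarse frequency grid) illustrates the memory-consideration and pitch-adjustment mechanics on a linear-phase FIR design problem. Every parameter value below is invented for illustration.

```python
import cmath
import math
import random

random.seed(2)

def response(half, w):
    """Frequency-response magnitude of an 11-tap symmetric filter built by
    mirroring 6 distinct taps (symmetry enforces linear phase)."""
    taps = half + half[-2::-1]
    return abs(sum(t * cmath.exp(-1j * w * n) for n, t in enumerate(taps)))

GRID = [math.pi * k / 64 for k in range(65)]

def fitness(half):
    """Least-squares error vs. an ideal low-pass: passband to 0.2*pi,
    stopband from 0.3*pi, transition band ignored."""
    err = 0.0
    for w in GRID:
        if w <= 0.2 * math.pi:
            err += (response(half, w) - 1.0) ** 2
        elif w >= 0.3 * math.pi:
            err += response(half, w) ** 2
    return err

HMS, HMCR, PAR, BW = 10, 0.9, 0.3, 0.05   # memory size, rates, bandwidth
memory = [[random.uniform(-0.5, 0.5) for _ in range(6)] for _ in range(HMS)]
scores = [fitness(h) for h in memory]
initial_best = min(scores)
for _ in range(300):
    new = []
    for d in range(6):
        if random.random() < HMCR:                 # memory consideration
            v = random.choice(memory)[d]
            if random.random() < PAR:              # pitch adjustment
                v += random.uniform(-BW, BW)
        else:
            v = random.uniform(-0.5, 0.5)          # random improvisation
        new.append(v)
    s = fitness(new)
    worst = scores.index(max(scores))
    if s < scores[worst]:                          # replace the worst harmony
        memory[worst], scores[worst] = new, s
final_best = min(scores)
```

Since a new harmony only ever replaces a worse one, the best fitness in memory is monotonically non-increasing; the opposition-based steps in OHS accelerate exactly this descent.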

  16. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters are optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
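
    Pixel duplication itself is a nearest-neighbour enlargement with no new values invented; the sketch below (nested lists standing in for an image) shows each pixel becoming a k × k block, in contrast to interpolation, which would compute intermediate values.

```python
def duplicate_pixels(img, k):
    """Enlarge an image k-fold in each axis by pixel duplication:
    every pixel becomes a k x k block, with no interpolation."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(k)]  # duplicate along the row
        out.extend([wide[:] for _ in range(k)])    # duplicate the row k times
    return out

img = [[1, 2],
       [3, 4]]
big = duplicate_pixels(img, 2)
# big == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Because only existing intensities are repeated, any smoothing filter applied afterwards acts on exact copies of the original data, which is the interaction the error analysis above examines.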

  17. Apodized coupled resonator waveguides.

    PubMed

    Capmany, J; Muñoz, P; Domenech, J D; Muriel, M A

    2007-08-06

    In this paper we propose and analyse the apodisation or windowing of the coupling coefficients in the unit cells of coupled resonator waveguide devices (CROWs) as a means to reduce the level of secondary sidelobes in the bandpass characteristic of their transfer functions. This technique is regularly employed in the design of digital filters and has been applied as well in the design of other photonic devices such as corrugated waveguide filters and fiber Bragg gratings. The apodisation of both Type-I and Type-II structures is discussed for several windowing functions.

  18. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
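
    In the method above the gradient and Hessian come from a time-varying Kalman filter; as a simplified stand-in, finite differences of the performance function can supply the same quantities, after which the Newton-style command toward the extremum is the same. The quadratic performance surface below is hypothetical.

```python
def grad_hess(f, x, y, h=1e-3):
    """Central finite-difference gradient and Hessian of f at (x, y)
    (a stand-in for the Kalman-filter estimates)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return (fx, fy), ((fxx, fxy), (fxy, fyy))

def peak_seek(f, x, y, steps=20):
    """Command the system toward the local extremum with Newton steps
    x <- x - H^-1 g, which drive the estimated gradient to zero."""
    for _ in range(steps):
        (gx, gy), ((a, b), (_, c)) = grad_hess(f, x, y)
        det = a * c - b * b
        x -= (c * gx - b * gy) / det
        y -= (a * gy - b * gx) / det
    return x, y

# hypothetical performance surface with a peak at (1.5, -0.5)
f = lambda x, y: -(x - 1.5) ** 2 - 2 * (y + 0.5) ** 2 + 3
px, py = peak_seek(f, 0.0, 0.0)
```

For a quadratic surface the Newton step lands on the peak in one iteration; the Kalman-filter version additionally smooths the estimates when the measurements are noisy.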

  19. Acoustic Emission Detected by Matched Filter Technique in Laboratory Earthquake Experiment

    NASA Astrophysics Data System (ADS)

    Wang, B.; Hou, J.; Xie, F.; Ren, Y.

    2017-12-01

    Acoustic emission (AE) in laboratory earthquake experiments is a fundamental measurement for studying the mechanics of earthquakes, for instance to characterize the aseismic, nucleation, and post-seismic phases in stick-slip experiments. Compared with field earthquakes, AEs are generally recorded only when they exceed a threshold, so some weak signals may be missed. Here we conducted an experiment on a 1.1 m × 1.1 m granite block with a 1.5 m fault; 13 receivers with the same sample rate of 3 MHz were placed on the surface. We adopt continuous recording and a matched filter technique to detect low-SNR signals. We found that there are many signals around the stick-slip events, and picking the P arrivals manually would be time-consuming. We therefore combined the short-term-average to long-term-average ratio (STA/LTA) technique with the Autoregressive-Akaike information criterion (AR-AIC) technique to pick the arrivals automatically, and found that most of the P-arrival picks are accurate enough for locating signals. Furthermore, we will locate the signals and apply a matched filter technique to detect low-SNR signals, to see whether anything of interest emerges in the laboratory earthquake experiment. Detailed and updated results will be presented at the meeting.
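
    The STA/LTA trigger used for automatic picking can be sketched as follows; the synthetic trace (Gaussian noise, then a twentyfold amplitude jump at a known onset) and all window lengths and thresholds are invented for illustration.

```python
import random

def sta_lta_pick(trace, n_sta=20, n_lta=200, threshold=4.0):
    """Return the first sample where the short-term-average to
    long-term-average ratio of |amplitude| exceeds the threshold."""
    for i in range(n_lta, len(trace)):
        sta = sum(abs(v) for v in trace[i - n_sta:i]) / n_sta
        lta = sum(abs(v) for v in trace[i - n_lta:i]) / n_lta
        if lta > 0 and sta / lta > threshold:
            return i
    return None

random.seed(3)
onset = 600
trace = [random.gauss(0.0, 0.1) for _ in range(onset)]   # pre-event noise
trace += [random.gauss(0.0, 2.0) for _ in range(400)]    # AE arrival
pick = sta_lta_pick(trace)
```

The short window reacts to the amplitude jump while the long window still averages mostly noise, so the ratio spikes within a few samples of the onset; AR-AIC refinement then sharpens the pick.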

  20. Multispectral and geomorphic studies of processed Voyager 2 images of Europa

    NASA Technical Reports Server (NTRS)

    Meier, T. A.

    1984-01-01

    High resolution images of Europa taken by the Voyager 2 spacecraft were used to study a portion of Europa's dark lineations and the major white line feature Agenor Linea. Initial image processing of images 1195J2-001 (violet filter), 1198J2-001 (blue filter), 1201J2-001 (orange filter), and 1204J2-001 (ultraviolet filter) was performed at the U.S.G.S. Branch of Astrogeology in Flagstaff, Arizona. Processing was completed through the stages of image registration and color ratio image construction. Pixel printouts were used in a new technique of linear feature profiling to compensate for image misregistration through the mapping of features on the printouts. In all, 193 dark lineation segments were mapped and profiled. The more accurate multispectral data derived by this method was plotted using a new application of the ternary diagram, with orange, blue, and violet relative spectral reflectances serving as end members. Statistical techniques were then applied to the ternary diagram plots. The image products generated at LPI were used mainly to cross-check and verify the results of the ternary diagram analysis.

  1. [Research on engine remaining useful life prediction based on oil spectrum analysis and particle filtering].

    PubMed

    Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song

    2013-09-01

    The spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis, and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of finding the optimal maintenance strategy for a machine system. Because of the complexity of machine systems, their health-state degradation process cannot be simply characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering for dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting by SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is updated according to the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecasted by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed approach.
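
    A minimal bootstrap particle filter (propagate, weight by the measurement likelihood, resample) shows the mechanism applied here to SOA state forecasting; the linear-drift degradation model and all constants below are invented, and a real RUL application would use the learned degradation dynamics instead.

```python
import math
import random

random.seed(4)
DRIFT, Q, R, N_P = 0.1, 0.05, 0.2, 500  # drift, process std, meas. std, particles

x_true = 0.0
particles = [random.gauss(0.0, 1.0) for _ in range(N_P)]
for _ in range(60):
    # true degradation step and noisy observation
    x_true += DRIFT + random.gauss(0.0, Q)
    z = x_true + random.gauss(0.0, R)
    # 1) propagate each particle through the state model
    particles = [p + DRIFT + random.gauss(0.0, Q) for p in particles]
    # 2) weight by the Gaussian measurement likelihood
    weights = [math.exp(-0.5 * ((z - p) / R) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3) stratified resampling on the cumulative weights
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc)
    resampled, j = [], 0
    for i in range(N_P):
        pos = (i + random.random()) / N_P
        while j < N_P - 1 and cum[j] < pos:
            j += 1
        resampled.append(particles[j])
    particles = resampled

x_hat = sum(particles) / N_P   # posterior-mean state estimate
```

Because neither the degradation model nor the likelihood needs to be linear or Gaussian, the same three steps carry over directly to multi-step-ahead RUL prediction, where the propagation step is simply iterated without measurement updates.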

  2. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  3. Initial experience using the rigid forceps technique to remove wall-embedded IVC filters.

    PubMed

    Avery, Allan; Stephens, Maximilian; Redmond, Kendal; Harper, John

    2015-06-01

    Severely tilted and embedded inferior vena cava (IVC) filters remain the most challenging IVC filters to remove. Heavy endothelialisation over the filter hook can prevent engagement with standard snare and cone recovery techniques. The rigid forceps technique offers a way to dissect the endothelial cap and reliably retrieve severely tilted and embedded filters. By developing this technique, failed IVC retrieval rates can be significantly reduced and the optimum safety profile offered by temporary filters can be achieved. We present our initial experience with the rigid forceps technique described by Stavropoulos et al. for removing wall-embedded IVC filters. We retrospectively reviewed the medical imaging and patient records of all patients who underwent a rigid forceps filter removal over a 22-month period across two tertiary referral institutions. The rigid forceps technique had a success rate of 85% (11/13) for IVC filter removals. All filters in the series showed evidence of filter tilt and embedding of the filter hook into the IVC wall. Average filter tilt from the Z-axis was 19 degrees (range 8-56). Filters observed in the case study were either Bard G2X (n = 6) or Cook Celect (n = 7). Average filter dwell time was 421 days (range 47-1053). There were no major complications observed. The rigid forceps technique can be readily emulated and is a safe and effective technique to remove severely tilted and embedded IVC filters. The development of this technique across both institutions has increased the successful filter removal rate, with perceived benefits to the safety profile of our IVC filter programme. © 2015 The Royal Australian and New Zealand College of Radiologists.

  4. The application of the Wigner Distribution to wave type identification in finite length beams

    NASA Technical Reports Server (NTRS)

    Wahl, T. J.; Bolton, J. Stuart

    1994-01-01

The object of the research described in this paper was to develop a means of identifying the wave-types propagating between two points in a finite length beam. It is known that different structural wave-types possess different dispersion relations: i.e., that their group speeds and the frequency dependence of their group speeds differ. As a result of those distinct dispersion relationships, different wave-types may be associated with characteristic features when structural responses are examined in the time-frequency domain. Previously, the time-frequency character of analytically generated structural responses of both single element and multi-element structures was examined by using the Wigner Distribution (WD) along with filtering techniques that were designed to detect the wave-types present in the responses. In the work described here, the measured time-frequency response of a finite length beam is examined using the WD and filtering procedures. This paper is organized as follows. First, the concept of time-frequency analysis of structural responses is explained. The WD is then introduced along with a description of the implementation of a discrete version. The time-frequency filtering techniques are then presented and explained. The results of applying the WD and the filtering techniques to the analysis of a transient response are then presented.
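    A discrete Wigner Distribution of the kind referenced above can be sketched as follows (a simplified, unwindowed pseudo-WD applied to a synthetic chirp; practical implementations add analytic-signal conversion and windowing to suppress cross-terms):

```python
import numpy as np

# Discrete pseudo-Wigner distribution: for each time index n, FFT the
# symmetric lag product x[n+m] * conj(x[n-m]) over the lag m.
def wigner(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)      # largest lag staying in-signal
        kernel = np.zeros(N, dtype=complex)
        for m in range(-tau_max, tau_max + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n, :] = np.real(np.fft.fft(kernel))
    return W

# A chirp's WD energy concentrates along its instantaneous frequency.
t = np.arange(128)
sig = np.cos(2 * np.pi * (0.05 + 0.001 * t) * t)
W = wigner(sig)
```

A useful sanity check is the marginal property: summing W over frequency at a fixed time recovers N times the instantaneous signal power.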

  5. Weld quality inspection using laser-EMAT ultrasonic system and C-scan method

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Ume, I. Charles

    2014-02-01

The laser/EMAT ultrasonic technique has attracted increasing interest in weld quality inspection because of its non-destructive and non-contact characteristics. When ultrasonic techniques are used to inspect welds joining relatively thin plates, the dominant ultrasonic waves present in the plates are Lamb waves, which propagate through the entire thickness, and the traditional Time of Flight (ToF) method loses its power. The broadband nature of laser excited ultrasound, plus the dispersive and multi-modal characteristics of Lamb waves, makes the EMAT acquired signals very complicated in this situation. The challenge lies in interpreting the received signals and establishing a relationship between signal features and weld quality. In this paper, the laser/EMAT ultrasonic technique was applied in a C-scan manner to record the full wave propagation field over an area close to the weld. The effect of a weld defect on the propagation field of Lamb waves was then studied visually by viewing a movie generated from the recorded signals. This method proved effective in detecting the presence of a hidden defect in the weld. The discrete wavelet transform (DWT) was applied to characterize the acquired ultrasonic signals, and an ideal band-pass filter was used to isolate the wave components most sensitive to the weld defect. Different interactions with the weld defect were observed for different wave components. Thus this C-scan method, combined with the DWT and an ideal band-pass filter, proved to be an effective methodology for experimentally studying interactions of various laser excited Lamb wave components with weld defects. In this work, the method was demonstrated by inspecting a hidden local incomplete penetration in a weld. In fact, this method can be applied to study Lamb wave interactions with any type of structural inconsistency. This work also proposed an ideal-filter-based method to effectively reduce the total experimental time.
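    An ideal (brick-wall) band-pass filter of the kind used here to isolate sensitive wave components can be sketched with a simple FFT mask (synthetic two-tone signal; the band edges are illustrative):

```python
import numpy as np

# Ideal band-pass filter via FFT masking: zero every frequency bin
# outside [f_lo, f_hi], then invert the transform.
def ideal_bandpass(x, fs, f_lo, f_hi):
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Two tones: keep the 50 Hz component, reject the 200 Hz one
sig = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
filtered = ideal_bandpass(sig, fs, 30, 80)
```

Unlike an FIR/IIR design, the brick-wall mask has no transition band, which is why it is convenient for cleanly separating wave components in post-processing.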

  6. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way forward. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  7. SU-F-I-73: Surface Dose from KV Diagnostic Beams From An On-Board Imager On a Linac Machine Using Different Imaging Techniques and Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Hossain, S; Syzek, E

Purpose: To quantitatively investigate the surface dose deposited in patients imaged with a kV on-board imager mounted on a radiotherapy machine using different clinical imaging techniques and filters. Methods: A high-sensitivity photon diode, mounted on top of a phantom setup, is used to measure the surface dose on the central axis and at an off-axis point. The dose is measured for different imaging techniques that include: AP-Pelvis, AP-Head, AP-Abdomen, AP-Thorax, and Extremity. The dose measurements from these imaging techniques are combined with various filtering techniques that include: no filter (open field), half-fan bowtie (HF), full-fan bowtie (FF), and Cu-plate filters. The relative surface dose for the different imaging and filtering techniques is evaluated quantitatively as the ratio of the dose to that delivered with the Cu-plate filter. Results: The lowest surface dose is deposited with the Cu-plate filter. The highest surface dose results from open fields without a filter and is nearly a factor of 8–30 larger than the corresponding imaging technique with the Cu-plate filter. The AP-Abdomen technique delivers the largest surface dose, nearly 2.7 times larger than the AP-Head technique. The smallest surface dose is obtained from the Extremity imaging technique. Imaging with bowtie filters decreases the surface dose by nearly 33% in comparison with the open field. The surface doses deposited with the HF and FF bowtie filters are within a few percent of each other. Image quality of the radiographic images obtained from the different filtering techniques is similar because the Cu-plate eliminates low-energy photons. The HF and FF bowtie filters generate intensity gradients in the radiographs, which affects image quality in the different imaging techniques. Conclusion: Surface dose from kV imaging decreases significantly with the Cu-plate and bowtie filters compared to imaging without filters using open-field beams. The use of the Cu-plate filter does not affect image quality and may be used as the default in the different imaging techniques.

  8. Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.

    PubMed

    Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon

    2017-02-24

The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers.
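    The underlying Hatch recursion can be sketched as follows (a plain single-frequency version on synthetic data; the noise levels and 100-sample window are illustrative assumptions, and the SBAS-based divergence-free correction is omitted):

```python
import numpy as np

# Classic Hatch filter: smooth noisy code pseudoranges with precise
# carrier-phase deltas, over a sliding window of at most `window` epochs.
def hatch_filter(code, phase, window=100):
    smoothed = np.empty_like(code)
    smoothed[0] = code[0]
    for k in range(1, len(code)):
        n = min(k + 1, window)
        smoothed[k] = (code[k] / n
                       + (n - 1) / n * (smoothed[k - 1] + phase[k] - phase[k - 1]))
    return smoothed

rng = np.random.default_rng(0)
true_range = np.linspace(2.0e7, 2.0e7 + 500.0, 1000)  # metres
code = true_range + rng.normal(0, 3.0, 1000)    # noisy code pseudorange
phase = true_range + rng.normal(0, 0.01, 1000)  # precise carrier phase
smoothed = hatch_filter(code, phase)
```

On real data the phase term also carries the ionospheric divergence of opposite sign to the code, which is exactly what the divergence-free variants in the paper correct for.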

  9. Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement

    PubMed Central

    Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon

    2017-01-01

The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers. PMID:28245584

  10. Ceramic fiber reinforced filter

    DOEpatents

    Stinton, David P.; McLaughlin, Jerry C.; Lowden, Richard A.

    1991-01-01

A filter for removing particulate matter from high temperature flowing fluids, and in particular gases, that is reinforced with ceramic fibers. The filter has a ceramic base fiber material in the form of a fabric, felt, paper or the like, with the refractory fibers thereof coated with a thin layer of a protective and bonding refractory applied by chemical vapor deposition techniques. This coating causes each fiber to be physically joined to adjoining fibers so as to prevent movement of the fibers during use and to increase the strength and toughness of the composite filter. Further, the coating can be selected to minimize any reactions between the constituents of the fluids and the fibers. A description is given of the formation of a composite filter using a felt preform of commercial silicon carbide fibers together with the coating of these fibers with pure silicon carbide. Filter efficiency approaching 100% has been demonstrated with these filters. The fiber base material is alternately made from aluminosilicate fibers, zirconia fibers and alumina fibers. Coating with Al.sub.2 O.sub.3 is also described. Advanced configurations for the composite filter are suggested.

  11. A Novel Approach to the Design of Passive Filters in Electric Grids

    NASA Astrophysics Data System (ADS)

    Filho da Costa Castro, José; Lima, Lucas Ramalho; Belchior, Fernando Nunes; Ribeiro, Paulo Fernando

    2016-12-01

The design of shunt passive filters has been a topic of constant research since the 1970s. Due to their lower cost, passive shunt filters are still considered a preferred option. This paper presents a novel approach for the placement and sizing of passive filters through ranking solutions based on the minimization of the total harmonic distortion (THDV) of the supply system rather than of one specific bus, without neglecting the individual harmonic distortions. The developed method was implemented using Matlab/Simulink and applied to a test system. The results show that it is possible to minimize the total voltage harmonic distortion using a system approach during filter selection. Additionally, since the method is mainly based on a heuristic approach, it avoids the complexity associated with the use of advanced mathematical tools such as artificial intelligence techniques. The analyses consider both a sinusoidal utility voltage and the condition with background utility distortion.

  13. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences.

    PubMed

    Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-09-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter, to reduce the high frequency acquisition noise. The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%).
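    The two differentiation schemes compared in the paper can be sketched side by side (a toy sine wave rather than coronary data; the Savitzky-Golay window and order below are arbitrary choices, which is exactly the parameter sensitivity at issue):

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 1e-3
t = np.arange(0, 1, dt)
p = np.sin(2 * np.pi * 2 * t)                 # smooth "pressure" waveform
dp_true = 2 * np.pi * 2 * np.cos(2 * np.pi * 2 * t)

# Savitzky-Golay used as a differentiator (window/order are tunable)
dp_sg = savgol_filter(p, window_length=21, polyorder=3, deriv=1, delta=dt)

# Parameter-free second-order central finite difference
dp_cfd = np.gradient(p, dt)

# Compare away from the boundaries, where both schemes degrade
err_sg = np.max(np.abs(dp_sg[100:-100] - dp_true[100:-100]))
err_cfd = np.max(np.abs(dp_cfd[100:-100] - dp_true[100:-100]))
```

On a clean waveform both agree with the analytic derivative; the paper's point is that on ensemble-averaged clinical waveforms the central difference needs no window/order tuning.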

  14. Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.

    PubMed

    Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson

    2018-02-01

Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled as two-dimensional Gaussian distributions with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging from 8 to 20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using the parameters SNR and spot efficiency. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked best for images with SNR ranging from 10 to 20 dB, whereas WTTV performed better with high noise levels. Wavelet also presented the best performance with any level of Gaussian noise and low levels (20-14 dB) of Rayleigh and exponential noise in terms of SNR. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image. Copyright © 2018 Beijing Institute of Genomics, Chinese Academy of Sciences and Genetics Society of China. Production and hosting by Elsevier B.V. All rights reserved.
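    The controlled degradation step in such a benchmark can be sketched as follows (Gaussian noise only, added to a synthetic Gaussian "spot" at a prescribed SNR in dB; the spot parameters are illustrative):

```python
import numpy as np

# Add white Gaussian noise scaled so the result has a target SNR in dB.
def add_noise_at_snr(image, snr_db, rng):
    p_signal = np.mean(image ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return image + rng.normal(0, np.sqrt(p_noise), image.shape)

def measured_snr_db(clean, noisy):
    p_signal = np.mean(clean ** 2)
    p_noise = np.mean((noisy - clean) ** 2)
    return 10 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(1)
# Synthetic "protein spot": a 2-D Gaussian, as in the review's model
y, x = np.mgrid[0:64, 0:64]
spot = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2))
noisy = add_noise_at_snr(spot, 14.0, rng)
```

Rayleigh and exponential noise follow the same pattern with a different `rng` distribution and a mean-adjustment step.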

  15. A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.

    2016-12-01

More and more Earth data analytics software products are published on the Internet as services, in the format of either heavyweight WSDL services or lightweight RESTful APIs. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mashup) services into value-added ones. Therefore, it is important to have a technique that is capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, mainly rely on syntax- or semantics-based service matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. The Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks featuring leading terms. Afterwards, Bloom filters are generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
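    A minimal Bloom filter of the kind used to summarize the virtual routers can be sketched as follows (the bit-array size, hash count, and service names are illustrative; production filters size these from the expected item count and target false-positive rate):

```python
import hashlib

# Bloom filter: constant-space set membership with no false negatives
# and a tunable false-positive rate.
class BloomFilter:
    def __init__(self, size=1024, n_hashes=4):
        self.size = size
        self.n_hashes = n_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k independent positions by salting one hash function
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means possibly present
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("precipitation-analytics-service")
```

Routing a discovery query then becomes a cheap `might_contain` check at each virtual router instead of a full matchmaking pass.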

  16. Electroencephalographic compression based on modulated filter banks and wavelet transform.

    PubMed

    Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2011-01-01

Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper, we evaluate and compare two lossy compression techniques applied to EEG signals, comparing the performance of compression schemes based on decomposition by filter banks or wavelet packet transforms and seeking the best compression ratio, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
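    The band-adaptive quantization stage can be sketched as a uniform quantizer whose step follows each band's dynamic range (synthetic data; 8 bits per band is an illustrative choice):

```python
import numpy as np

# Uniform quantizer whose step size adapts to the band's own min/max,
# so low-amplitude bands are not crushed by a global quantization grid.
def quantize_band(band, n_bits):
    lo, hi = band.min(), band.max()
    levels = 2 ** n_bits - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((band - lo) / step).astype(int)
    return codes, lo, step

def dequantize_band(codes, lo, step):
    return codes * step + lo

rng = np.random.default_rng(2)
band = rng.normal(0, 50e-6, 512)   # e.g. one subband of EEG, in volts
codes, lo, step = quantize_band(band, 8)
recon = dequantize_band(codes, lo, step)
```

Only `codes`, `lo`, and `step` need to be stored per band, and the reconstruction error is bounded by half a quantization step.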

  17. Collaborative Filtering Based on Sequential Extraction of User-Item Clusters

    NASA Astrophysics Data System (ADS)

    Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo

Collaborative filtering is a computational realization of “word-of-mouth” in a network community, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted in an alternative process of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.

  18. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images is captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise reduction filter is proved to be significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
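    The autocorrelation-based noise estimate described above can be sketched as follows (a synthetic striped image stands in for an SEM frame; linearly extrapolating the first lags back to zero offset is a simplification of the method):

```python
import numpy as np

# Single-image SNR estimate: image detail is correlated across pixels,
# noise is not, so the noise power is the jump of the autocorrelation
# at lag 0 above its extrapolated noise-free value.
def estimate_snr(image):
    x = image - image.mean()
    r0 = np.mean(x * x)                     # autocorrelation at lag 0
    r1 = np.mean(x[:, :-1] * x[:, 1:])      # lag 1 along rows
    r2 = np.mean(x[:, :-2] * x[:, 2:])      # lag 2 along rows
    r0_clean = 2 * r1 - r2                  # linear extrapolation to lag 0
    noise_var = max(r0 - r0_clean, 1e-12)
    return 10 * np.log10(r0_clean / noise_var)

# Smooth striped "specimen" plus white noise of known power
j = np.arange(256)
clean = np.tile(np.sin(2 * np.pi * j / 64), (256, 1))
rng = np.random.default_rng(4)
noisy = clean + rng.normal(0, 0.25, clean.shape)
snr_est = estimate_snr(noisy)   # true SNR is 10*log10(0.5/0.0625) ≈ 9 dB
```

Everything needed for the estimate comes from the single noisy frame, which is the property the abstract emphasizes.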

  19. Enhancement of Seebeck coefficient in graphene superlattices by electron filtering technique

    NASA Astrophysics Data System (ADS)

    Mishra, Shakti Kumar; Kumar, Amar; Kaushik, Chetan Prakash; Dikshit, Biswaranjan

    2018-01-01

We show theoretically that the Seebeck coefficient and the thermoelectric figure of merit can be increased by using an electron filtering technique in graphene superlattice based thermoelectric devices. The average Seebeck coefficient for graphene-based thermoelectric devices is proportional to the integral of the distribution of the Seebeck coefficient versus electron energy. The low energy electrons in the distribution curve are found to reduce the average Seebeck coefficient, as their contribution is negative. We show that, with an electron energy filtering technique using multiple graphene superlattice heterostructures, the low energy electrons can be filtered out and the Seebeck coefficient can be increased. The multiple graphene superlattice heterostructures can be formed by graphene superlattices with different periodic electric potentials applied above the superlattice. The overall electronic band gap of the multiple heterostructures depends on the individual band gaps of the graphene superlattices and can be tuned by varying the periodic electric potentials. The overall electronic band gap of the multiple heterostructures has to be chosen such that the low energy electrons, which cause a negative Seebeck distribution in single graphene superlattice thermoelectric devices, fall within the overall band gap formed by the multiple heterostructures. Although this technique decreases the electrical conductance, which by itself would reduce the thermoelectric figure of merit, the overall figure of merit increases due to the large increase in the Seebeck coefficient and the figure of merit's quadratic dependence on it. This is an easy technique for making graphene superlattice based thermoelectric devices more efficient and has the potential to significantly improve the technology of energy harvesting and sensors.
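    The effect of filtering out the negative-contribution low energy electrons can be illustrated numerically (the shape of the Seebeck distribution s(E) below is entirely made up; only its sign structure, negative at low energy and positive above, matters for the argument):

```python
import numpy as np

def integrate(y, x):
    # trapezoidal rule, kept explicit for clarity
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Toy Seebeck distribution: negative contribution below E = 0.3,
# positive contribution above it (arbitrary units).
E = np.linspace(0.0, 1.0, 1000)
s = np.where(E < 0.3, -50.0 * (0.3 - E), 100.0 * (E - 0.3))

S_unfiltered = integrate(s, E)          # all electrons contribute
keep = E >= 0.3                         # band-gap filter removes low-E electrons
S_filtered = integrate(s[keep], E[keep])
```

Removing the negative-sign region can only raise the integral, which is the mechanism behind the Seebeck enhancement; the conductance penalty discussed in the abstract is not modeled here.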

  20. Improving surface EMG burst detection in infrahyoid muscles during swallowing using digital filters and discrete wavelet analysis.

    PubMed

    Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres

    2017-08-01

Visual inspection is a widely used method for evaluating the surface electromyographic (sEMG) signal during deglutition, a process highly dependent on the examiner's expertise. It is desirable to have a less subjective and automated technique to improve onset detection in swallowing-related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG measured in infrahyoid muscles with high baseline noise from ten healthy adults during water swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieves the most accurate onset detection: a discrete wavelet decomposition based method and fixed-step variations of the low and high cutoff frequencies of a digital bandpass filter. The Teager-Kaiser Energy operator, root mean square, and a simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth relative to the parameters recommended in the literature for sEMG acquisition. Both level-3 decomposition with mother wavelet db4 and a bandpass filter with cutoff frequencies between 130 and 180 Hz were optimal for onset detection in infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, which is similar to previous findings but in larger and more superficial limb muscles. Copyright © 2017 Elsevier Ltd. All rights reserved.
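    The Teager-Kaiser-plus-threshold onset detector can be sketched as follows (a synthetic burst in Gaussian noise; the smoothing window, threshold multiplier, and baseline length are illustrative tuning choices, and the band-pass prefiltering stage is omitted):

```python
import numpy as np

# Teager-Kaiser Energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
def teager_kaiser(x):
    tke = np.zeros_like(x)
    tke[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return tke

def detect_onset(x, fs, k=8.0, baseline_s=0.5, smooth=20):
    tke = teager_kaiser(x)
    tke = np.convolve(tke, np.ones(smooth) / smooth, mode="same")
    n0 = int(baseline_s * fs)
    thresh = tke[:n0].mean() + k * tke[:n0].std()  # baseline-derived threshold
    above = np.where(tke > thresh)[0]
    return above[0] / fs if len(above) else None

fs = 1000
rng = np.random.default_rng(3)
sig = rng.normal(0, 0.05, 2 * fs)                 # 2 s of baseline noise
burst = np.sin(2 * np.pi * 150 * np.arange(int(0.3 * fs)) / fs)
sig[fs:fs + len(burst)] += burst                  # burst starts at t = 1.0 s
onset = detect_onset(sig, fs)
```

The TKE operator amplifies bursts because it responds to both amplitude and frequency, which helps in low-SNR muscles like the infrahyoids.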

  1. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    PubMed

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

The exploration of retinal vessel structure is critically important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR), and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is difficult to extract due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula, optic disc, etc. To remove local noise, the difference of images is computed from the top-hat filtered image and the high-boost filtered image. A Frangi filter is applied at multiple scales to enhance vessels of diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE, and HRF datasets.
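    Two stages of such a pipeline, top-hat background suppression and Otsu thresholding, can be sketched on a synthetic image (a bright stripe stands in for a vessel; the structuring-element size is illustrative, and scipy's `white_tophat` stands in for the paper's specific morphology):

```python
import numpy as np
from scipy.ndimage import white_tophat

# Otsu's method: pick the threshold maximizing between-class variance.
def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(image, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0 = sum0 = 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Synthetic image: a thin bright line ("vessel") on a sloped background
y, x = np.mgrid[0:64, 0:64]
image = 0.3 + 0.002 * x + 0.5 * (np.abs(y - 32) < 2)
enhanced = white_tophat(image, size=9)   # keeps structures thinner than 9 px
mask = enhanced > otsu_threshold(enhanced)
```

The top-hat removes the slowly varying background so that a single global Otsu threshold can separate the thin bright structure.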

  2. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    PubMed

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

Digital signal processing techniques commonly employ fixed length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological context aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) that computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause a 3-base periodicity through an unbalanced nucleotide distribution producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e., 40% to 125%, compared to other conventional window filters tested on more than 250 benchmark and randomly selected DNA datasets of different organisms. This study proves that conventional fixed length window filters applied to DNA signals do not achieve significant results, since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal content. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions contrary to fixed window length conventional filters. Copyright © 2017 Elsevier B.V. All rights reserved.
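    The 3-base periodicity that FAWMF exploits shows up as a peak at bin N/3 in the summed power spectra of binary indicator sequences (the repeated codon below is a toy example; real coding regions show a broader, noisier peak from biased codon usage):

```python
import numpy as np

# Sum the FFT power of one binary indicator sequence per base:
# coding DNA concentrates power at the N/3 "period-3" bin.
def periodicity_spectrum(seq):
    N = len(seq)
    power = np.zeros(N)
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        power += np.abs(np.fft.fft(indicator)) ** 2
    return power

coding_like = "ATG" * 100            # toy period-3 sequence, length N = 300
power = periodicity_spectrum(coding_like)
peak_bin = len(coding_like) // 3     # bin 100 marks the 3-base periodicity
```

Sliding this spectrum over a genome and thresholding the N/3 peak is the classical baseline against which window filters like FAWMF are compared.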

  3. Time Domain Filtering of Resolved Images of Sgr A{sup ∗}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiokawa, Hotaka; Doeleman, Sheperd S.; Gammie, Charles F.

The goal of the Event Horizon Telescope (EHT) is to provide spatially resolved images of Sgr A*, the source associated with the Galactic Center black hole. Because Sgr A* varies on timescales that are short compared to an EHT observing campaign, it is interesting to ask whether variability contains information about the structure and dynamics of the accretion flow. In this paper, we introduce “time-domain filtering,” a technique to filter time fluctuating images with specific temporal frequency ranges and to demonstrate the power and usage of the technique by applying it to mock millimeter wavelength images of Sgr A*. The mock image data is generated from the General Relativistic Magnetohydrodynamic (GRMHD) simulation and the general relativistic ray-tracing method. We show that the variability on each line of sight is tightly correlated with a typical radius of emission. This is because disk emissivity fluctuates on a timescale of the order of the local orbital period. Time-domain filtered images therefore reflect the model dependent emission radius distribution, which is not accessible in time-averaged images. We show that, in principle, filtered data have the power to distinguish between models with different black-hole spins, different disk viewing angles, and different disk orientations in the sky.

  4. Time Domain Filtering of Resolved Images of Sgr A*

    NASA Astrophysics Data System (ADS)

    Shiokawa, Hotaka; Gammie, Charles F.; Doeleman, Sheperd S.

    2017-09-01

    The goal of the Event Horizon Telescope (EHT) is to provide spatially resolved images of Sgr A*, the source associated with the Galactic Center black hole. Because Sgr A* varies on timescales that are short compared to an EHT observing campaign, it is interesting to ask whether variability contains information about the structure and dynamics of the accretion flow. In this paper, we introduce “time-domain filtering,” a technique to filter time-fluctuating images within specific temporal frequency ranges, and demonstrate the power and usage of the technique by applying it to mock millimeter-wavelength images of Sgr A*. The mock image data are generated from a general relativistic magnetohydrodynamic (GRMHD) simulation and a general relativistic ray-tracing method. We show that the variability on each line of sight is tightly correlated with a typical radius of emission, because disk emissivity fluctuates on a timescale of the order of the local orbital period. Time-domain filtered images therefore reflect the model-dependent emission radius distribution, which is not accessible in time-averaged images. We show that, in principle, filtered data have the power to distinguish between models with different black-hole spins, different disk viewing angles, and different disk orientations in the sky.

  5. Simple and rapid detection of the porcine reproductive and respiratory syndrome virus from pig whole blood using filter paper.

    PubMed

    Inoue, Ryo; Tsukahara, Takamitsu; Sunaba, Chinatsu; Itoh, Mitsugi; Ushida, Kazunari

    2007-04-01

    The combination of Flinders Technology Associates filter papers (FTA cards) and real-time PCR was examined to establish a simple and rapid technique for the detection of porcine reproductive and respiratory syndrome virus (PRRSV) from whole pig blood. A modified live PRRS vaccine was diluted with either sterilised saline or pig whole blood, and the suspensions were applied onto the FTA cards. The real-time RT-PCR detection of PRRSV was performed directly on the samples applied to the FTA card, without an RNA extraction step. Six whole blood samples from randomly selected piglets on a PRRSV-infected farm were also assayed in this study. The expected PCR product was successfully amplified from both the saline-diluted and the whole blood-diluted vaccine. The same PCR amplicon was detected in all blood samples assayed in this study. This study suggested that the combination of an FTA card and real-time PCR is a rapid and easy technique for the detection of PRRSV. This technique can remarkably shorten the time required for PRRSV detection from whole blood and makes the procedure much easier.

  6. Measurement of indoor formaldehyde concentrations with a passive sampler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillett, R.W.; Kreibich, H.; Ayers, G.P.

    2000-05-15

    An existing Ferm-type passive sampler technique has been further developed to measure concentrations of formaldehyde gas in indoor air. Formaldehyde forms a derivative after reaction with a filter coated with 2,4-dinitrophenylhydrazine (2,4-DNPH). The formaldehyde 2,4-dinitrophenylhydrazine derivative (formaldehyde-2,4-DNPH) is extracted from the filter, and the concentration is determined by high-performance liquid chromatography. The technique has been validated against an active sampling method, and the agreement is close when the appropriate laminar boundary layer depth is applied to the passive measurement. For this technique, an exposure period of 3 days is equivalent to a limit of detection of formaldehyde of 3.4 ppbv and a limit of quantification of 7.6 ppbv. To test the performance of the passive samplers, formaldehyde measurements were carried out inside homes and in a range of workplace environments.

  7. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    PubMed

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
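
    The Rayleigh quotient described here measures the ratio of filtered power between two classes; minimizing it over spatial filters reduces to a generalized eigenvalue problem. A small sketch with toy covariance matrices (the matrices and dimensions are assumptions, not the paper's data):

```python
import numpy as np
from scipy.linalg import eigh

def min_rayleigh_filter(C1, C2):
    """Spatial filter w minimizing the Rayleigh quotient
    (w' C1 w) / (w' C2 w) between two class covariance matrices,
    via the generalized eigenvalue problem C1 w = lambda C2 w."""
    vals, vecs = eigh(C1, C2)        # generalized eigenvalues, ascending
    return vecs[:, 0], vals[0]       # eigenvector with the smallest quotient

def rayleigh_quotient(w, C1, C2):
    return (w @ C1 @ w) / (w @ C2 @ w)

# toy two-class covariances over 3 channels: class 1 is weakest on channel 1
C2 = np.eye(3)
C1 = np.diag([0.2, 1.0, 3.0])
w, q = min_rayleigh_filter(C1, C2)   # q should equal 0.2 for this toy case
```

    Per the abstract, a lower Rayleigh quotient corresponds to a lower Bayes classification error on the resulting power features.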

  8. Development of Real Time Implementation of 5/5 Rule based Fuzzy Logic Controller Shunt Active Power Filter for Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Puhan, Pratap Sekhar; Ray, Pravat Kumar; Panda, Gayadhar

    2016-12-01

    This paper presents the effectiveness of implementing a 5/5 fuzzy rule base in a fuzzy logic controller, in conjunction with an indirect control technique, to enhance power quality in a single-phase system. An indirect current controller in conjunction with the fuzzy logic controller is applied to the proposed shunt active power filter to estimate the peak reference current and capacitor voltage. Current-controller-based pulse width modulation (CCPWM) is used to generate the switching signals of the voltage source inverter. Various simulation results are presented to verify the good behaviour of the shunt active power filter (SAPF) with the proposed two-level hysteresis current controller (HCC). For real-time verification of the shunt active power filter, the proposed control algorithm has been implemented on a laboratory setup on the dSPACE platform.
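
    A two-level hysteresis current controller of the kind mentioned here keeps the filter current inside a band around its reference; a minimal sketch of the switching rule (band width and current values are illustrative assumptions):

```python
def hysteresis_switch(i_ref, i_actual, band, state):
    """Two-level hysteresis current control: switch the inverter leg high
    when the current error exceeds +band, low below -band, else hold the
    previous switching state."""
    err = i_ref - i_actual
    if err > band:
        return 1      # upper switch on: push current up
    if err < -band:
        return 0      # lower switch on: pull current down
    return state      # inside the band: keep previous state

state = 0
state = hysteresis_switch(1.0, 0.2, 0.1, state)   # large positive error -> 1
state = hysteresis_switch(1.0, 0.95, 0.1, state)  # inside the band -> hold
state = hysteresis_switch(1.0, 1.2, 0.1, state)   # overshoot -> 0
```

    Narrowing the band tightens current tracking at the cost of a higher switching frequency.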

  9. Development of near infrared spectrometer for gem materials study

    NASA Astrophysics Data System (ADS)

    Jindata, W.; Meesiri, W.; Wongkokua, W.

    2015-07-01

    Most gem materials can be characterized by infrared absorption spectroscopy. Normally, the mid-infrared absorption technique is applied to investigate fundamental vibrational modes. However, for some gem materials, such as tourmaline, NIR is a better choice for differentiation. Most commercial NIR spectrometers employ complicated dispersive-grating or Fourier-transform techniques. In this work, we developed a filter-type NIR spectrometer, exploiting the availability of high-efficiency, low-cost narrow-bandpass NIR interference filters, to be taught in a physics laboratory. The instrument was designed in a transmission-mode configuration. A 50 W halogen lamp was used as the NIR source. Fourteen NIR filters were mounted on a rotary wheel for wavelength selection, ranging from 1000 to 1650 nm in steps of 50 nm. A 1.0 mm diameter InGaAs photodiode was used as the detector for the spectrometer. Hence, transparent gem materials can be used as samples for the experiment. Students can learn vibrational absorption spectroscopy as well as the Beer-Lambert law from the development of this instrument.
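
    The Beer-Lambert law the abstract mentions relates measured transmission to concentration; a small worked example (the intensity, molar absorptivity, and path length values are illustrative):

```python
import math

def absorbance(I_transmitted, I_incident):
    """Absorbance from measured intensities: A = -log10(I / I0)."""
    return -math.log10(I_transmitted / I_incident)

def beer_lambert_concentration(A, epsilon, path_cm):
    """Beer-Lambert law A = epsilon * l * c, solved for concentration c
    (mol/L when epsilon is in L/(mol*cm) and the path is in cm)."""
    return A / (epsilon * path_cm)

A = absorbance(25.0, 100.0)     # 25% transmission -> A about 0.602
c = beer_lambert_concentration(A, epsilon=150.0, path_cm=1.0)
```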

  10. Automating Traceability for Generated Software Artifacts

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Green, Jeffrey

    2004-01-01

    Program synthesis automatically derives programs from specifications of their behavior. One advantage of program synthesis, as opposed to manual coding, is that there is a direct link between the specification and the derived program. This link is, however, not very fine-grained: it can be best characterized as Program is-derived-from Specification. When the generated program needs to be understood or modified, more fine-grained linking is useful. In this paper, we present a novel technique for automatically deriving traceability relations between parts of a specification and parts of the synthesized program. The technique is very lightweight and works, with varying degrees of success, for any process in which one artifact is automatically derived from another. We illustrate the generality of the technique by applying it to two kinds of automatic generation: synthesis of Kalman Filter programs from specifications using the AutoFilter program synthesis system, and generation of assembly language programs from C source code using the GCC C compiler. We evaluate the effectiveness of the technique in the latter application.

  11. Usability-driven pruning of large ontologies: the case of SNOMED CT.

    PubMed

    López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-06-01

    To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
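
    The frequency-based filtering step described here is conceptually simple: drop extracted concepts whose corpus frequency falls below a threshold. A minimal sketch with hypothetical concept names and counts (not real SNOMED CT or MEDLINE data):

```python
def filter_subset_by_frequency(subset, term_counts, min_count):
    """Keep only concepts whose corpus frequency (e.g., MEDLINE term
    counts) reaches min_count; unseen concepts count as zero."""
    return {c for c in subset if term_counts.get(c, 0) >= min_count}

# hypothetical extracted subset and corpus counts
subset = {"myocardial infarction", "chest pain", "rare finding X"}
counts = {"myocardial infarction": 180000, "chest pain": 95000,
          "rare finding X": 2}
pruned = filter_subset_by_frequency(subset, counts, min_count=100)
# pruned keeps the two frequent concepts and drops the rare one
```

    The threshold trades subset size against coverage, which is the balance the study measures.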

  12. Applying spectral unmixing and support vector machine to airborne hyperspectral imagery for detecting giant reed

    USDA-ARS?s Scientific Manuscript database

    This study evaluated linear spectral unmixing (LSU), mixture tuned matched filtering (MTMF) and support vector machine (SVM) techniques for detecting and mapping giant reed (Arundo donax L.), an invasive weed that presents a severe threat to agroecosystems and riparian areas throughout the southern ...

  13. A manual carotid compression technique to overcome difficult filter protection device retrieval during carotid artery stenting.

    PubMed

    Nii, Kouhei; Nakai, Kanji; Tsutsumi, Masanori; Aikawa, Hiroshi; Iko, Minoru; Sakamoto, Kimiya; Mitsutake, Takafumi; Eto, Ayumu; Hanada, Hayatsura; Kazekawa, Kiyoshi

    2015-01-01

    We investigated the incidence of embolic protection device retrieval difficulties at carotid artery stenting (CAS) with a closed-cell stent and demonstrated the usefulness of a manual carotid compression assist technique. Between July 2010 and October 2013, we performed 156 CAS procedures using self-expandable closed-cell stents. All procedures were performed with the aid of a filter-type embolic protection device. We used FilterWire EZ in 118 procedures and SpiderFX in 38 procedures. The embolic protection device was usually retrieved by the accessory retrieval sheath after CAS. We applied a manual carotid compression technique when it was difficult to navigate the retrieval sheath through the deployed stent. We compared clinical outcomes in patients where simple retrieval was possible with those in patients where the manual carotid compression assisted technique was used for retrieval. Among the 156 CAS procedures, we encountered 12 (7.7%) where embolic protection device retrieval was hampered at the proximal stent terminus. Our manual carotid compression technique overcame this difficulty without eliciting neurologic events, artery dissection, or stent deformity. In patients undergoing closed-cell stent placement, embolic protection device retrieval difficulties may be encountered at the proximal stent terminus. Manual carotid compression assisted retrieval is an easy, readily available solution to overcome these difficulties. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  14. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences

    PubMed Central

    Rivolo, Simone; Asrress, Kaleab N.; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø.; Grøndal, Anne K.; Hønge, Jesper L.; Kim, Won Y.; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P.; Lee, Jack

    2014-01-01

    Background Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky–Golay filter, to reduce the high frequency acquisition noise. Methods The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. Results and Conclusion The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%). PMID:25187852
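
    The parameter-free alternative proposed here, a central finite difference scheme for the time derivative, can be sketched as follows (the fourth-order five-point stencil is a standard choice; the test waveform is an assumption):

```python
import numpy as np

def central_derivative(y, dt):
    """Time derivative by a fourth-order central finite difference:
    f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h).
    Edges fall back to numpy's second-order np.gradient."""
    y = np.asarray(y, dtype=float)
    d = np.gradient(y, dt)                    # second-order edges
    d[2:-2] = (y[:-4] - 8 * y[1:-3] + 8 * y[3:-1] - y[4:]) / (12 * dt)
    return d

# sanity check on a smooth waveform: d/dt sin(t) = cos(t)
t = np.linspace(0, 2 * np.pi, 200)
dt = t[1] - t[0]
d = central_derivative(np.sin(t), dt)
# interior points should closely match cos(t)
```

    Unlike a Savitzky-Golay differentiator, this scheme has no window-length or polynomial-order parameters to tune, which is the robustness argument the abstract makes.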

  15. Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique

    NASA Astrophysics Data System (ADS)

    Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen

    2015-01-01

    Over the past decade, location based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, which include radio-frequency identification (RFID), ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning, realizing relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photodiodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from the photodiodes and a trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to give the algorithm tracking capability, reaching a higher accuracy than the raw estimates. A Gaussian mixture sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian mixture model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
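
    The trilateration step described here can be sketched as a linear least-squares problem, assuming ranges have already been inferred from RSS (the anchor layout and position are illustrative):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from anchor coordinates and ranges.
    Linearizes ||x - a_i||^2 = d_i^2 against the first anchor, giving
    2 (a_i - a_0) . x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2."""
    a0, d0 = anchors[0], distances[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# four ceiling "LED" anchors at slightly different heights (so the
# linearized system has full rank in 3-D) and a receiver position
anchors = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0],
                    [0.0, 4.0, 2.5], [4.0, 4.0, 2.8]])
true_pos = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(anchors - true_pos, axis=1)
est = trilaterate(anchors, dists)
```

    In the paper these raw estimates are then smoothed by the GM-SPPF tracker; with noisy RSS-derived ranges the least-squares solution is only approximate.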

  16. Inferior vena cava filter retrievals, standard and novel techniques.

    PubMed

    Kuyumcu, Gokhan; Walker, T Gregory

    2016-12-01

    The placement of an inferior vena cava (IVC) filter is a well-established management strategy for patients with venous thromboembolism (VTE) disease in whom anticoagulant therapy is either contraindicated or has failed. IVC filters may also be placed for VTE prophylaxis in certain circumstances. There has been a tremendous growth in placement of retrievable IVC filters in the past decade yet the majority of the devices are not removed. Unretrieved IVC filters have several well-known complications that increase in frequency as the filter dwell time increases. These complications include caval wall penetration, filter fracture or migration, caval thrombosis and an increased risk for lower extremity deep vein thrombosis (DVT). Difficulty is sometimes encountered when attempting to retrieve indwelling filters, mainly because of either abnormal filter positioning or endothelization of filter components that are in contact with the IVC wall, thereby causing the filter to become embedded. The length of time that a filter remains indwelling also impacts the retrieval rate, as increased dwell times are associated with more difficult retrievals. Several techniques for difficult retrievals have been described in the medical literature. These techniques range from modifications of standard retrieval techniques to much more complex interventions. Complications related to complex retrievals are more common than those associated with standard retrieval techniques. The risks of complex filter retrievals should be compared with those of life-long anticoagulation associated with an unretrieved filter, and should be individualized. This article summarizes current techniques for IVC filter retrieval from a clinical point of view, with an emphasis on advanced retrieval techniques.

  17. Inferior vena cava filter retrievals, standard and novel techniques

    PubMed Central

    Walker, T. Gregory

    2016-01-01

    The placement of an inferior vena cava (IVC) filter is a well-established management strategy for patients with venous thromboembolism (VTE) disease in whom anticoagulant therapy is either contraindicated or has failed. IVC filters may also be placed for VTE prophylaxis in certain circumstances. There has been a tremendous growth in placement of retrievable IVC filters in the past decade yet the majority of the devices are not removed. Unretrieved IVC filters have several well-known complications that increase in frequency as the filter dwell time increases. These complications include caval wall penetration, filter fracture or migration, caval thrombosis and an increased risk for lower extremity deep vein thrombosis (DVT). Difficulty is sometimes encountered when attempting to retrieve indwelling filters, mainly because of either abnormal filter positioning or endothelization of filter components that are in contact with the IVC wall, thereby causing the filter to become embedded. The length of time that a filter remains indwelling also impacts the retrieval rate, as increased dwell times are associated with more difficult retrievals. Several techniques for difficult retrievals have been described in the medical literature. These techniques range from modifications of standard retrieval techniques to much more complex interventions. Complications related to complex retrievals are more common than those associated with standard retrieval techniques. The risks of complex filter retrievals should be compared with those of life-long anticoagulation associated with an unretrieved filter, and should be individualized. This article summarizes current techniques for IVC filter retrieval from a clinical point of view, with an emphasis on advanced retrieval techniques. PMID:28123984

  18. Effects of the use of multi-layer filter on radiation exposure and the quality of upper airway radiographs compared to the traditional copper filter.

    PubMed

    Klandima, Somphan; Kruatrachue, Anchalee; Wongtapradit, Lawan; Nithipanya, Narong; Ratanaprakarn, Warangkana

    2014-06-01

    The image-quality problem in a large number of patients with upper airway obstruction is the superimposition of the airway over the bone of the spine on the AP view. This problem was resolved by moving to a high-kVp technique and adding extra radiographic filters (copper filters) to reduce the sharpness of the bone and increase the clarity of the airway. However, this raises the concern that patients might be receiving an unnecessarily high dose of radiation, as well as the question of the effectiveness of the invented filter compared to the traditional one. The objectives were to evaluate the radiation dose that patients receive with the multi-layer filter compared to no filter, and to evaluate the image quality of the upper airways with the new radiographic filter (multi-layer filter) versus the traditional filter (copper filter). The attenuation curves of both filter materials were first identified. Both filters were then tested with an Alderson Rando phantom to determine the appropriate exposure. Using the method described, a new type of filter, called the multi-layer filter, was developed for imaging patients. A randomized controlled trial was then performed to compare the effectiveness of the newly developed multi-layer filter with the copper filter. The research was conducted in patients with upper airway obstruction treated at Queen Sirikit National Institute of Child Health from October 2006 to September 2007. A total of 132 patients were divided into two groups: the experimental group used the high-kVp technique with the multi-layer filter, while the control group used the copper filter. A comparison of film interpretation between the multi-layer filter and the copper filter was made by a number of radiologists who were blinded to both the technique and the type of filter used. Patients received less radiation with the high-kVp technique using either the copper filter or the multi-layer filter than with the conventional technique, where no filter is used.
    Patients received approximately 65.5% less radiation dose using the high-kVp technique with the multi-layer filter compared to the conventional technique, and 25.9% less than with the traditional copper filter. 45% of the radiologists who participated in this study reported that the high-kVp technique with the multi-layer filter was better for diagnosing stenosis, or narrowing, of the upper airways; 33% reported that both techniques were equal, while 22% reported that the traditional copper filter gave better detail of the airway obstruction. These findings showed that the multi-layer filter was comparable to the copper filter in terms of film interpretation. Using the multi-layer filter resulted in patients receiving a lower radiation dose, with similar film interpretation, compared to the traditional copper filter.

  19. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data, and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) band has not yet been widely exploited for classification purposes. LWIR data contain emissivity and temperature information about an object. The highest overall accuracy, 90.99%, was obtained using the SAM algorithm on the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, pixels are grouped into objects using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
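
    The SAM classifier named here scores each pixel by the angle between its spectrum and a reference spectrum; a minimal sketch with made-up four-band spectra:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum
    and a reference spectrum; a smaller angle means a better match."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.array([0.2, 0.5, 0.9, 0.4])     # hypothetical class spectrum
same_shape = 3.0 * ref                   # same shape, different brightness
other = np.array([0.9, 0.4, 0.2, 0.1])   # different spectral shape
a_same = spectral_angle(same_shape, ref)   # near zero: SAM ignores scaling
a_other = spectral_angle(other, ref)       # clearly larger angle
```

    SAM's insensitivity to overall illumination scaling is one reason it performs well on emissivity-dominated LWIR data.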

  20. Robotic Vision, Tray-Picking System Design Using Multiple, Optical Matched Filters

    NASA Astrophysics Data System (ADS)

    Leib, Kenneth G.; Mendelsohn, Jay C.; Grieve, Philip G.

    1986-10-01

    The optical correlator is applied to a robotic vision, tray-picking problem. Complex matched filters (MFs) are designed to provide sufficient optical memory for accepting any orientation of the desired part, and a multiple holographic lens (MHL) is used to increase the memory for continuous coverage. It is shown that, with appropriate thresholding, a small part can be selected using optical matched filters. A number of criteria are presented for optimizing the vision system. Two of the part-filled trays that Mendelsohn used are considered in this paper, which is the analog (optical) extension of his work. Our view in this paper is of the optical correlator as a cueing device for subsequent, finer vision techniques.
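
    A digital analogue of the optical matched-filter correlator is FFT-based cross-correlation of a scene with a part template, with the correlation peak marking the part's location; a sketch with a synthetic scene (the scene, template, and placement are illustrative):

```python
import numpy as np

def matched_filter_2d(scene, template):
    """Cross-correlate a scene with a template via the FFT; the
    correlation peak marks the best match position."""
    sh = (scene.shape[0] + template.shape[0] - 1,
          scene.shape[1] + template.shape[1] - 1)
    S = np.fft.rfft2(scene, sh)
    T = np.fft.rfft2(template, sh)
    return np.fft.irfft2(S * np.conj(T), sh)

scene = np.zeros((32, 32))
part = np.ones((4, 4))
scene[10:14, 20:24] = part            # "part" placed in the tray at (10, 20)
corr = matched_filter_2d(scene, part)
peak = np.unravel_index(np.argmax(corr), corr.shape)
# the peak coordinates recover the part's position
```

    Thresholding the correlation plane, as the paper describes, separates the desired part from clutter before handing off to finer vision stages.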

  1. Monte Carlo study for physiological interference reduction in near-infrared spectroscopy based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Sun, JinWei; Rolfe, Peter

    2010-12-01

    Near-infrared spectroscopy (NIRS) can be used as the basis of non-invasive neuroimaging that may allow the measurement of haemodynamic changes in the human brain evoked by applied stimuli. Since this technique is very sensitive, physiological interference arising from the cardiac cycle and breathing can significantly affect the signal quality. Such interference is difficult to remove by conventional techniques because it occurs not only in the extracerebral layer but also in the brain tissue itself. Previous work on this problem employing temporal filtering, spatial filtering, and adaptive filtering has exhibited good performance for recovering brain activity data in evoked response studies. In this study, however, we present a time-frequency adaptive method for physiological interference reduction based on the combination of empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA). Monte Carlo simulations based on a five-layered slab model of a human adult head were implemented to evaluate our methodology. We applied an EMD algorithm to decompose the NIRS time series derived from the Monte Carlo simulations into a series of intrinsic mode functions (IMFs). In order to identify the IMFs associated with the interference, the extracted components were then Hilbert transformed, from which the instantaneous frequencies could be acquired. By reconstructing the NIRS signal from properly selected IMFs, the physiological interference is effectively filtered out, recovering the evoked brain response with an even higher signal-to-noise ratio (SNR). The results obtained demonstrated that EMD, combined with HSA, can effectively separate, identify and remove the contamination from the evoked brain response obtained with NIRS using a simple single source-detector pair.
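
    The Hilbert-transform step used here to assign instantaneous frequencies to IMFs can be sketched as follows (the sampling rate and the synthetic "cardiac" component are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) via the Hilbert transform, as used
    in Hilbert spectral analysis of intrinsic mode functions: the
    derivative of the unwrapped analytic-signal phase."""
    analytic = hilbert(x)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 50.0                                     # assumed NIRS sampling rate
t = np.arange(0, 20, 1 / fs)
cardiac_like = np.sin(2 * np.pi * 1.2 * t)    # ~1.2 Hz "cardiac" IMF
f_inst = instantaneous_frequency(cardiac_like, fs)
# the median instantaneous frequency places this IMF in the cardiac band,
# so it would be excluded when reconstructing the NIRS signal
```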

  2. A Novel Technique for Inferior Vena Cava Filter Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Edward William, E-mail: ed.johnston@doctors.org.uk; Rowe, Luke Michael Morgan; Brookes, Jocelyn

    Inferior vena cava (IVC) filters are used to protect against pulmonary embolism in high-risk patients. Whilst the insertion of retrievable IVC filters is gaining popularity, a proportion of such devices cannot be removed using standard techniques. We describe a novel approach for IVC filter removal that involves snaring the filter superiorly along with the use of flexible forceps or laser devices to dissect the filter struts from the caval wall. Using this technique, three patients in whom standard techniques had failed were treated successfully and without complications.

  3. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low-altitudes for the detection of windshear in an airport terminal area where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed point implementation. A 10th order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter return, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter induced bias in pulse-pair estimates of windspeed.
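
    The core idea, fitting an autoregressive model to clutter-only data and using the resulting prediction-error FIR filter to suppress clutter, can be sketched with a plain least-squares AR fit (the paper uses a normalized recursive least squares lattice instead; the clutter signal below is synthetic):

```python
import numpy as np

def ar_prediction_error_filter(clutter, order=10):
    """Fit an AR(order) model to clutter-only data by least squares and
    return the prediction-error (whitening) FIR filter
    [1, -a1, ..., -ap], which suppresses the modeled clutter."""
    X = np.column_stack([clutter[order - k - 1:len(clutter) - k - 1]
                         for k in range(order)])
    y = clutter[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.concatenate(([1.0], -a))

rng = np.random.default_rng(0)
n = np.arange(2000)
# narrowband "clutter": a slow oscillation plus a little noise
clutter = np.cos(2 * np.pi * 0.01 * n) + 0.01 * rng.standard_normal(2000)
h = ar_prediction_error_filter(clutter, order=10)
residual = np.convolve(clutter, h, mode="valid")
# residual power is far below the clutter power
```

    In the radar application the filter coefficients are indexed by geographical location and the filtered returns are then passed to pulse-pair estimation.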

  4. Multiscale Embedded Gene Co-expression Network Analysis

    PubMed Central

    Song, Won-Min; Zhang, Bin

    2015-01-01

    Gene co-expression network analysis has been shown effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques to construct co-expression networks require some critical prior information such as a predefined number of clusters or numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution or small-worldness. Previously, a graph filtering technique called Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets such as financial stock prices and gene expression to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as the high computation complexity O(|V|³), the presence of false-positives due to the maximal planarity constraint, and the inadequacy of the clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and the gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma. PMID:26618778
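
    The PMFG construction referenced here greedily inserts edges in decreasing similarity order, keeping an edge only if the graph remains planar; a small sketch using networkx's planarity test on a random symmetric "co-expression" matrix (the data are synthetic, and this naive version exhibits the high complexity the abstract criticizes):

```python
import itertools
import numpy as np
import networkx as nx

def planar_maximally_filtered_graph(similarity):
    """PMFG: insert edges in decreasing similarity order, keeping an
    edge only if the graph stays planar. The result is a maximal planar
    graph with 3(|V| - 2) edges."""
    n = similarity.shape[0]
    edges = sorted(((similarity[i, j], i, j)
                    for i, j in itertools.combinations(range(n), 2)),
                   reverse=True)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for w, i, j in edges:
        G.add_edge(i, j, weight=w)
        if not nx.check_planarity(G)[0]:
            G.remove_edge(i, j)          # edge would break planarity
        if G.number_of_edges() == 3 * (n - 2):
            break                        # graph is maximally planar
    return G

rng = np.random.default_rng(1)
sim = rng.random((8, 8))
sim = (sim + sim.T) / 2                  # symmetric similarity matrix
G = planar_maximally_filtered_graph(sim)
```

    MEGENA replaces this sequential construction with a parallelized embedded variant plus quality control of the similarities.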

  5. Multiscale Embedded Gene Co-expression Network Analysis.

    PubMed

    Song, Won-Min; Zhang, Bin

    2015-11-01

    Gene co-expression network analysis has been shown to be effective in identifying functional co-expressed gene modules associated with complex human diseases. However, existing techniques for constructing co-expression networks require critical prior information such as a predefined number of clusters or numerical thresholds for defining co-expression/interaction, or do not naturally reproduce the hallmarks of complex systems such as the scale-free degree distribution or small-worldness. Previously, a graph filtering technique called the Planar Maximally Filtered Graph (PMFG) has been applied to many real-world data sets, such as financial stock prices and gene expression, to extract meaningful and relevant interactions. However, PMFG is not suitable for large-scale genomic data due to several drawbacks, such as its high computational complexity O(|V|³), the presence of false positives due to the maximal planarity constraint, and the inadequacy of its clustering framework. Here, we developed a new co-expression network analysis framework called Multiscale Embedded Gene Co-expression Network Analysis (MEGENA) by: i) introducing quality control of co-expression similarities, ii) parallelizing embedded network construction, and iii) developing a novel clustering technique to identify multi-scale clustering structures in Planar Filtered Networks (PFNs). We applied MEGENA to a series of simulated data and to gene expression data in breast carcinoma and lung adenocarcinoma from The Cancer Genome Atlas (TCGA). MEGENA showed improved performance over well-established clustering methods and co-expression network construction approaches. MEGENA revealed not only meaningful multi-scale organizations of co-expressed gene clusters but also novel targets in breast carcinoma and lung adenocarcinoma.

  6. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated, and spurious detections are claimed. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
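    The matched-filter operation itself is a correlation of the data with the known signature. The sketch below, with an assumed Gaussian line profile and unit-variance noise, shows why the position of the maximum is the natural detection statistic when the source location is unknown; it is the distribution of such maxima (peaks of a random field), not of a single pixel, that sets the false-detection probability.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
template = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)  # assumed line profile
template /= np.linalg.norm(template)                         # unit-norm filter

signal = np.zeros(n)
signal[200:221] += 5.0 * template          # weak line at an "unknown" position
data = signal + rng.standard_normal(n)     # additive Gaussian noise, sigma = 1

mf = np.correlate(data, template, mode="same")   # matched-filter output
peak = int(np.argmax(mf))                        # detection statistic: the maximum
```

    For a unit-norm template, each output sample is N(0, 1) under the null hypothesis, but the *maximum* over all positions is not; thresholding it as if it were a single Gaussian sample is exactly the error the abstract describes.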

  7. UltiMatch-NL: A Web Service Matchmaker Based on Multiple Semantic Filters

    PubMed Central

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

    In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely Signature-based and Description-based, on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. Thus it is a further step towards fully automated Web service discovery via making this process more semantic-aware. In addition, a new technique is proposed to automatically weight and combine the results of the different filters of UltiMatch-NL. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. In order to evaluate UltiMatch-NL, the repository of OWLS-TC is used. The performance evaluation based on standard measures from the information retrieval field shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters. PMID:25157872

  8. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
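    The baseline Wiener filter in this comparison is a least-squares linear map from lagged neural activity to kinematics. A minimal sketch on synthetic data follows; the dimensions, lags, and tuning model are illustrative, far smaller than the paper's 100-200 neuron ensembles.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_neurons, n_lags = 400, 20, 5

# Synthetic hand velocity and spike rates linearly tuned to it.
vel = np.cumsum(rng.standard_normal(T)) * 0.1
tuning = rng.standard_normal(n_neurons)
rate = np.outer(vel, tuning) + 0.1 * rng.standard_normal((T, n_neurons))

# Lagged design matrix of a FIR (Wiener) decoder, plus a bias column.
X = np.hstack([np.roll(rate, lag, axis=0) for lag in range(n_lags)])
X[:n_lags] = 0.0                                 # remove wrap-around rows
X = np.hstack([X, np.ones((T, 1))])

w, *_ = np.linalg.lstsq(X, vel, rcond=None)      # least-squares Wiener solution
pred = X @ w
r = float(np.corrcoef(pred[n_lags:], vel[n_lags:])[0, 1])
```

    With thousands of such weights in the real setting, generalization rather than fitting is the hard part, which is why the paper focuses on model selection and regularized variants.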

  9. Autonomous Correction of Sensor Data Applied to Building Technologies Using Filtering Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Charles C; New, Joshua Ryan; Smith, Matt K

    2013-01-01

    Sensor data validity is extremely important in a number of applications, particularly building technologies, where collected data are used to determine performance. An example of this is Oak Ridge National Laboratory's ZEBRAlliance research project, which consists of four single-family homes located in Oak Ridge, TN. The homes are outfitted with a total of 1,218 sensors to determine the performance of a variety of different technologies integrated within each home. Issues arise with such a large number of sensors, such as missing or corrupt data. This paper aims to eliminate these problems using: (1) Kalman filtering and (2) linear prediction filtering techniques. Five types of data are the focus of this paper: (1) temperature; (2) humidity; (3) energy consumption; (4) pressure; and (5) airflow. Simulations show the Kalman filtering method performed best in predicting temperature, humidity, pressure, and airflow data, while the linear prediction filtering method performed best with energy consumption data.
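    A sketch of the Kalman-filter idea applied to a sensor stream with a dropout: a scalar random-walk model predicts through missing samples and blends in measurements when they are present. The process and measurement variances here are illustrative, not values from the ORNL project.

```python
import numpy as np

def kalman_fill(y, q=0.01, r=0.25):
    """Scalar random-walk Kalman filter; NaNs are treated as missing
    observations and bridged by the model prediction."""
    x, p = y[np.isfinite(y)][0], 1.0        # start at the first valid sample
    out = np.empty_like(y)
    for t, obs in enumerate(y):
        p = p + q                           # predict step
        if np.isfinite(obs):
            k = p / (p + r)                 # Kalman gain
            x = x + k * (obs - x)           # update step
            p = (1.0 - k) * p
        out[t] = x
    return out

rng = np.random.default_rng(4)
true = 20.0 + np.sin(np.linspace(0.0, 6.0, 200))   # slowly drifting temperature
y = true + 0.5 * rng.standard_normal(200)
y[80:90] = np.nan                                  # a ten-sample dropout
est = kalman_fill(y)
err = float(np.max(np.abs(est[100:] - true[100:])))
```

    During the dropout the gain is never applied, so the filter coasts on its last state; once data return, the estimate re-converges within a few samples.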

  10. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  11. UltiMatch-NL: a Web service matchmaker based on multiple semantic filters.

    PubMed

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

    In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely Signature-based and Description-based, on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. Thus it is a further step towards fully automated Web service discovery via making this process more semantic-aware. In addition, a new technique is proposed to automatically weight and combine the results of the different filters of UltiMatch-NL. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. In order to evaluate UltiMatch-NL, the repository of OWLS-TC is used. The performance evaluation based on standard measures from the information retrieval field shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters.

  12. Development of an acoustic filter for parametric loudspeaker using phononic crystals.

    PubMed

    Ji, Peifeng; Hu, Wenlin; Yang, Jun

    2016-04-01

    The spurious signal generated as a result of nonlinearity at the receiving system affects the measurement of the difference-frequency sound in the parametric loudspeaker, especially in the nearfield or near the beam axis. In this paper, an acoustic filter is designed using phononic crystals, and its theoretical simulations are carried out with quasi-one- and two-dimensional models in Comsol Multiphysics. According to the simulated transmission loss (TL), an acoustic filter consisting of 5×7 aluminum alloy cylinders is prototyped and its performance is verified experimentally, showing good agreement with the simulated TL. After applying our proposed filter in the axial measurement of the parametric loudspeaker, a clear frequency dependence from the parametric array effect is detected, which exhibits a good match with the well-known theory described by the Gaussian-beam expansion technique. During the directivity measurement for the parametric loudspeaker, the proposed filter also proved to be effective and is only needed at small angles. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Design of Complex BPF with Automatic Digital Tuning Circuit for Low-IF Receivers

    NASA Astrophysics Data System (ADS)

    Kondo, Hideaki; Sawada, Masaru; Murakami, Norio; Masui, Shoichi

    This paper describes the architecture and implementation of an automatic digital tuning circuit for a complex bandpass filter (BPF) in a low-power and low-cost transceiver for applications such as personal authentication and wireless sensor network systems. The architectural design analysis demonstrates that an active RC filter in a low-IF architecture can be at least 47.7% smaller in area than a conventional gm-C filter; in addition, it features a simple implementation of the associated tuning circuit. The principle of simultaneous tuning of both the center frequency and bandwidth through calibration of a capacitor array is illustrated based on an analysis of the filter characteristics, and a scalable automatic digital tuning circuit with simple analog blocks and control logic having only 835 gates is introduced. The developed capacitor tuning technique can achieve a tuning error of less than ±3.5% and reduce peaking in the passband filter characteristics. An experimental complex BPF using 0.18µm CMOS technology can successfully reduce the tuning error from an initial value of -20% to less than ±2.5% after tuning. The filter block dimensions are 1.22mm × 1.01mm; in measurements of the developed complex BPF with the automatic digital tuning circuit, current consumption is 705µA and the image rejection ratio is 40.3dB. Complete evaluation of the BPF indicates that this technique can be applied to low-power, low-cost transceivers.

  14. Prediction of load threshold of fibre-reinforced laminated composite panels subjected to low velocity drop-weight impact using efficient data filtering techniques

    NASA Astrophysics Data System (ADS)

    Farooq, Umar; Myler, Peter

    This work is concerned with physical testing of carbon fibrous laminated composite panels with low velocity drop-weight impacts from flat and round nose impactors. Eight, sixteen, and twenty-four ply panels were considered. Non-destructive damage inspections of tested specimens were conducted to approximate impact-induced damage. Recorded data were correlated to load-time, load-deflection, and energy-time history plots to interpret impact-induced damage. Data filtering techniques were also applied to the noisy data that are unavoidably generated by limitations of the testing and logging systems. Built-in, statistical, and numerical filters effectively predicted load thresholds for eight and sixteen ply laminates. However, flat nose impact of twenty-four ply laminates produced clipped data that can only be de-noised using oscillatory algorithms. Filtering and extrapolation of such data have received little attention in the literature and need to be investigated. The present work demonstrates filtering and extrapolation of the clipped data using a Fast Fourier Convolution algorithm to predict load thresholds. Selected results were compared to the damage zones identified with C-scan, and acceptable agreement has been observed. Based on the results, it is proposed that applying advanced data filtering and analysis methods to data collected with the available resources effectively enhances data interpretation without resorting to additional resources. The methodology could be useful for efficient and reliable data analysis and impact-induced damage prediction in similar cases.
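    Fourier-domain filtering of the kind described is easy to sketch: transform the load history, zero the harmonics above a cutoff, and invert. This toy version (signal shape, cutoff, and noise level are illustrative) shows only the de-noising step, not the paper's clipped-data extrapolation.

```python
import numpy as np

def fft_lowpass(y, keep=8):
    """Crude low-pass: zero every Fourier coefficient above `keep` harmonics."""
    c = np.fft.rfft(y)
    c[keep:] = 0.0
    return np.fft.irfft(c, n=len(y))

t = np.arange(256) / 256.0
clean = np.sin(2 * np.pi * 3 * t)               # slow load oscillation, 3 cycles
rng = np.random.default_rng(5)
noisy = clean + 0.3 * rng.standard_normal(256)
smooth = fft_lowpass(noisy, keep=8)
rms = np.sqrt(np.mean((smooth - clean) ** 2))   # residual error after filtering
```

    Keeping only the first few harmonics removes most of the broadband noise while leaving a slowly varying load-time curve intact.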

  15. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry]

    DOE PAGES

    Angland, P.; Haberberger, D.; Ivancic, S. T.; ...

    2017-10-30

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
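    The annealing loop itself is generic: perturb the profile parameters, recompute χ², and accept uphill moves with a temperature-dependent probability. A two-parameter toy version follows; the exponential profile, step size, and cooling schedule are all illustrative, not the paper's eight-parameter AFR model.

```python
import numpy as np

rng = np.random.default_rng(6)

def profile(p, x):
    """Hypothetical two-parameter density profile: n(x) = n0 * exp(-x / L)."""
    n0, L = p
    return n0 * np.exp(-x / L)

x = np.linspace(0.0, 5.0, 50)
data = profile((3.0, 1.5), x) + 0.02 * rng.standard_normal(50)  # synthetic image data

def chi2(p):
    return float(np.sum((profile(p, x) - data) ** 2 / 0.02 ** 2))

current = np.array([1.0, 1.0])                # initial guess
cur_c = chi2(current)
best, best_c = current.copy(), cur_c
temp = 50.0
for _ in range(20000):
    trial = current + 0.05 * rng.standard_normal(2)
    if trial[1] <= 0:                         # keep the scale length physical
        continue
    c = chi2(trial)
    # Metropolis acceptance: always downhill, occasionally uphill while hot.
    if c < cur_c or rng.random() < np.exp((cur_c - c) / temp):
        current, cur_c = trial, c
        if c < best_c:
            best, best_c = trial.copy(), c
    temp *= 0.9995                            # geometric cooling schedule
```

    The curvature of χ² around the best-fit point is also what yields the quoted statistical uncertainties on the recovered profile.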

  16. Angular filter refractometry analysis using simulated annealing [An improved method for characterizing plasma density profiles using angular filter refractometry]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angland, P.; Haberberger, D.; Ivancic, S. T.

    Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.

  17. Wiener filtering of the COBE Differential Microwave Radiometer data

    NASA Technical Reports Server (NTRS)

    Bunn, Emory F.; Fisher, Karl B.; Hoffman, Yehuda; Lahav, Ofer; Silk, Joseph; Zaroubi, Saleem

    1994-01-01

    We derive an optimal linear filter to suppress the noise from the Cosmic Background Explorer (COBE) satellite Differential Microwave Radiometer (DMR) sky maps for a given power spectrum. We then apply the filter to the first-year DMR data, after removing pixels within 20 deg of the Galactic plane from the data. We are able to identify particular hot and cold spots in the filtered maps at a level 2 to 3 times the noise level. We use the formalism of constrained realizations of Gaussian random fields to assess the uncertainty in the filtered sky maps. In addition to improving the signal-to-noise ratio of the map as a whole, these techniques allow us to recover some information about the cosmic microwave background anisotropy in the missing Galactic plane region. From these maps we are able to determine which hot and cold spots in the data are statistically significant, and which may have been produced by noise. In addition, the filtered maps can be used for comparison with other experiments on similar angular scales.

  18. Career Goal-Based E-Learning Recommendation Using Enhanced Collaborative Filtering and PrefixSpan

    ERIC Educational Resources Information Center

    Ma, Xueying; Ye, Lu

    2018-01-01

    This article describes how e-learning recommender systems nowadays apply different kinds of techniques to recommend personalized learning content to users based on their preferences, goals, interests, and background information. However, the cold-start problem that exists in traditional recommendation algorithms is still left over in…

  19. Mississippi State University Center for Air Sea Technology. FY93 and FY94 Research Program in Navy Ocean Modeling and Prediction

    DTIC Science & Technology

    1994-09-30

    relational versus object oriented DBMS, knowledge discovery, data models, metadata, data filtering, clustering techniques, and synthetic data. A secondary...The first was the investigation of AI/ES applications (knowledge discovery, data mining, and clustering). Here CAST collaborated with Dr. Fred Petry...knowledge discovery system based on clustering techniques; implemented an on-line data browser to the DBMS; completed preliminary efforts to apply object

  20. A Matched Filter Technique for Slow Radio Transient Detection and First Demonstration with the Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.

    2017-03-01

    Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies (<300 MHz) remains relatively unexplored. Blind surveys with new wide-field radio instruments are setting increasingly stringent limits on the transient surface density on various timescales. Although many of these instruments are limited by classical confusion noise from an ensemble of faint, unresolved sources, one can in principle detect transients below the classical confusion limit to the extent that the classical confusion noise is independent of time. We develop a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ˜20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
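    A temporal matched filter applied directly to an image time series amounts to correlating each pixel's light curve with a template of the sought duration. A toy sketch with a boxcar template follows; the transient amplitude, duration, and cube size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
nt, npix = 200, 100
cube = rng.standard_normal((nt, npix))       # image time series, noise only
cube[60:75, 17] += 2.0                       # faint 15-frame transient in pixel 17

width = 15
kernel = np.ones(width) / np.sqrt(width)     # unit-norm boxcar matched to duration
snr = np.apply_along_axis(
    lambda s: np.correlate(s, kernel, mode="valid"), 0, cube)
t_peak, pix = np.unravel_index(np.argmax(snr), snr.shape)
t_peak, pix = int(t_peak), int(pix)
```

    Because the statistics of this output are well defined under the null hypothesis, the same construction works whether the per-image noise floor is thermal or (time-independent) classical confusion.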

  1. Wab-InSAR: a new wavelet-based InSAR time series technique applied to volcanic and tectonic areas

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Shirzaei, M.; Nankali, H.; Roustaei, M.

    2009-12-01

    Modern geodetic techniques such as InSAR and GPS provide valuable observations of the deformation field. Because of the variety of environmental interferences (e.g., atmosphere, topography distortion) and the incompleteness of the models (assumption of a linear model for deformation), those observations are usually tainted by various systematic and random errors. Therefore, we develop and test new methods to identify and filter unwanted periodic or episodic artifacts to obtain accurate and precise deformation measurements. Here we present and implement a new wavelet-based InSAR (Wab-InSAR) time series approach. Because wavelets are excellent tools for identifying hidden patterns and capturing transient signals, we utilize wavelet functions to reduce the effect of atmospheric delay and digital elevation model inaccuracies. Wab-InSAR is a model-free technique, reducing digital elevation model errors in individual interferograms using a 2D spatial Legendre polynomial wavelet filter. Atmospheric delays are reduced using a 3D spatio-temporal wavelet transform algorithm and a novel technique for pixel selection. We apply Wab-InSAR to several targets, including volcano deformation processes on Hawaii Island and mountain building processes in Iran. Both targets are chosen to investigate large and small amplitude signals, variable and complex topography, and atmospheric effects. In this presentation we explain the different steps of the technique, validate the results by comparison to other high resolution processing methods (GPS, PS-InSAR, SBAS), and discuss the geophysical results.

  2. Wavelet theory applied to the study of spectra of trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Souza-Feliciano, A. C.; Alvarez-Candal, A.; Jiménez-Teja, Y.

    2018-06-01

    Context. Reflection spectroscopy in the near-infrared (NIR) is used to investigate the surface composition of trans-Neptunian objects (TNOs). In general, these spectra are difficult to interpret due to the low apparent brightness of the TNOs, causing low signal-to-noise ratios even in spectra obtained with the largest telescopes available on Earth, making it necessary to use filtering techniques to analyze and interpret them. Aims: The purpose of this paper is to present a methodology to analyze the spectra of TNOs. Specifically, our aim was to filter these spectra in the best possible way: maximizing noise removal, while minimizing the loss of signal. Methods: We used wavelets to filter the spectra. Wavelets are a mathematical tool that decomposes the signal into its constituent parts, allowing us to analyze the data in different frequency ranges with the resolution of each component tied to its scale. To check the reliability of our method, we compared the filtered spectra with the spectra of water and methanol ices to identify common structures between them. Results: Of the 50 TNOs in our sample, we identify traces of water and methanol ices in the spectra of several of them, some with previous reports and others without. Conclusions: We conclude that the wavelet technique is successful in filtering spectra of TNOs.
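    Wavelet filtering of a noisy spectrum decomposes it into approximation and detail coefficients, shrinks the details (where uncorrelated noise concentrates), and inverts. A self-contained Haar-wavelet sketch follows; the absorption-band shape, threshold, and noise level are illustrative, and the paper's analysis is not limited to Haar wavelets.

```python
import numpy as np

def haar_denoise(y, levels=3, thresh=0.15):
    """Multi-level Haar transform; soft-threshold the detail coefficients,
    then invert. A minimal stand-in for more general wavelet filtering."""
    a = np.asarray(y, dtype=float)
    details = []
    for _ in range(levels):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
        details.append(detail)
        a = approx
    for detail in reversed(details):          # inverse transform
        out = np.empty(2 * len(a))
        out[0::2] = (a + detail) / np.sqrt(2)
        out[1::2] = (a - detail) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(9)
x = np.linspace(0.0, 1.0, 256)
spectrum = 1.0 - 0.4 * np.exp(-0.5 * ((x - 0.5) / 0.08) ** 2)  # absorption-like band
noisy = spectrum + 0.05 * rng.standard_normal(256)
den = haar_denoise(noisy)
rms_before = np.sqrt(np.mean((noisy - spectrum) ** 2))
rms_after = np.sqrt(np.mean((den - spectrum) ** 2))
```

    Because the transform is orthonormal, white noise spreads evenly over all coefficients while the broad band concentrates in few, which is what makes the thresholding selective.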

  3. Microscopy with spatial filtering for sorting particles and monitoring subcellular morphology

    NASA Astrophysics Data System (ADS)

    Zheng, Jing-Yi; Qian, Zhen; Pasternack, Robert M.; Boustany, Nada N.

    2009-02-01

    Optical scatter imaging (OSI) was developed to non-invasively track real-time changes in particle morphology with submicron sensitivity in situ without exogenous labeling, cell fixing, or organelle isolation. For spherical particles, the intensity ratio of wide-to-narrow angle scatter (OSIR, Optical Scatter Image Ratio) was shown to decrease monotonically with diameter and agree with Mie theory. In living cells, we recently reported that this technique is able to detect mitochondrial morphological alterations, which were mediated by the Bcl-xL transmembrane domain and could not be observed in fluorescence or differential interference contrast images. Here we further extend the capability of morphology assessment by adopting a digital micromirror device (DMD) for Fourier filtering. When placed in the Fourier plane, the DMD can be used to select scattering intensities at any desired combination of scattering angles. We designed an optical filter bank consisting of Gabor-like filters at various scales and rotations; Gabor filters have been widely used for localization of spatial and frequency information in digital images and for texture analysis. Using a model system consisting of mixtures of polystyrene spheres and bacteria, we show how this system can be used to sort particles on a microscope slide based on their size, orientation, and aspect ratio. We are currently applying this technique to characterize the morphology of subcellular organelles to help understand fundamental biological processes.
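    A Gabor-like filter is a plane wave under a Gaussian window, so a bank over scales and rotations responds selectively to oriented, band-limited structure. A minimal numerical sketch follows; the kernel sizes, wavelengths, and test grating are illustrative, not the DMD filter bank itself.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: a plane wave windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # coordinate along the wave
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

# A small bank over 2 scales and 4 orientations.
bank = [gabor_kernel(21, wl, th, sigma=4.0)
        for wl in (4.0, 8.0)
        for th in np.linspace(0.0, np.pi, 4, endpoint=False)]

# Response of an oriented grating: the matched scale/orientation wins.
y, x = np.mgrid[0:64, 0:64]
grating = np.cos(2 * np.pi * x / 8.0)                # vertical stripes, wavelength 8
scores = [float(np.abs(np.sum(grating[22:43, 22:43] * k))) for k in bank]
best = int(np.argmax(scores))                        # index of the matched filter
```

    Filter index 4 (wavelength 8, orientation 0) matches the grating, so comparing responses across the bank reads out size, orientation, and aspect-ratio information, analogous to the scatter-angle selection described above.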

  4. B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms

    NASA Astrophysics Data System (ADS)

    Bueno, G.; Sánchez, S.; Ruiz, M.

    2006-10-01

    Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce women's mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study the usefulness of B-spline filtering, based on a gradient scheme and compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of CAD systems. The technique has been applied to tissues of different density. A qualitative validation shows the success of the method.

  5. Computerized evaluation of holographic interferograms for fatigue crack detection in riveted lap joints

    NASA Astrophysics Data System (ADS)

    Zhou, Xiang

    Using an innovative portable holographic inspection and testing system (PHITS) developed at the Australian Defence Force Academy, fatigue cracks in riveted lap joints can be detected by visually inspecting the abnormal fringe changes recorded on holographic interferograms. In this thesis, for automatic crack detection, some modern digital image processing techniques are investigated and applied to holographic interferogram evaluation. Fringe analysis algorithms are developed for identification of the crack-induced fringe changes. Theoretical analysis of PHITS and riveted lap joints and two typical experiments demonstrate that the fatigue cracks in lightly-clamped joints induce two characteristic fringe changes: local fringe discontinuities at the cracking sites; and the global crescent fringe distribution near to the edge of the rivet hole. Both of the fringe features are used for crack detection in this thesis. As a basis of the fringe feature extraction, an algorithm for local fringe orientation calculation is proposed. For high orientation accuracy and computational efficiency, Gaussian gradient filtering and neighboring direction averaging are used to minimize the effects of image background variations and random noise. The neighboring direction averaging is also used to approximate the fringe directions in centerlines of bright and dark fringes. Experimental results indicate that for high orientation accuracy the scales of the Gaussian filter and neighboring direction averaging should be chosen according to the local fringe spacings. The orientation histogram technique is applied to detect the local fringe discontinuity due to the fatigue cracks. The Fourier descriptor technique is used to characterize the global fringe distribution change from a circular to a crescent distribution with the fatigue crack growth. Experiments and computer simulations are conducted to analyze the detectability and reliability of crack detection using the two techniques. 
Results demonstrate that the Fourier descriptor technique is more promising for the detection of short cracks near the edge of the rivet head. However, it is not as reliable as the fringe orientation technique for detection of long through cracks. For reliability, both techniques should be used in practical crack detection. Neither the Fourier descriptor technique nor the orientation histogram technique has previously been applied to holographic interferometry. While this work relates primarily to interferograms of cracked rivets, the techniques could readily be applied to other areas of fringe pattern analysis.

  6. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of the background concentration of PM2.5 is important to understand the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques in estimating the PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval, and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of the background PM concentration to ambient pollution was at a comparable level to the contribution obtained from the previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
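    The recursive digital filters named above share one structure: a one-pass recursion that routes rapid fluctuations into a "quick" component and leaves a smooth baseline. A sketch of the Lyne-Hollick form on synthetic PM2.5 data follows; the filter parameter 0.925 is a commonly used default, and the seasonal curve and spikes are illustrative.

```python
import numpy as np

def lyne_hollick(y, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter.
    Returns the slowly varying baseline; y - baseline is the quick component."""
    quick = np.zeros_like(y)
    for t in range(1, len(y)):
        q = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (y[t] - y[t - 1])
        quick[t] = max(q, 0.0)                 # quick component cannot be negative
    return y - quick

rng = np.random.default_rng(10)
days = np.arange(365)
background = 30 + 10 * np.sin(2 * np.pi * days / 365)     # smooth seasonal level
spikes = np.where(rng.random(365) < 0.05,                 # sporadic pollution events
                  rng.gamma(2.0, 20.0, 365), 0.0)
pm = background + spikes
base = lyne_hollick(pm)
```

    In practice the filter parameter (or recession constant) is calibrated, as the paper does, against a period when local emissions are known to be suppressed.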

  7. Usability-driven pruning of large ontologies: the case of SNOMED CT

    PubMed Central

    Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-01-01

    Objectives To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217

  8. Weighted hybrid technique for recommender system

    NASA Astrophysics Data System (ADS)

    Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.

    2017-12-01

    Recommender systems have become very popular and play an important role in information systems and webpages nowadays. A recommender system tries to predict which items a user may like based on his activity on the system. There are some familiar techniques for building a recommender system, such as content-based filtering and collaborative filtering. Content-based filtering does not involve human opinions in making the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give predictions for items that have never been rated by any user. To cover the drawbacks of each approach with the advantages of the other, both approaches can be combined in what is known as a hybrid technique. The hybrid technique used in this work is the weighted technique, in which the prediction score is a linear combination of the scores produced by the combined techniques. The purpose of this work is to show how a weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system, and to compare performance when both approaches are combined and when each approach works alone. Three experiments were done in this work, combining both techniques with different parameters. The results show that the weighted hybrid technique does not substantially boost performance, but it helps to give prediction scores for unrated movies that cannot be recommended using collaborative filtering alone.
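
    The weighted combination at the heart of such a hybrid can be sketched in a few lines. The weights and the cold-start fallback rule below are illustrative assumptions, not the parameters tuned in the paper's three experiments.

```python
def weighted_hybrid(content_score, collab_score, w_content=0.4, w_collab=0.6):
    """Linear combination of the two predictors' scores.

    If collaborative filtering cannot score the item (no ratings yet,
    collab_score is None), fall back entirely on the content-based score.
    """
    if collab_score is None:
        return content_score
    return w_content * content_score + w_collab * collab_score

# a rated movie: both predictors contribute to the final score
both = weighted_hybrid(4.0, 3.5)   # 0.4 * 4.0 + 0.6 * 3.5 = 3.7
# an unrated movie: collaborative filtering has no score to offer
cold = weighted_hybrid(4.0, None)  # content-based score alone
```

    The fallback branch is what lets the hybrid still recommend items no user has rated yet.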

  9. Assessment of Snared-Loop Technique When Standard Retrieval of Inferior Vena Cava Filters Fails

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doody, Orla, E-mail: orla_doody@hotmail.com; Noe, Geertje; Given, Mark F.

    Purpose To identify the success and complications related to a variant technique used to retrieve inferior vena cava filters when the simple snare approach has failed. Methods A retrospective review of all Cook Guenther Tulip filters and Cook Celect filters retrieved between July 2006 and February 2008 was performed. During this period, 130 filter retrievals were attempted. In 33 cases, the standard retrieval technique failed. Retrieval was subsequently attempted with our modified retrieval technique. Results The retrieval was successful in 23 cases (mean dwell time, 171.84 days; range, 5-505 days) and unsuccessful in 10 cases (mean dwell time, 162.2 days; range, 94-360 days). Our filter retrievability rates increased from 74.6% with the standard retrieval method to 92.3% when the snared-loop technique was used. Unsuccessful retrieval was due to significant endothelialization (n = 9) and caval penetration by the filter (n = 1). A single complication occurred in the group, in a patient developing pulmonary emboli after attempted retrieval. Conclusion The technique we describe increased the retrievability of the two filters studied. Hook endothelialization is the main factor resulting in failed retrieval and continues to be a limitation with these filters.

  10. X-31 aerodynamic characteristics determined from flight data

    NASA Technical Reports Server (NTRS)

    Kokolios, Alex

    1993-01-01

    The lateral aerodynamic characteristics of the X-31 were determined at angles of attack ranging from 20 to 45 deg. Estimates of the lateral stability and control parameters were obtained by applying two parameter estimation techniques, linear regression, and the extended Kalman filter to flight test data. An attempt to apply maximum likelihood to extract parameters from the flight data was also made but failed for the reasons presented. An overview of the System Identification process is given. The overview includes a listing of the more important properties of all three estimation techniques that were applied to the data. A comparison is given of results obtained from flight test data and wind tunnel data for four important lateral parameters. Finally, future research to be conducted in this area is discussed.

  11. Nanoceramics for blood-borne virus removal.

    PubMed

    Zhao, Yufeng; Sugiyama, Sadahiro; Miller, Thomas; Miao, Xigeng

    2008-05-01

    The development of nanoscience and nanotechnology in the field of ceramics has brought new opportunities for the development of virus-removal techniques. A number of nanoceramics, including nanostructured alumina, titania and zirconia, have been introduced for application in virus removal or separation. Filtration or adsorption of viruses, and thus their removal, by nanoceramics such as nanoporous/mesoporous ceramic membranes, ceramic nanofibers and ceramic nanoparticles will make it possible to produce an efficient system for virus removal from blood, and one with excellent chemical/thermal stability. Currently, nanoceramic membranes and filters based on sol-gel alumina membranes and NanoCeram nanofiber filters have been commercialized and applied to remove viruses from the blood. Nevertheless, filtration using nanoporous filters is limited to the removal of only free viruses in the bloodstream.

  12. Development of a variable structure-based fault detection and diagnosis strategy applied to an electromechanical system

    NASA Astrophysics Data System (ADS)

    Gadsden, S. Andrew; Kirubarajan, T.

    2017-05-01

    Signal processing techniques are prevalent in a wide range of fields: control, target tracking, telecommunications, robotics, fault detection and diagnosis, and even stock market analysis, to name a few. Although first introduced in the 1950s, the most popular method used for signal processing and state estimation remains the Kalman filter (KF). The KF offers an optimal solution to the estimation problem under strict assumptions. Since then, a number of other estimation strategies and filters have been introduced to overcome robustness issues, such as the smooth variable structure filter (SVSF). In this paper, properties of the SVSF are explored in an effort to detect and diagnose faults in an electromechanical system. The results are compared with the KF method, and future work is discussed.

  13. Novel and Advanced Techniques for Complex IVC Filter Retrieval.

    PubMed

    Daye, Dania; Walker, T Gregory

    2017-04-01

    Inferior vena cava (IVC) filter placement is indicated for the treatment of venous thromboembolism (VTE) in patients with a contraindication to or a failure of anticoagulation. With the advent of retrievable IVC filters and their ease of placement, an increasing number of such filters are being inserted for prophylaxis in patients at high risk for VTE. Available data show that only a small number of these filters are retrieved within the recommended period, if at all, prompting the FDA to issue a statement on the need for their timely removal. With prolonged dwell times, advanced techniques may be needed for filter retrieval in up to 60% of cases. In this article, we review standard and advanced IVC filter retrieval techniques, including single-access, dual-access, and dissection techniques. Complicated filter retrievals carry a non-negligible risk of complications such as filter fragmentation with embolization of filter components, venous pseudoaneurysms or stenoses, and breach of the integrity of the caval wall. Careful pre-retrieval assessment of IVC filter position, of any significant degree of filter tilt, of hook and/or strut epithelialization, and of caval wall penetration by filter components should be performed using dedicated cross-sectional imaging for procedural planning. In complex cases, the risk of retrieval complications should be carefully weighed against the risks of leaving the filter permanently indwelling. The decision to remove an embedded IVC filter using advanced techniques should be individualized to each patient and made with caution, based on the patient's age and existing comorbidities.

  14. Günther Tulip inferior vena cava filter retrieval using a bidirectional loop-snare technique.

    PubMed

    Ross, Jordan; Allison, Stephen; Vaidya, Sandeep; Monroe, Eric

    2016-01-01

    Many advanced techniques have been reported in the literature for difficult Günther Tulip filter removal. This report describes a bidirectional loop-snare technique in the setting of a fibrin scar formation around the filter leg anchors. The bidirectional loop-snare technique allows for maximal axial tension and alignment for stripping fibrin scar from the filter legs, a commonly encountered complication of prolonged dwell times.

  15. Improving the Held and Karp Approach with Constraint Programming

    NASA Astrophysics Data System (ADS)

    Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan

    Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
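
    The 1-tree relaxation underlying the Held-Karp bound is easy to sketch: build a minimum spanning tree over all cities except one, then add the two cheapest edges incident to the excluded city. The code below uses plain Prim's algorithm, without the Lagrangian penalty updates or the CP domain filtering the paper adds, and the toy instance is our own.

```python
import math

def prim_mst_cost(dist, nodes):
    """Cost of a minimum spanning tree over `nodes` (Prim's algorithm)."""
    nodes = list(nodes)
    cost = 0.0
    best = {v: dist[nodes[0]][v] for v in nodes[1:]}  # cheapest link into tree
    while best:
        v = min(best, key=best.get)   # attach the closest outside node
        cost += best.pop(v)
        for u in best:
            if dist[v][u] < best[u]:
                best[u] = dist[v][u]
    return cost

def one_tree_bound(dist):
    """Held-Karp 1-tree lower bound on the TSP: an MST over nodes 1..n-1
    plus the two cheapest edges incident to node 0."""
    n = len(dist)
    mst = prim_mst_cost(dist, range(1, n))
    two_cheapest = sorted(dist[0][v] for v in range(1, n))[:2]
    return mst + sum(two_cheapest)

# 4 points on a unit square: the optimal tour has length 4
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
bound = one_tree_bound(dist)
```

    Every tour is a 1-tree, so the bound can never exceed the optimal tour length; on this symmetric instance it is tight.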

  16. A New Adaptive Framework for Collaborative Filtering Prediction

    PubMed Central

    Almosallam, Ibrahim A.; Shang, Yi

    2010-01-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924
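
    A minimal sketch of the z-score idea follows: each user's observed ratings are standardized, neighbors' standardized ratings for the target item are combined by similarity, and the result is mapped back to the target user's own scale. The similarity measure and the toy rating matrix are our own illustrative choices, not the adaptive, density-based combination the paper proposes.

```python
import numpy as np

def zscore_predict(R, user, item):
    """Predict R[user, item] by user-based CF on z-score-normalized ratings.

    R holds ratings with np.nan for missing entries. Each user's observed
    ratings are converted to z-scores; neighbors' z-scores for the target
    item are combined with similarity weights computed on co-rated items,
    and the result is mapped back to the target user's rating scale.
    """
    means = np.nanmean(R, axis=1)
    stds = np.nanstd(R, axis=1)
    stds[stds == 0] = 1.0
    Z = (R - means[:, None]) / stds[:, None]

    num, den = 0.0, 0.0
    for v in range(R.shape[0]):
        if v == user or np.isnan(R[v, item]):
            continue
        both = ~np.isnan(Z[user]) & ~np.isnan(Z[v])
        if not both.any():
            continue
        sim = np.dot(Z[user, both], Z[v, both])  # similarity on co-rated items
        num += sim * Z[v, item]
        den += abs(sim)
    z = num / den if den else 0.0
    return means[user] + z * stds[user]

R = np.array([[5.0, 4.0, np.nan],
              [4.0, 3.0, 4.0],
              [1.0, 2.0, 1.0]])
pred = zscore_predict(R, user=0, item=2)
```

    User 0 agrees with user 1 and disagrees with user 2, and both neighbors' z-scores point the same way, so the prediction lands above user 0's mean of 4.5.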

  17. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.

  18. Label-free optical lymphangiography: development of an automatic segmentation method applied to optical coherence tomography to visualize lymphatic vessels using Hessian filters

    PubMed Central

    Yousefi, Siavash; Qin, Jia; Zhi, Zhongwei

    2013-01-01

    Abstract. Lymphatic vessels are a part of the circulatory system that collect plasma and other substances that have leaked from the capillaries into interstitial fluid (lymph) and transport lymph back to the circulatory system. Since lymph is transparent, lymphatic vessels appear as dark hollow vessel-like regions in optical coherence tomography (OCT) cross-sectional images. We propose an automatic method to segment the lymphatic vessel lumen from OCT structural cross sections using eigenvalues of Hessian filters. Compared to the existing method based on intensity thresholding, Hessian filters are more selective of vessel shape and less sensitive to intensity variations and noise. Using this segmentation technique along with optical micro-angiography allows label-free noninvasive simultaneous visualization of blood and lymphatic vessels in vivo. Lymphatic vessels play an important role in cancer, immune system response, inflammatory disease, wound healing and tissue regeneration. Development of imaging techniques and visualization tools for lymphatic vessels is valuable in understanding the mechanisms and studying therapeutic methods in related disease and tissue response. PMID:23922124
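
    The core of Hessian-eigenvalue segmentation can be sketched with Gaussian-derivative filtering; the scale, threshold, and toy image below are illustrative assumptions rather than the authors' parameters. For a dark tube on a bright background, the larger-magnitude eigenvalue of the Hessian is strongly positive inside the lumen and near zero elsewhere.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(img, sigma=2.0):
    """Eigenvalues of the scale-space Hessian at every pixel.

    Second derivatives come from Gaussian-derivative filtering; the 2x2
    Hessian [[Ixx, Ixy], [Ixy, Iyy]] has a closed-form eigendecomposition.
    Returns (lam1, lam2) ordered so that |lam1| <= |lam2|.
    """
    Ixx = gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2 (columns)
    Iyy = gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2 (rows)
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    root = np.sqrt((Ixx - Iyy) ** 2 + 4 * Ixy ** 2)
    lam_a = 0.5 * (Ixx + Iyy + root)
    lam_b = 0.5 * (Ixx + Iyy - root)
    swap = np.abs(lam_a) > np.abs(lam_b)
    lam1 = np.where(swap, lam_b, lam_a)
    lam2 = np.where(swap, lam_a, lam_b)
    return lam1, lam2

# toy scene: a dark horizontal "vessel" on a bright background
img = np.full((64, 64), 200.0)
img[30:34, :] = 50.0
lam1, lam2 = hessian_eigenvalues(img)
mask = lam2 > 0.5 * lam2.max()  # dark tube: lam2 strongly positive inside
```

    Unlike a raw intensity threshold, the eigenvalue test responds to the tubular shape, which is what makes it less sensitive to intensity variations.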

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Son, Young-Sun; Yoon, Wang-Jung

    The purpose of this study is to map pyrophyllite distribution at the surface of the Nohwa deposit, Korea, using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data. For this, a combined Spectral Angle Mapper (SAM) and Matched Filtering (MF) technique based on a mathematical algorithm was applied. The regional distribution of high-grade and low-grade pyrophyllite in the Nohwa deposit area could be differentiated by this method. The results of this study show that ASTER data analysis using a combination of SAM and MF techniques will assist in exploration of pyrophyllite at the exposed surface.

  20. NTilt as an improved enhanced tilt derivative filter for edge detection of potential field anomalies

    NASA Astrophysics Data System (ADS)

    Nasuti, Yasin; Nasuti, Aziz

    2018-07-01

    We develop a new phase-based filter, called NTilt, to enhance the edges of geological sources in potential-field data; it introduces the vertical derivative of the analytical signal, in different orders, into the tilt derivative equation. This equalizes signals from sources buried at different depths. To evaluate the designed filter, we compared its results with those of recently applied methods, testing against both synthetic data and measured data from the Finnmark region of northern Norway. The results demonstrate that the new filter permits better definition of the edges of causative anomalies, and highlights several anomalies that are either absent or poorly defined in the tilt derivative and other methods. The proposed technique also improves delineation of the actual edges of deep-seated anomalies compared with the tilt derivative and other methods. The NTilt filter provides more accurate and sharper edges, makes nearby anomalies more distinguishable, and avoids introducing false edges, reducing the ambiguity of potential-field interpretations. This filter thus appears promising for providing a better qualitative interpretation of gravity and magnetic data in comparison with the more commonly used filters.
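
    For reference, the classical tilt derivative that NTilt builds on can be sketched as follows. The wavenumber-domain vertical derivative and the synthetic Gaussian anomaly are standard textbook choices, not the authors' NTilt formulation.

```python
import numpy as np

def tilt_derivative(field, dx=1.0, dy=1.0):
    """Tilt derivative of a gridded potential-field anomaly:
    arctan( dT/dz / sqrt((dT/dx)^2 + (dT/dy)^2) ).

    Horizontal derivatives are finite differences; the vertical derivative
    is computed in the wavenumber domain, where d/dz corresponds to
    multiplication by |k|.
    """
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    dz = np.real(np.fft.ifft2(np.fft.fft2(field) * k))
    gy, gx = np.gradient(field, dy, dx)
    h = np.hypot(gx, gy)
    return np.arctan2(dz, h)  # bounded to (-pi/2, pi/2]

# synthetic anomaly: a single Gaussian "body" in the grid center
y, x = np.mgrid[0:128, 0:128]
field = np.exp(-((x - 64)**2 + (y - 64)**2) / (2 * 10.0**2))
tdr = tilt_derivative(field)
```

    The arctangent bounds the output regardless of source depth, which is why tilt-style filters equalize shallow and deep anomalies.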

  1. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928
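
    The SART building block of such a framework can be sketched as below. This shows only the standard SART update (the half-threshold operator of Xu et al. and the DGT pseudoinverse are omitted), and the toy system matrix is our own stand-in for a projection operator.

```python
import numpy as np

def sart_step(x, A, b, lam=1.0):
    """One SART update: x + lam * V^{-1} A^T W (b - A x), where
    W = diag(1 / row sums of A) and V = diag(column sums of A)."""
    weighted_residual = (b - A @ x) / A.sum(axis=1)      # W (b - A x)
    return x + lam * (A.T @ weighted_residual) / A.sum(axis=0)

rng = np.random.default_rng(0)
A = rng.random((40, 10))   # toy nonnegative system ("projection") matrix
x_true = rng.random(10)
b = A @ x_true             # consistent projection data

x = np.zeros(10)           # reconstruct from scratch
for _ in range(200):
    x = sart_step(x, A, b)
```

    In the paper's framework a thresholding step would be interleaved with updates like this one to enforce the sparsity constraint.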

  2. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The [Formula: see text] regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the [Formula: see text] norm (0 < p < 1) and solve the [Formula: see text] minimization problem. Very recently, Xu et al. developed an analytic solution for the [Formula: see text] regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.

  3. ERS-2 SAR and IRS-1C LISS III data fusion: A PCA approach to improve remote sensing based geological interpretation

    NASA Astrophysics Data System (ADS)

    Pal, S. K.; Majumdar, T. J.; Bhattacharya, Amit K.

    Fusion of optical and synthetic aperture radar data has been attempted in the present study for mapping of various lithologic units over a part of the Singhbhum Shear Zone (SSZ) and its surroundings. ERS-2 SAR data over the study area have been enhanced using a Fast Fourier Transformation (FFT)-based filtering approach, and also using the Frost filtering technique. Both enhanced SAR images have then been separately fused with the histogram-equalized IRS-1C LISS III image using the Principal Component Analysis (PCA) technique. Later, the Feature-oriented Principal Components Selection (FPCS) technique has been applied to generate False Color Composite (FCC) images, from which the corresponding geological maps have been prepared. Finally, GIS techniques have been successfully used for change detection analysis of the lithological interpretation between the published geological map and the fusion-based geological maps. In general, there is good agreement between these maps over a large portion of the study area. Based on the change detection studies, a few areas could be identified which need attention for further detailed ground-based geological studies.

  4. Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.

    PubMed

    Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven

    2014-11-01

    Superharmonic imaging improves the spatial resolution by using the higher order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that are not possible to filter in time and frequency domains. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the Bmode image quality. In this study, the isolation of higher order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed by using linear frequency modulated chirp excitation with varying bandwidths of 10% to 50%. Superharmonic imaging was performed on a wire phantom using a wideband chirp excitation. Results were presented with and without applying the FChT filtering technique by comparing the spatial resolution and side lobe levels. Wideband excitation signals achieved a better resolution as expected, however range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth. 
    The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without affecting the axial resolution.

  5. Home Safety

    MedlinePlus


  6. Third-order linearization for self-beating filtered microwave photonic systems using a dual parallel Mach-Zehnder modulator.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Capmany, José; Fandiño, Javier S; Muñoz, Pascual; Alavi, Hossein

    2016-09-05

    We develop, analyze and apply a linearization technique based on a dual parallel Mach-Zehnder modulator to self-beating microwave photonic systems. The approach enables broadband low-distortion transmission and reception at the expense of a moderate electrical power penalty, yielding a small optical power penalty (<1 dB).

  7. Trigonometric Transforms for Image Reconstruction

    DTIC Science & Technology

    1998-06-01

    applying trigonometric transforms to image reconstruction problems. Many existing linear image reconstruction techniques rely on knowledge of...ancestors. The research performed for this dissertation represents the first time the symmetric convolution-multiplication property of trigonometric...Fourier domain. The traditional representation of these filters will be similar to new trigonometric transform versions derived in later chapters

  8. Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Bogdan; Haskal, Ziv J., E-mail: ziv2@mac.com

    Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters that can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful, with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or cava during IVC retrieval. This review describes the complex techniques for filter retrieval, including use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches, as well as illustrates their specific application.

  9. Fusion of WiFi, smartphone sensors and landmarks using the Kalman filter for indoor localization.

    PubMed

    Chen, Zhenghua; Zou, Han; Jiang, Hao; Zhu, Qingchang; Soh, Yeng Chai; Xie, Lihua

    2015-01-05

    Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this will be exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system is running on a smartphone, which is resource limited, we formulate the sensor fusion problem in a linear perspective, then a Kalman filter is applied instead of a particle filter, which is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m.
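
    A one-dimensional sketch of the linear fusion idea: PDR step lengths drive the prediction, WiFi fixes drive the update, and a detected landmark enters as an extra, much lower-noise position measurement. All noise parameters and the toy trajectory below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def kalman_1d(steps, wifi, landmarks=None, q=0.04, r_wifi=9.0, r_lm=0.01):
    """Scalar Kalman filter fusing PDR and WiFi along one corridor axis.

    steps     : per-epoch PDR displacement (m) -- drives the prediction
    wifi      : per-epoch WiFi position fix (m) -- drives the update
    landmarks : {epoch: position} of landmark detections, fused as extra
                low-noise position measurements
    q, r_*    : process / measurement noise variances (m^2)
    """
    x, p = wifi[0], r_wifi            # initialize from the first WiFi fix
    track = [x]
    for k in range(1, len(steps)):
        x, p = x + steps[k], p + q    # predict with the PDR step
        zs = [(wifi[k], r_wifi)]
        if landmarks and k in landmarks:
            zs.append((landmarks[k], r_lm))
        for z, r in zs:               # sequential scalar updates
            g = p / (p + r)           # Kalman gain
            x, p = x + g * (z - x), (1 - g) * p
        track.append(x)
    return np.array(track)

rng = np.random.default_rng(1)
truth = np.cumsum(np.full(50, 0.7))                  # walking 0.7 m per step
steps = np.full(50, 0.7) + rng.normal(0, 0.05, 50)   # slightly noisy PDR
wifi = truth + rng.normal(0, 3.0, 50)                # noisy WiFi fixes
track = kalman_1d(steps, wifi, landmarks={25: truth[25]})
err_kf = np.abs(track - truth).mean()
err_wifi = np.abs(wifi - truth).mean()
```

    Because the fusion problem is kept linear, this closed-form update replaces the many weighted samples a particle filter would carry, which is what makes it cheap enough for a smartphone.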

  10. Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization

    PubMed Central

    Chen, Zhenghua; Zou, Han; Jiang, Hao; Zhu, Qingchang; Soh, Yeng Chai; Xie, Lihua

    2015-01-01

    Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this will be exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system is running on a smartphone, which is resource limited, we formulate the sensor fusion problem in a linear perspective, then a Kalman filter is applied instead of a particle filter, which is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m. PMID:25569750

  11. Near-infrared spectral image analysis of pork marbling based on Gabor filter and wide line detector techniques.

    PubMed

    Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O

    2014-01-01

    Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which raises efficiency costs for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm) together with the proper image processing techniques was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart from the National Pork Producers Council. Image analysis techniques (Gabor filter, wide line detector, and spectral averaging) were applied to extract texture, line, and spectral features, respectively, from NIR images of pork. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regressions based on derivatives of mean spectra and line features at key wavelengths. The results showed that the derivatives of both texture and spectral features produced good results, with validation correlation coefficients of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results revealed the great potential of the Gabor filter for analyzing NIR images of pork for the effective and efficient objective evaluation of pork marbling.
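
    A Gabor filter of the kind used for the texture features can be sketched directly; the kernel parameters and the toy grating below are illustrative, not the wavelengths or filter bank selected in the study. The filter responds strongly only when its carrier orientation matches the texture.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real (even) Gabor kernel: a Gaussian envelope times a cosine
    carrier at spatial frequency `freq` (cycles/pixel), oriented at `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# texture energy of a vertical grating, measured at two orientations
img = np.cos(2 * np.pi * 0.1 * np.arange(64))[None, :] * np.ones((64, 1))
resp_0 = fftconvolve(img, gabor_kernel(0.1, 0.0), mode='valid')
resp_90 = fftconvolve(img, gabor_kernel(0.1, np.pi / 2), mode='valid')
energy_0 = (resp_0**2).mean()    # matched orientation: large response
energy_90 = (resp_90**2).mean()  # orthogonal orientation: small response
```

    A bank of such kernels at several frequencies and orientations yields the orientation-selective texture energies used as regression features.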

  12. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to iteratively solve d three-point Laplacian systems. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. Our method also achieves results of quality comparable to state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Moreover, exploiting the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
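    The 1D building block is a three-point (tridiagonal) weighted-least-squares system solved in linear time by the Thomas algorithm. The sketch below smooths a 1-D signal with edge-stopping weights; the weight function and lambda are illustrative choices, not the paper's exact formulation.

```python
import math

# 1-D sketch of a separable WLS smoother: solve (I + lam*L_w) u = f, where
# L_w is a three-point Laplacian with edge-stopping weights, via the
# linear-time Thomas algorithm. lam and sigma are illustrative.

def fgs_1d(f, lam=5.0, sigma=0.1):
    n = len(f)
    # Small weight across large jumps -> the edge is preserved.
    w = [math.exp(-abs(f[i + 1] - f[i]) / sigma) for i in range(n - 1)]
    lo = [0.0] + [-lam * w[i - 1] for i in range(1, n)]   # sub-diagonal
    up = [-lam * w[i] for i in range(n - 1)] + [0.0]      # super-diagonal
    di = [1.0 - lo[i] - up[i] for i in range(n)]          # 1 + lam*(w_l + w_r)
    # Thomas algorithm: forward elimination, then back substitution.
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = up[0] / di[0], f[0] / di[0]
    for i in range(1, n):
        m = di[i] - lo[i] * cp[i - 1]
        cp[i] = up[i] / m
        dp[i] = (f[i] - lo[i] * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Noisy step: the flat parts are smoothed, the jump survives.
noisy_step = [0.0, 0.02, -0.01, 0.01, 1.0, 0.99, 1.02, 1.0]
smooth = fgs_1d(noisy_step)
```

    For a 2D image the same solver would be applied along rows, then columns, alternating for a few iterations.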

  13. A new methodology for quantifying the impact of water repellency on the filtering function of soils

    NASA Astrophysics Data System (ADS)

    Müller, Karin; Deurer, Markus; Kawamoto, Ken; Hiradate, Syuntaro; Komatsu, Toshiko; Clothier, Brent

    2014-05-01

    Soils deliver a range of ecosystem services, and some of the most valuable relate to the regulating services resulting from the buffering and filtering of solutes by soil. However, it is commonly accepted that soil water repellency (SWR) can lead to finger flow and preferential flow. Yet, there have been few attempts to quantify the impact of such flow phenomena on the buffering and filtering of solutes. No method is available to quantify directly how SWR affects the transport of reactive solutes. We have closed this gap and developed a new method for quantifying solute transport by novel experiments with water-repellent soils. It involves sequentially applying two liquids, one water, and the other a reference fully wetting liquid, namely, aqueous ethanol, to the same intact soil core with air-drying between the application of the two liquids. Our results highlight that sorption experiments are necessary to complement our new method to ascertain directly the impact of SWR on the filtering of a solute. We conducted transport and sorption experiments, by applying our new method, with the herbicide 2,4-Dichlorophenoxyacetic acid and two Andosol top-soils; one from Japan and the other one from New Zealand. Breakthrough curves from the water experiments were characterized by preferential flow with high initial concentrations, tailing and a long prevalence of solutes remaining in the soil. Our results clearly demonstrate and quantify the impact of SWR on the leaching of this herbicide. This technique for quantifying the reduction of the soil's filtering efficiency by SWR enables assessment of the increased risk of groundwater contamination by solutes exogenously applied to water-repellent soils.

  14. Transfemoral Filter Eversion Technique following Unsuccessful Retrieval of Option Inferior Vena Cava Filters: A Single Center Experience.

    PubMed

    Posham, Raghuram; Fischman, Aaron M; Nowakowski, Francis S; Bishay, Vivian L; Biederman, Derek M; Virk, Jaskirat S; Kim, Edward; Patel, Rahul S; Lookstein, Robert A

    2017-06-01

    This report describes the technical feasibility of using the filter eversion technique after unsuccessful retrieval attempts of Option and Option ELITE (Argon Medical Devices, Inc, Athens, Texas) inferior vena cava (IVC) filters. This technique entails the use of endoscopic forceps to evert this specific brand of IVC filter into a sheath inserted into the common femoral vein, in the opposite direction in which the filter is designed to be removed. Filter eversion was attempted in 25 cases with a median dwell time of 134 days (range, 44-2,124 d). Retrieval success was 100% (25/25 cases), with an overall complication rate of 8%. This technique warrants further study. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.

  15. Investigation of smoothness-increasing accuracy-conserving filters for improving streamline integration through discontinuous fields.

    PubMed

    Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K

    2008-01-01

    Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-increasing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.

  16. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computational cost and solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). Source encoding is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and to slow convergence from the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradient generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of the entire set of shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
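    The mini-batch idea can be sketched on a toy linear inverse problem: each iteration forms the misfit gradient from a random subset of "shots" (rows of the forward operator) and passes it through a simple smoothing filter before the model update. The 3-point smoothing here is a crude stand-in for structure-oriented smoothing, and all sizes and step lengths are illustrative assumptions.

```python
import random

# Toy mini-batch inversion sketch: random shot subsets per iteration plus a
# smoothed gradient. This is a stand-in for FWI, not a wave-equation solver.

random.seed(0)
n_shots, n_model = 40, 10
m_true = [float(i % 3) for i in range(n_model)]
G = [[random.gauss(0, 1) for _ in range(n_model)] for _ in range(n_shots)]
d = [sum(G[s][j] * m_true[j] for j in range(n_model)) for s in range(n_shots)]

def misfit(m):                      # full data misfit, over all shots
    return sum((sum(G[s][j] * m[j] for j in range(n_model)) - d[s]) ** 2
               for s in range(n_shots))

def smooth3(g):                     # crude stand-in for structure-oriented
    n = len(g)                      # smoothing of the gradient
    return [0.25 * g[max(i - 1, 0)] + 0.5 * g[i] + 0.25 * g[min(i + 1, n - 1)]
            for i in range(n)]

m = [0.0] * n_model
step, batch = 0.01, 8
start = misfit(m)
for it in range(400):
    shots = random.sample(range(n_shots), batch)   # mini-batch of shots
    grad = [sum((sum(G[s][j] * m[j] for j in range(n_model)) - d[s]) * G[s][k]
                for s in shots) for k in range(n_model)]
    grad = smooth3(grad)
    m = [m[k] - step * grad[k] for k in range(n_model)]
end = misfit(m)
```

    Each iteration costs only batch/n_shots of a full-gradient iteration, which is the source of the computational saving claimed for mini-batch FWI.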

  17. The use of the multiwavelet transform for the estimation of surface wave group and phase velocities and their associated uncertainties

    NASA Astrophysics Data System (ADS)

    Poppeliers, C.; Preston, L. A.

    2017-12-01

    Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group and phase velocity, a series of narrow-band filters is applied to surface wave seismograms. Frequency-dependent arrival times of surface waves can then be identified from the resulting suite of narrow-band seismograms, and the frequency-dependent velocity estimates are inverted for subsurface velocity structure. However, this technique provides no means of estimating the uncertainty of the measured surface wave velocities, and consequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized with regard to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group and phase velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of resolved Earth structure. As proof of concept, we apply our technique to four seismic ambient noise correlograms collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group and phase velocities, as well as their uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities obtained by inverting dispersion curves estimated from a conventional Gaussian filter bank.

  18. Fabrication and characterization of optical super-smooth surfaces

    NASA Astrophysics Data System (ADS)

    Schmitt, Dirk-Roger; Kratz, Frank; Ringel, Gabriele A.; Mangelsdorf, Juergen; Creuzet, Francois; Garratt, John D.

    1995-08-01

    Intercomparison roughness measurements have been carried out on supersmooth artefacts fabricated from BK7, fused silica, and Zerodur. The surface parameters were determined using a special prototype of the mechanical profiler Nanostep (Rank Taylor Hobson), the Optical Heterodyne Profiler Z5500 (Zygo), and an Atomic Force Microscope (Park Scientific) with an improved acquisition technique. The intercomparison was performed after the range of collected spatial wavelengths for each instrument was adjusted using digital filtering techniques. It is demonstrated, for different roughness ranges, that the applied superpolishing techniques yield supersmooth artefacts suitable for further intercomparisons.

  19. Thin-film tunable filters for hyperspectral fluorescence microscopy

    PubMed Central

    Favreau, Peter; Hernandez, Clarissa; Lindsey, Ashley Stringfellow; Alvarez, Diego F.; Rich, Thomas; Prabhat, Prashant

    2013-01-01

    Hyperspectral imaging is a powerful tool that acquires data from many spectral bands, forming a contiguous spectrum. Hyperspectral imaging was originally developed for remote sensing applications; however, hyperspectral techniques have since been applied to biological fluorescence imaging applications, such as fluorescence microscopy and small animal fluorescence imaging. The spectral filtering method largely determines the sensitivity and specificity of any hyperspectral imaging system. There are several types of spectral filtering hardware available for microscopy systems, most commonly acousto-optic tunable filters (AOTFs) and liquid crystal tunable filters (LCTFs). These filtering technologies have advantages and disadvantages. Here, we present a novel tunable filter for hyperspectral imaging—the thin-film tunable filter (TFTF). The TFTF presents several advantages over AOTFs and LCTFs, most notably, a high percentage transmission and a high out-of-band optical density (OD). We present a comparison of a TFTF-based hyperspectral microscopy system and a commercially available AOTF-based system. We have characterized the light transmission, wavelength calibration, and OD of both systems, and have then evaluated the capability of each system for discriminating between green fluorescent protein and highly autofluorescent lung tissue. Our results suggest that TFTFs are an alternative approach for hyperspectral filtering that offers improved transmission and out-of-band blocking. These characteristics make TFTFs well suited for other biomedical imaging devices, such as ophthalmoscopes or endoscopes. PMID:24077519

  20. High-resolution chromatography/time-of-flight MSE with in silico data mining is an information-rich approach to reactive metabolite screening.

    PubMed

    Barbara, Joanna E; Castro-Perez, Jose M

    2011-10-30

    Electrophilic reactive metabolite screening by liquid chromatography/mass spectrometry (LC/MS) is commonly performed during drug discovery and early-stage drug development. Accurate mass spectrometry has excellent utility in this application, but sophisticated data processing strategies are essential to extract useful information. Herein, a unified approach to glutathione (GSH) trapped reactive metabolite screening with high-resolution LC/TOF MS(E) analysis and drug-conjugate-specific in silico data processing was applied to rapid analysis of test compounds without the need for stable- or radio-isotope-labeled trapping agents. Accurate mass defect filtering (MDF) with a C-heteroatom dealkylation algorithm dynamic with mass range was compared to linear MDF and shown to minimize false positive results. MS(E) data-filtering, time-alignment and data mining post-acquisition enabled detection of 53 GSH conjugates overall formed from 5 drugs. Automated comparison of sample and control data in conjunction with the mass defect filter enabled detection of several conjugates that were not evident with mass defect filtering alone. High- and low-energy MS(E) data were time-aligned to generate in silico product ion spectra which were successfully applied to structural elucidation of detected GSH conjugates. Pseudo neutral loss and precursor ion chromatograms derived post-acquisition demonstrated 50.9% potential coverage, at best, of the detected conjugates by any individual precursor or neutral loss scan type. In contrast with commonly applied neutral loss and precursor-based techniques, the unified method has the advantage of applicability across different classes of GSH conjugates. The unified method was also successfully applied to cyanide trapping analysis and has potential for application to alternate trapping agents. Copyright © 2011 John Wiley & Sons, Ltd.
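    The mass defect filtering (MDF) step can be illustrated simply: candidate ions are retained when their mass defect (the fractional part of the m/z value) falls within a window around the defect expected for a conjugate of the parent drug. The parent mass, peak list, and ±40 mDa fixed window below are illustrative; the paper uses a dynamic, mass-dependent window, and +305.0682 Da is a commonly used glutathione adduct shift.

```python
# Sketch of mass defect filtering for GSH-conjugate screening.
# Parent mass, peaks and window are hypothetical example values.

def mass_defect(mz):
    """Fractional part of a monoisotopic mass (defect relative to the
    nearest integer nominal mass)."""
    return mz - round(mz)

def mdf_filter(peaks, parent_mass, gsh_shift=305.0682, window=0.040):
    """Keep peaks whose defect is within `window` Da of the defect of the
    expected parent + GSH conjugate. A dynamic (mass-dependent) window,
    as used in the paper, could replace this fixed one."""
    target = mass_defect(parent_mass + gsh_shift)
    return [mz for mz in peaks if abs(mass_defect(mz) - target) <= window]

# Hypothetical parent of 500.2000 Da and four observed accurate masses:
# two plausible conjugate-like defects, one matrix ion with a far-off defect.
peaks = [805.2682, 805.4510, 612.2705, 806.2711]
hits = mdf_filter(peaks, 500.2000)
```

    The filter removes the isobaric matrix peak at m/z 805.4510 even though its nominal mass matches the expected conjugate, which is the core idea of MDF.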

  1. Measuring User Similarity Using Electric Circuit Analysis: Application to Collaborative Filtering

    PubMed Central

    Yang, Joonhyuk; Kim, Jinwook; Kim, Wonjoon; Kim, Young Hwan

    2012-01-01

    We propose a new technique of measuring user similarity in collaborative filtering using electric circuit analysis. Electric circuit analysis is used to measure the potential differences between nodes on an electric circuit. In this paper, by applying this method to transaction networks comprising users and items, i.e., user–item matrix, and by using the full information about the relationship structure of users in the perspective of item adoption, we overcome the limitations of one-to-one similarity calculation approach, such as the Pearson correlation, Tanimoto coefficient, and Hamming distance, in collaborative filtering. We found that electric circuit analysis can be successfully incorporated into recommender systems and has the potential to significantly enhance predictability, especially when combined with user-based collaborative filtering. We also propose four types of hybrid algorithms that combine the Pearson correlation method and electric circuit analysis. One of the algorithms exceeds the performance of the traditional collaborative filtering by 37.5% at most. This work opens new opportunities for interdisciplinary research between physics and computer science and the development of new recommendation systems. PMID:23145095
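    The circuit analogy can be sketched on a tiny user-item network: treat each adoption as a unit-conductance resistor, build the graph Laplacian, and use the effective resistance between two user nodes as a dissimilarity (lower resistance for users connected through more shared items). The toy data and unit conductances are illustrative assumptions, not the paper's datasets.

```python
# Sketch of circuit-analysis similarity: effective resistance between two
# user nodes of a user-item resistor network. Toy example data.

def solve(A, b):
    """Small dense linear solve (Gauss-Jordan with partial pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def effective_resistance(edges, n_nodes, a, b):
    """Inject unit current at node a, ground node b; the potential at a
    equals the effective resistance between a and b."""
    L = [[0.0] * n_nodes for _ in range(n_nodes)]
    for u, v in edges:                       # unit conductance per adoption
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    keep = [i for i in range(n_nodes) if i != b]      # delete grounded node
    A = [[L[r][c] for c in keep] for r in keep]
    rhs = [1.0 if i == a else 0.0 for i in keep]
    v = solve(A, rhs)
    return v[keep.index(a)]

# Users 0-2 and items 3-6: users 0 and 1 share items 3 and 4;
# users 0 and 2 share only item 5 (item 6 is unique to user 2).
edges = [(0, 3), (1, 3), (0, 4), (1, 4), (0, 5), (2, 5), (2, 6)]
r01 = effective_resistance(edges, 7, 0, 1)
r02 = effective_resistance(edges, 7, 0, 2)
```

    Users 0 and 1, linked through two shared items, have effective resistance 1 (two 2-ohm paths in parallel), while users 0 and 2, with one shared item, have resistance 2 — the similarity ordering the method is built on.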

  3. Speckle reduction of OCT images using an adaptive cluster-based filtering

    NASA Astrophysics Data System (ADS)

    Adabi, Saba; Rashedi, Elaheh; Conforto, Silvia; Mehregan, Darius; Xu, Qiuyun; Nasiriavanaki, Mohammadreza

    2017-02-01

    Optical coherence tomography (OCT) has become a favored tool in dermatology due to its moderate resolution and penetration depth. OCT images, however, contain a grainy pattern, called speckle, due to the broadband source used in the OCT configuration. A variety of filtering techniques has been introduced to reduce speckle in OCT images. Most of these methods are generic and can be applied to OCT images of different tissues. In this paper, we present a method for speckle reduction of OCT skin images. Considering the architectural structure of skin layers, a skin image can benefit from being segmented into distinct clusters and filtered separately within each cluster, using a clustering method together with filtering methods such as the Wiener filter. The proposed algorithm was tested on an optical solid phantom with predetermined optical properties and on healthy skin images. The results show that the cluster-based filtering method can reduce the speckle and increase the signal-to-noise ratio and contrast while preserving the edges in the image.
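    The cluster-then-filter idea can be sketched in 1-D: k-means on intensity separates samples into clusters (stand-ins for skin layers), and a Wiener-style shrinkage toward each cluster's mean is applied inside that cluster only. The two-layer synthetic signal, noise level, and two-cluster choice are illustrative assumptions, not the paper's pipeline.

```python
import random

# Sketch of adaptive cluster-based filtering on a synthetic two-layer scan.

random.seed(1)

def kmeans1d(vals, k=2, iters=20):
    cents = [min(vals), max(vals)][:k]
    lab = [0] * len(vals)
    for _ in range(iters):
        lab = [min(range(k), key=lambda c: abs(v - cents[c])) for v in vals]
        for c in range(k):
            members = [v for v, l in zip(vals, lab) if l == c]
            if members:
                cents[c] = sum(members) / len(members)
    return lab, cents

def cluster_wiener(vals, noise_var=0.01):
    """Wiener-style shrinkage toward the cluster mean, per cluster.
    noise_var is an assumed speckle noise level."""
    lab, _ = kmeans1d(vals)
    out = []
    for v, l in zip(vals, lab):
        members = [x for x, m in zip(vals, lab) if m == l]
        mu = sum(members) / len(members)
        var = sum((x - mu) ** 2 for x in members) / len(members)
        gain = max(0.0, var - noise_var) / var if var > 0 else 0.0
        out.append(mu + gain * (v - mu))      # shrink toward cluster mean
    return out

# Two-layer "A-scan": dark layer ~0.2, bright layer ~0.8, speckle-like noise.
scan = ([0.2 + random.gauss(0, 0.05) for _ in range(50)] +
        [0.8 + random.gauss(0, 0.05) for _ in range(50)])
den = cluster_wiener(scan)
```

    Because filtering never mixes samples across the cluster (layer) boundary, the step between the two layers is preserved while the within-layer fluctuation is suppressed.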

  4. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
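    The core DCT-domain operation can be shown in 1-D: transform, hard-threshold small (noise-dominated) coefficients, and invert. This is a generic sketch with an orthonormal DCT-II/DCT-III pair, not the authors' 3-D implementation, which would stack correlated channels into blocks before transforming.

```python
import math

# Pure-Python orthonormal DCT-II and its inverse (DCT-III), plus a hard
# threshold on small coefficients: a 1-D sketch of DCT-domain denoising.

def dct(x):
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        out.append(s * math.sqrt((1.0 if k == 0 else 2.0) / n))
    return out

def idct(c):
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] / math.sqrt(n)
        s += sum(c[k] * math.sqrt(2.0 / n) *
                 math.cos(math.pi * (i + 0.5) * k / n) for k in range(1, n))
        out.append(s)
    return out

def denoise(x, thr):
    """Zero out DCT coefficients below thr, then invert."""
    return idct([c if abs(c) >= thr else 0.0 for c in dct(x)])

sig = [math.sin(2 * math.pi * i / 16) for i in range(16)]
rt = idct(dct(sig))          # round trip without thresholding
```

    With the orthonormal scaling, the transform pair is exactly invertible, so any change after thresholding is attributable to the discarded (presumed noise) coefficients.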

  5. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  6. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    PubMed

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
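    The convolution (filtered back-projection) technique can be sketched for a phantom whose parallel projections are known in closed form: filter each projection with a spatial-domain ramp (Ram-Lak) kernel, then back-project over all view angles. The centered-disk phantom, grid sizes, and the generic Ram-Lak kernel are illustrative; the paper's filter was a custom design balancing resolution and noise.

```python
import math

# Sketch of convolution back-projection on an analytic disk phantom.
# All discretization parameters are illustrative.

r, tau, ns, na = 0.5, 0.05, 41, 60   # disk radius, detector spacing,
                                     # samples per view, number of views

def proj(s):
    """Analytic parallel projection of a unit-density centered disk
    (identical for every view angle)."""
    return 2.0 * math.sqrt(max(r * r - s * s, 0.0))

def ramlak(n):
    """Discrete spatial-domain Ram-Lak (ramp) filter kernel."""
    if n == 0:
        return 1.0 / (4.0 * tau * tau)
    return 0.0 if n % 2 == 0 else -1.0 / (math.pi ** 2 * n * n * tau * tau)

det = [(i - ns // 2) * tau for i in range(ns)]
p = [proj(s) for s in det]
q = [tau * sum(p[j] * ramlak(i - j) for j in range(ns)) for i in range(ns)]

def recon(x, y):
    """Back-project the filtered projection over na angles in [0, pi)."""
    acc = 0.0
    for a in range(na):
        th = math.pi * a / na
        t = (x * math.cos(th) + y * math.sin(th)) / tau + ns // 2
        i0 = int(math.floor(t))
        if 0 <= i0 < ns - 1:              # linear interpolation on detector
            acc += q[i0] * (i0 + 1 - t) + q[i0 + 1] * (t - i0)
    return acc * math.pi / na

center = recon(0.0, 0.0)       # inside the disk: density near 1
outside = recon(0.9, 0.0)      # outside the disk: density near 0
```

    The negative side lobes of the filtered projections are what cancel the back-projection blur outside the object.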

  7. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care, they may be blurred and in some cases mostly illegible, as in the case presented here, and their classification and comparison becomes nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  8. Satellite retrievals of Karenia brevis harmful algal blooms in the West Florida shelf using neural networks and impacts of temporal variabilities

    NASA Astrophysics Data System (ADS)

    El-Habashi, Ahmed; Duran, Claudia M.; Lovko, Vincent; Tomlinson, Michelle C.; Stumpf, Richard P.; Ahmed, Sam

    2017-07-01

    We apply a neural network (NN) technique to detect and track Karenia brevis harmful algal blooms (KB HABs) plaguing West Florida shelf (WFS) coasts from Visible-Infrared Imaging Radiometer Suite (VIIRS) satellite observations. Previously, KB HABs detection relied primarily on the Moderate Resolution Imaging Spectroradiometer Aqua (MODIS-A) satellite, depending on its remote sensing reflectance signal at the 678-nm chlorophyll fluorescence band (Rrs678), which is needed for normalized fluorescence height and related red band difference retrieval algorithms. VIIRS, MODIS-A's successor, does not have a 678-nm channel. Instead, our NN uses Rrs at the 486-, 551-, and 671-nm VIIRS channels to retrieve phytoplankton absorption at 443 nm (a). The retrieved a images are next filtered by applying limits, defined by (i) low Rrs551-nm backscatter and (ii) a minimum a value associated with KB HABs. The filtered residual images are then converted to show chlorophyll-a concentrations [Chla] and KB cell counts. VIIRS retrievals using our NN and five other retrieval algorithms were compared and evaluated against numerous in situ measurements made over the four-year 2012 to 2016 period for which VIIRS data are available. These comparisons confirm the viability and higher retrieval accuracy of the NN technique, when combined with the filtering constraints, for effective detection of KB HABs. Analysis of these results as well as sequential satellite observations and recent field measurements underlines the importance of short-term temporal variabilities for retrieval accuracy.

  9. H(infinity)/H(2)/Kalman filtering of linear dynamical systems via variational techniques with applications to target tracking

    NASA Astrophysics Data System (ADS)

    Rawicz, Paul Lawrence

    In this thesis, the similarities between the structures of the H∞, H2, and Kalman filters are examined. The filters used in this examination have been derived through duality to the full-information controller. In addition, a direct variation-of-parameters derivation of the H∞ filter is presented for both continuous and discrete time (scalar case). Direct and controller-dual derivations using differential games exist in the literature and also employ variational techniques. Using a variational, rather than a differential-games, viewpoint has resulted in a simple relationship between the Riccati equations that arise from the derivation and the results of the Bounded Real Lemma. This same relation has previously been found in the literature and used to relate the Riccati inequality for linear systems to the Hamilton-Jacobi inequality for nonlinear systems when implementing the H∞ controller. The H∞, H2, and Kalman filters are applied to the two-state target tracking problem. In continuous time, closed-form analytic expressions for the trackers and their performance are determined. To evaluate the trackers using a neutral, realistic criterion, the probability of target escape is developed: that is, the probability that the target position error is such that the target lies outside the radar beam width, resulting in a loss of measurement. In discrete time, a numerical example using the probability of target escape is presented to illustrate the differences in tracker performance.

  10. Llama heavy-chain antibody fragments efficiently remove toxic shock syndrome toxin 1 from plasma in vitro but not in experimental porcine septic shock.

    PubMed

    Brummelhuis, Walter J; Joles, Jaap A; Stam, Jord C; Adams, Hendrik; Goldschmeding, Roel; Detmers, Frank J; El Khattabi, Mohamed; Maassen, Bram T; Verrips, C Theo; Braam, Branko

    2010-08-01

    Staphylococcus aureus produces the superantigen toxic shock syndrome toxin 1 (TSST-1). When the bacterium invades the human circulation, this toxin can induce life-threatening gram-positive sepsis. Current sepsis treatment does not remove bacterial toxins. Variable domains of llama heavy-chain antibodies (VHH) against toxic shock syndrome toxin 1 (α-TSST-1 VHH) were previously found to be effective in vitro. We hypothesized that removing TSST-1 with α-TSST-1 VHH hemofiltration filters would ameliorate experimental sepsis in pigs. After assessing in vitro whether timely removal of TSST-1 interrupted TSST-1-induced mononuclear cell TNF-α production, VHH-coated filters were applied in a porcine sepsis model. Clinical course, survival, plasma interferon-γ, and TSST-1 levels were similar with and without VHH-coated filters, as were TSST-1 concentrations before and after the VHH filter. Plasma TSST-1 levels were much lower than anticipated from the distribution of the amount of infused TSST-1, suggesting compartmentalization to a space, or adhesion to a surface, not accessible to hemofiltration or pheresis techniques. Removing TSST-1 from plasma was feasible in vitro. However, the α-TSST-1 VHH adsorption-filter-based technique was ineffective in vivo, indicating that improvement of VHH-based hemofiltration is required. Sequestration likely prevented adequate removal of TSST-1, which warrants further investigation of TSST-1 distribution and clearance in vivo.

  11. Life Cycle Assessment | National Agricultural Library

    Science.gov Websites


  12. A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhu, Shi-Jiao

    There is a significant need to automatically detect and extract graffito advertisements embedded in real estate images. However, separating the advertisement region is difficult because housing images generally have complex backgrounds. In this paper, a detection algorithm that uses multi-scale Gabor filters to identify graffito regions is proposed. First, multi-scale Gabor filters with different orientations are applied to the housing images; the approach then uses the resulting frequency data to find likely graffito regions from the relationships between the different channels, exploiting the complementary responses of the different filters to solve the detection problem with low computational effort. Finally, the method is tested on several real estate images with embedded graffito advertisements to verify its robustness and efficiency. The experiments demonstrate that graffito regions can be detected quite well.

  13. Treatment of late time instabilities in finite-difference EMP scattering codes

    NASA Astrophysics Data System (ADS)

    Simpson, L. T.; Holland, R.; Arman, S.

    1982-12-01

    Constraints applicable to a finite-difference mesh for the solution of Maxwell's equations are defined. The equations are applied in the time domain for computing electromagnetic coupling to complex structures, e.g., rectangular, cylindrical, or spherical ones. In a spatially varying grid, high-frequency waves can grow exponentially at late times through multiple reflections from the outer boundary, until the numerical noise exceeds the real signal. The correction technique employs an absorbing surface and a radiating boundary, along with tailored selection of the grid mesh size. High-frequency noise is removed with a low-pass digital filter, a linear least-squares fit is made to the low-frequency filtered response, and the original, filtered, and fitted data are merged to preserve the high-frequency early-time response.
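    The filter-fit-merge cleanup can be sketched as follows: a moving-average low-pass removes the high-frequency numerical noise, a least-squares line is fitted to the filtered late-time response, and the fit replaces the noisy tail while the raw early-time data are kept. The signal model (linear late-time drift plus a fast oscillation), window length, and split point are illustrative assumptions.

```python
import math

# Sketch of late-time cleanup: low-pass filter, linear least-squares fit to
# the late-time portion, then merge with the unfiltered early-time data.

n, win = 200, 9
# Synthetic "response": slow drift plus high-frequency numerical noise.
sig = [0.5 * t / n + 0.3 * math.sin(2.7 * t) for t in range(n)]

def lowpass(x, w):
    """Simple moving-average low-pass filter (truncated at the ends)."""
    h = w // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - h):i + h + 1]
        out.append(sum(seg) / len(seg))
    return out

def linfit(ts, xs):
    """Least-squares line x = a*t + b."""
    m = len(ts)
    st, sx = sum(ts), sum(xs)
    stt = sum(t * t for t in ts)
    stx = sum(t * x for t, x in zip(ts, xs))
    a = (m * stx - st * sx) / (m * stt - st * st)
    return a, (sx - a * st) / m

filt = lowpass(sig, win)
late = list(range(n // 2, n))                 # "late-time" half of the record
a, b = linfit(late, [filt[t] for t in late])
merged = sig[:n // 2] + [a * t + b for t in late]
```

    The merge keeps the (physical) early-time high-frequency content untouched and suppresses only the late-time growth, mirroring the strategy in the report.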

  14. Modal parameter identification using the log decrement method and band-pass filters

    NASA Astrophysics Data System (ADS)

    Liao, Yabin; Wells, Valana

    2011-10-01

    This paper presents a time-domain technique for identifying modal parameters of test specimens based on the log-decrement method. For lightly damped multi-degree-of-freedom or continuous systems, the conventional method is usually restricted to identification of fundamental-mode parameters only. Implementation of band-pass filters makes it possible for the proposed technique to extract modal information for higher modes. The method has been applied to a polymethyl methacrylate (PMMA) beam for complex modulus identification in the frequency range 10-1100 Hz. Results compare well with those obtained using the least-squares method and with those previously published in the literature. The accuracy of the proposed method was further verified by experiments performed on a QuietSteel specimen with very low damping. The method is simple and fast; it can be used for a quick estimation of the modal parameters or as a complementary approach for validation purposes.
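    The log-decrement step itself can be sketched on a synthetic single-mode decay; with a real multi-mode record, a band-pass filter would first isolate the mode of interest, which is the paper's extension. The 50 Hz frequency and 2% damping below are illustrative values.

```python
import math

# Log-decrement sketch on a synthetic single-mode free decay.
# zeta, wn and dt are illustrative, not the paper's specimen values.

zeta, wn, dt = 0.02, 2 * math.pi * 50.0, 1e-4     # 50 Hz mode, 2% damping
wd = wn * math.sqrt(1 - zeta ** 2)                # damped natural frequency
x = [math.exp(-zeta * wn * i * dt) * math.cos(wd * i * dt)
     for i in range(4000)]

# Successive positive peaks of the decay envelope.
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]

delta = math.log(x[peaks[0]] / x[peaks[1]])       # logarithmic decrement
zeta_est = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
```

    The recovered damping ratio follows from delta = 2*pi*zeta/sqrt(1 - zeta^2); averaging over several peak pairs would reduce the sensitivity to measurement noise.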

  15. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    Crosstalk cancellation systems (CCS) play a vital role in spatial sound reproduction using multichannel loudspeakers. However, the technique has yet to see widespread practical use because of its heavy computational load. To reduce this load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudo-quadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, the range in which human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments comprised two parts: a source-localization test and a sound-quality test. Analysis of variance (ANOVA) was applied to the data to assess the statistical significance of the subjective results. The results indicate that the bandlimited CCS performed comparably to the fullband CCS while reducing the computational load by approximately eighty percent.
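
    The frequency-dependent regularization of the inverse filters can be illustrated with a Tikhonov-regularized matrix inversion per frequency bin; the 2x2 plant, the regularization profile, and the band split below are invented for this sketch and are not the authors' design:

```python
import numpy as np

def ccs_inverse_filters(H, beta):
    """Regularized inversion C[f] = (H^H H + beta[f] I)^-1 H^H of the 2x2
    loudspeaker-to-ear transfer matrix at each frequency bin, with a
    frequency-dependent regularization weight beta[f] (a sketch only)."""
    C = np.zeros_like(H)
    I = np.eye(2)
    for f in range(H.shape[0]):
        Hf = H[f]
        C[f] = np.linalg.solve(Hf.conj().T @ Hf + beta[f] * I, Hf.conj().T)
    return C

# demo: well-conditioned random plant; heavier regularization in the upper band
rng = np.random.default_rng(0)
F = 64
H = np.eye(2) + 0.3 * (rng.standard_normal((F, 2, 2))
                       + 1j * rng.standard_normal((F, 2, 2)))
beta = np.where(np.arange(F) < 32, 1e-4, 1e-1)   # "bandlimited" emphasis
C = ccs_inverse_filters(H, beta)
err = np.linalg.norm(H[0] @ C[0] - np.eye(2))    # near-perfect inversion below band edge
```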

  16. Real-Time flare detection using guided filter

    NASA Astrophysics Data System (ADS)

    Lin, Jiaben; Deng, Yuanyong; Yuan, Fei; Guo, Juan

    2017-04-01

    A procedure is introduced for the automatic detection of solar flares using full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter, applied here to astronomical image detection for the first time, is then adopted to enhance the edges of flares and suppress solar limb darkening. Flares are then detected by a modified Otsu algorithm and a further threshold-processing technique. Compared with other automatic detection procedures, the new procedure offers real-time operation and reliability, and requires no image division or local thresholding. It also greatly reduces the amount of computation, owing to the efficient guided-filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the results show that the number of flares detected by our procedure agrees well with manual detection.
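
    The median-filter preprocessing and Otsu thresholding steps can be sketched as below (the guided-filter edge enhancement is omitted); the synthetic image and all parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import median_filter

def otsu_threshold(img, nbins=256):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)       # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(sigma_b)]

# demo: bright "flare" patch on a darker disk, impulsive noise removed first
img = np.full((64, 64), 0.2)
img[20:30, 20:30] = 0.9                       # flare region
rng = np.random.default_rng(1)
img[rng.random(img.shape) < 0.02] = 1.0       # salt noise
den = median_filter(img, size=3)              # preprocessing step
mask = den > otsu_threshold(den)              # detected flare pixels
```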

  17. Automatic detection of solar features in HSOS full-disk solar images using guided filter

    NASA Astrophysics Data System (ADS)

    Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang

    2018-02-01

    A procedure is introduced for the automatic detection of solar features using full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter, applied here to astronomical target detection for the first time, is adopted to enhance the edges of solar features and suppress solar limb darkening. Specific features are then detected by the Otsu algorithm and a further threshold-processing technique. Compared with other automatic detection procedures, our procedure offers real-time operation and reliability, and requires no local thresholding. It also greatly reduces the amount of computation, owing to the efficient guided-filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the results show that the number of features detected by our procedure agrees well with manual detection.

  18. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering inter-channel correlation is proposed for denoising a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed, considering both the intra-channel and inter-channel correlation, to overcome the spatial resolution degradation caused by the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove noise in the temporal domain. A motion-adaptive detection value then controls the ratio of the spatial filter to the temporal filter, so that the denoised CFA sequence is obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. The results confirm that the proposed framework outperforms the other techniques in terms of objective criteria and subjective visual perception in CFA sequences.

  19. Volatility and leachability of heavy metals and radionuclides in thermally treated HEPA filter media generated from nuclear facilities.

    PubMed

    Yoon, In-Ho; Choi, Wang-Kyu; Lee, Suk-Chol; Min, Byung-Youn; Yang, Hee-Chul; Lee, Kune-Woo

    2012-06-15

    The purpose of the present study was to apply thermal treatment to reduce the volume of HEPA filter media and to investigate the volatility and leachability of heavy metals and radionuclides during the treatment. HEPA filter media were transformed into a glassy bulk material by thermal treatment at 900°C for 2 h. The most abundant heavy metal in the HEPA filter media was Zn, followed by Sr, Pb and Cr, and the main radionuclide was Cs-137. The volatility tests showed that the heavy metals and radionuclides in radioactive HEPA filter media were not volatilized during the thermal treatment. PCT tests indicated that the leachability of heavy metals and radionuclides was relatively low compared with that of other glasses. XRD results showed that Zn and Cs reacted with the HEPA filter media and were transformed into crystalline willemite (ZnO·SiO2) and pollucite (Cs2O·Al2O3·4SiO2), which are neither volatile nor leachable. The proposed technique for the volume reduction and transformation of radioactive HEPA filter media into a glassy bulk material is a simple and energy-efficient procedure that requires no additives and can be performed at a relatively low temperature compared with the conventional vitrification process. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Development of an optimal filter substrate for the identification of small microplastic particles in food by micro-Raman spectroscopy.

    PubMed

    Oßmann, Barbara E; Sarau, George; Schmitt, Sebastian W; Holtmannspötter, Heinrich; Christiansen, Silke H; Dicke, Wilhelm

    2017-06-01

    When analysing microplastics in food, it is important for toxicological reasons to achieve clear identification of particles down to a size of at least 1 μm. One reliable optical analytical technique allowing this is micro-Raman spectroscopy. After isolation of particles via filtration, analysis is typically performed directly on the filter surface. In order to obtain high-quality Raman spectra, the material of the membrane filter should not contribute background or Raman signals of its own during spectrum acquisition. To facilitate automatic particle detection, membrane filters should also have specific optical properties. In this work, besides eight different commercially available membrane filters, three newly designed metal-coated polycarbonate membrane filters were tested against these requirements. We found that aluminium-coated polycarbonate membrane filters have ideal characteristics as a substrate for micro-Raman spectroscopy. Their spectra show no or minimal interference with particle spectra, depending on the laser wavelength. Furthermore, automatic particle detection can be applied when analysing the filter surface under dark-field illumination. With this new membrane filter, interference-free analysis of microplastics down to a size of 1 μm becomes possible. Thus, an important size class of these contaminants can now be visualized and spectrally identified. Graphical abstract: A newly developed aluminium-coated polycarbonate membrane filter enables automatic particle detection and the generation of high-quality Raman spectra, allowing identification of small microplastics.

  1. A method of incident angle estimation for high resolution spectral recovery in filter-array-based spectrometers

    NASA Astrophysics Data System (ADS)

    Kim, Cheolsun; Lee, Woong-Bi; Ju, Gun Wu; Cho, Jeonghoon; Kim, Seongmin; Oh, Jinkyung; Lim, Dongsung; Lee, Yong Tak; Lee, Heung-No

    2017-02-01

    In recent years, there has been increasing interest in miniature spectrometers for research and development. In particular, filter-array-based spectrometers have the advantages of low cost and portability, and can be applied in various fields such as biology, chemistry and the food industry. Miniaturization of the optical filters degrades spectral resolution because of limitations on the spectral responses and the number of filters. Many studies have reported that filter-array-based spectrometers can recover resolution by using digital signal processing (DSP) techniques. The performance of DSP-based spectral recovery depends strongly on prior knowledge of the transmission functions (TFs) of the filters. The TFs vary with the angle at which light is incident on the filter array. Conventionally, it is assumed that the incident angle is fixed and the TFs are known to the DSP. However, the incident angle varies with the environment and application, and thus the TFs vary as well, which degrades the spectral recovery. In this paper, we propose a method of incident angle estimation (IAE) for high-resolution spectral recovery in filter-array-based spectrometers. By exploiting sparse signal reconstruction via L1-norm minimization, IAE selects, from all candidate incident angles, the one that minimizes the error of the reconstructed signal. Based on IAE, the DSP effectively provides high-resolution spectral recovery in filter-array-based spectrometers.
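
    The selection step can be sketched as below; plain least squares stands in for the paper's L1-norm sparse solver, and the candidate angles, TF matrices, and spectrum are invented for the sketch:

```python
import numpy as np

def estimate_incident_angle(y, tf_bank):
    """Pick the candidate angle whose transmission-function matrix best
    explains the measurement: solve min_x ||y - A x|| for each candidate
    and keep the angle with the smallest residual."""
    residuals = []
    for A in tf_bank:
        x, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        residuals.append(np.linalg.norm(y - A @ x))
    return int(np.argmin(residuals))

rng = np.random.default_rng(2)
angles = [0, 10, 20, 30]                                   # hypothetical candidates (deg)
tf_bank = [rng.standard_normal((40, 20)) for _ in angles]  # one TF matrix per angle
x_true = np.zeros(20)
x_true[[3, 7]] = [1.0, 0.5]                                # sparse spectrum
y = tf_bank[2] @ x_true                                    # light arrived at 20 deg
best = estimate_incident_angle(y, tf_bank)
```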

  2. Application of Data Assimilation with the Root Zone Water Quality Model for Soil Moisture Profile Estimation

    USDA-ARS?s Scientific Manuscript database

    The Ensemble Kalman Filter (EnKF), a popular data assimilation technique for non-linear systems, was applied to the Root Zone Water Quality Model. Measured soil moisture data at four different depths (5 cm, 20 cm, 40 cm and 60 cm) from two agricultural fields (AS1 and AS2) in northeastern Indiana were us...
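
    A generic EnKF analysis step (not specific to the Root Zone Water Quality Model) can be sketched as follows; the four-layer state, observation setup, and all values are illustrative:

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, r_var, rng):
    """One EnKF analysis step with perturbed observations: the gain is built
    from ensemble covariances, so no model Jacobian is needed."""
    n, N = ensemble.shape                          # n states, N members
    X = ensemble
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observation-space anomalies
    P_yy = HA @ HA.T / (N - 1) + r_var * np.eye(H.shape[0])
    P_xy = A @ HA.T / (N - 1)
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    Y = y_obs[:, None] + np.sqrt(r_var) * rng.standard_normal((H.shape[0], N))
    return X + K @ (Y - HX)

# demo: 4 soil-moisture layers, only the top layer (5 cm) observed
rng = np.random.default_rng(3)
truth = np.array([0.30, 0.28, 0.26, 0.25])
ens = truth[:, None] + 0.05 * rng.standard_normal((4, 200))
H = np.array([[1.0, 0.0, 0.0, 0.0]])
post = enkf_update(ens, np.array([truth[0]]), H, 1e-4, rng)
```

    The observed layer is pulled toward the measurement and its ensemble spread shrinks; unobserved layers are adjusted through the ensemble cross-covariances.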

  3. Adaptive Control Using Residual Mode Filters Applied to Wind Turbines

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Balas, Mark J.

    2011-01-01

    Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.

  4. k-filtering applied to Cluster density measurements in the Solar Wind: Early findings

    NASA Astrophysics Data System (ADS)

    Jeska, Lauren; Roberts, Owen; Li, Xing

    2014-05-01

    Studies of solar wind turbulence indicate that a large proportion of the energy is Alfvénic (incompressible) at inertial scales. The properties of the turbulence in the dissipation range are still under debate: while it is widely believed that kinetic Alfvén waves form the dominant component, the constituents of the remaining compressible turbulence are disputed. Using k-filtering, the power can be measured without assuming the validity of Taylor's hypothesis, and its distribution in (ω, k)-space can be determined to assist the identification of weak turbulence components. This technique is applied to Cluster electron density measurements and compared to the power in |B(t)|. As the direct electron density measurements from the WHISPER instrument have a low cadence of only 2.2 s, proxy data derived from the spacecraft potential, measured every 0.2 s by the EFW instrument, are used to extend this study to ion scales.

  5. Beam shaping for cosmetic hair removal

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd E.; Tuttle, Tracie

    2007-09-01

    Beam shaping has the potential to provide comfort to people who require or seek laser-based cosmetic skin procedures. Of immediate interest is the procedure of aesthetic hair removal. Hair removal is performed using a variety of wavelengths from 480 to 1200 nm, by means of filtered xenon flash lamps (pulsed light) or 810 nm diode lasers. These wavelengths are considered the most efficient means available for hair removal applications, but current systems use simple reflector designs and plane filter windows to direct the light to the surface being exposed. Laser hair removal is achieved when these wavelengths are applied to the epidermis at sufficient energy levels. The laser energy is absorbed by the melanin (pigment) in the hair and hair follicle and transformed into heat. This heat drives the coagulation process, which removes the hair and prevents growth of new hair [1]. This paper outlines a beam shaping technique that can be applied to a non-contact hair removal system. Several features of the technique, including beam uniformity and heat dispersion across the operational treatment area, are analyzed. A beam shaper design and its fundamental testing are discussed in detail.

  6. Polarized Light Microscopy

    NASA Technical Reports Server (NTRS)

    Frandsen, Athela F.

    2016-01-01

    Polarized light microscopy (PLM) is a technique which employs the use of polarizing filters to obtain substantial optical property information about the material which is being observed. This information can be combined with other microscopy techniques to confirm or elucidate the identity of an unknown material, determine whether a particular contaminant is present (as with asbestos analysis), or to provide important information that can be used to refine a manufacturing or chemical process. PLM was the major microscopy technique in use for identification of materials for nearly a century since its introduction in 1834 by William Fox Talbot, as other techniques such as SEM (Scanning Electron Microscopy), FTIR (Fourier Transform Infrared spectroscopy), XPD (X-ray Powder Diffraction), and TEM (Transmission Electron Microscopy) had not yet been developed. Today, it is still the only technique approved by the Environmental Protection Agency (EPA) for asbestos analysis, and is often the technique first applied for identification of unknown materials. PLM uses different configurations in order to determine different material properties. With each configuration additional clues can be gathered, leading to a conclusion of material identity. With no polarizing filter, the microscope can be used just as a stereo optical microscope, and view qualities such as morphology, size, and number of phases. With a single polarizing filter (single polars), additional properties can be established, such as pleochroism, individual refractive indices, and dispersion staining. With two polarizing filters (crossed polars), even more can be deduced: isotropy vs. anisotropy, extinction angle, birefringence/degree of birefringence, sign of elongation, and anomalous polarization colors, among others. With the use of PLM many of these properties can be determined in a matter of seconds, even for those who are not highly trained. 
McCrone, a leader in the field of polarized light microscopy, often advised: "If you can't determine a specific optical property of a particle after two minutes, move on to another configuration." Since optical properties can be seen so quickly and easily under polarized light, it is only necessary to spend a maximum of two minutes on a technique to determine a particular property, though often only a few seconds are required.

  7. Velocity navigator for motion compensated thermometry.

    PubMed

    Maier, Florian; Krafft, Axel J; Yung, Joshua P; Stafford, R Jason; Elliott, Andrew; Dillmann, Rüdiger; Semmler, Wolfhard; Bock, Michael

    2012-02-01

    Proton resonance frequency shift thermometry is sensitive to breathing motion, which leads to incorrect phase differences. In this work, a novel velocity-sensitive navigator technique for triggering MR thermometry image acquisition is presented. A segmented echo planar imaging pulse sequence was modified for velocity-triggered temperature mapping. Trigger events were generated when the estimated velocity along the velocity-encoding direction fell below 0.2 cm/s during the slowdown phase. To remove remaining high-frequency spikes from pulsation in real time, a Kalman filter was applied to the velocity navigator data. A phantom experiment with heating and an initial volunteer experiment without heating were performed to show the applicability of the technique; additionally, a breath-hold experiment was conducted for comparison. A temperature rise of ΔT = +37.3°C was seen in the phantom experiment, and a root mean square error (RMSE) of 2.3°C outside the heated region was obtained for periodic motion. In the volunteer experiment, an RMSE of 2.7°C/2.9°C (triggered vs. breath hold) was measured. A novel velocity navigator with real-time Kalman filter postprocessing significantly improves the temperature accuracy over non-triggered acquisitions and appears comparable to a breath-held acquisition. The proposed technique might be clinically applied for monitoring thermal ablations in abdominal organs.
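
    The spike-suppressing Kalman filter and the velocity trigger can be sketched with a scalar random-walk model; the tuning, the breathing-like waveform, and the spike model are illustrative assumptions:

```python
import numpy as np

def kalman_smooth_velocity(v_meas, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter suppressing high-frequency spikes
    (e.g. pulsation) in a velocity-navigator trace (illustrative tuning)."""
    v_est, p = 0.0, 1.0
    out = []
    for z in v_meas:
        p += q                          # predict: process noise inflates variance
        k = p / (p + r)                 # Kalman gain
        v_est += k * (z - v_est)        # measurement update
        p *= (1 - k)
        out.append(v_est)
    return np.array(out)

def trigger_events(v, thresh=0.2):
    """Trigger acquisition when |v| < thresh (cm/s), as in the abstract."""
    return np.abs(v) < thresh

t = np.arange(0, 10, 0.05)
v_true = 2.0 * np.sin(2 * np.pi * 0.25 * t)              # breathing-like motion (cm/s)
rng = np.random.default_rng(4)
spikes = np.where(rng.random(t.size) < 0.05, 3.0, 0.0)   # pulsation spikes
v_filt = kalman_smooth_velocity(v_true + spikes)
events = trigger_events(v_filt)
```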

  8. A Structural and Content-Based Analysis for Web Filtering.

    ERIC Educational Resources Information Center

    Lee, P. Y.; Hui, S. C.; Fong, A. C. M.

    2003-01-01

    Presents an analysis of the distinguishing features of pornographic Web pages so that effective filtering techniques can be developed. Surveys the existing techniques for Web content filtering and describes the implementation of a Web content filtering system that uses an artificial neural network. (Author/LRW)

  9. Surface Plasmon Resonance Evaluation of Colloidal Metal Aerogel Filters

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Sibille, Laurent; Cronise, Raymond J.; Noever, David A.

    1997-01-01

    We have fabricated aerogels containing gold, silver, and platinum nanoparticles for gas catalysis applications. By applying the concept of an average or effective dielectric constant to the heterogeneous interlayer surrounding each particle, we extend the technique of immersion spectroscopy to porous or heterogeneous media. Specifically, we apply the predominant effective medium theories for the determination of the average fractional composition of each component in this inhomogeneous layer. Hence, the surface area of metal available for catalytic gas reaction is determined. The technique is satisfactory for statistically random metal particle distributions but needs further modification for aggregated or surfactant modified systems. Additionally, the kinetics suggest that collective particle interactions in coagulated clusters are perturbed during silica gelation resulting in a change in the aggregate geometry.
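
    One of the standard effective-medium theories the abstract refers to is the Maxwell Garnett mixing rule; a minimal sketch (the permittivity values are illustrative):

```python
def maxwell_garnett(eps_i, eps_m, f):
    """Maxwell Garnett effective dielectric constant for inclusions of
    permittivity eps_i at volume fraction f in a matrix of permittivity
    eps_m: eps_eff = eps_m * (1 + 2L) / (1 - L), L = f*(ei-em)/(ei+2em)."""
    L = f * (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * L) / (1 - L)

# sanity checks: pure matrix at f = 0, pure inclusion at f = 1
eps_matrix_only = maxwell_garnett(5.0, 2.0, 0.0)     # -> 2.0
eps_inclusion_only = maxwell_garnett(5.0, 2.0, 1.0)  # -> 5.0
```

    Fitting such a mixing rule to the measured plasmon shift is what lets the average fractional composition of the interlayer, and hence the exposed metal surface area, be estimated.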

  10. Edge Preserved Speckle Noise Reduction Using Integrated Fuzzy Filters

    PubMed Central

    Dewal, M. L.; Rohit, Manoj Kumar

    2014-01-01

    Echocardiographic images are inherently affected by speckle noise, which makes visual reading and analysis quite difficult. The multiplicative speckle noise masks finer details necessary for diagnosis of abnormalities. A novel speckle reduction technique based on the integration of geometric, Wiener, and fuzzy filters is proposed and analyzed in this paper. The denoising applications of fuzzy filters are studied and analyzed alongside 26 denoising techniques. It is observed that the geometric filter retains noise; to address this issue, a Wiener filter is embedded into the geometric filter during the iteration process. The performance of the geometric-Wiener filter is further enhanced using fuzzy filters, and the proposed despeckling techniques are called integrated fuzzy filters. Fuzzy filters based on the moving average and the median value are employed in the integrated fuzzy filters. The performance of the integrated fuzzy filters is tested on echocardiographic images and synthetic images in terms of image quality metrics. It is observed that the performance parameters are highest for the integrated fuzzy filters in comparison to fuzzy and geometric-fuzzy filters. Clinical validation reveals that the output images obtained using the geometric-Wiener, integrated fuzzy, nonlocal means, and detail-preserving anisotropic diffusion filters are acceptable, and the necessary finer details are retained in the denoised echocardiographic images. PMID:27437499

  11. Effects on Diagnostic Parameters After Removing Additional Synchronous Gear Meshes

    NASA Technical Reports Server (NTRS)

    Decker, Harry J.

    2003-01-01

    Gear cracks are typically difficult to diagnose with sufficient lead time before catastrophic damage occurs; significant damage must be present before algorithms appear able to detect it. Frequently there are multiple gear meshes on a single shaft. Since they are all synchronous with the shaft frequency, the commonly used synchronous averaging technique is ineffective at removing the other gear meshes' effects. Carefully applying a filter to these extraneous gear mesh frequencies can reduce the overall vibration signal and increase the accuracy of commonly used vibration metrics. The vibration signals from three seeded-fault tests were analyzed using this filtering procedure. Both the filtered and unfiltered vibration signals were then analyzed using common fault detection metrics and compared. The tests were conducted on aerospace-quality spur gears in a test rig, at speeds ranging from 2500 to 5000 revolutions per minute and torques from 184 to 228 percent of design load. The inability to detect these cracks with high confidence results from the high loading, which causes fast fracture as opposed to stable crack growth. The results indicate that these techniques do not currently produce an indication of damage that significantly exceeds experimental scatter.
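
    Removing an extraneous synchronous gear-mesh tone can be illustrated with a simple FFT notch; this mask-based stand-in, and all frequencies in the demo, are illustrative rather than the report's procedure:

```python
import numpy as np

def remove_mesh_harmonics(x, fs, mesh_hz, n_harm=3, width_hz=2.0):
    """Notch out an extraneous gear-mesh tone and its harmonics by zeroing
    narrow bands in the spectrum of the (already synchronously averaged)
    signal, leaving the mesh of interest untouched."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    for h in range(1, n_harm + 1):
        X[np.abs(freqs - h * mesh_hz) < width_hz] = 0.0   # notch each harmonic
    return np.fft.irfft(X, n=len(x))

fs = 5000.0
t = np.arange(0, 1, 1 / fs)
wanted = np.sin(2 * np.pi * 123 * t)          # mesh of interest
extra = 0.8 * np.sin(2 * np.pi * 400 * t)     # other mesh on the same shaft
cleaned = remove_mesh_harmonics(wanted + extra, fs, 400.0)
```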

  12. A recursive solution for a fading memory filter derived from Kalman filter theory

    NASA Technical Reports Server (NTRS)

    Statman, J. I.

    1986-01-01

    A simple recursive solution for a class of fading memory tracking filters is presented. A fading memory filter provides estimates of filter states based on past measurements, similar to a traditional Kalman filter. Unlike a Kalman filter, an exponentially decaying weight is applied to older measurements, discounting their effect on present state estimates. It is shown that Kalman filters and fading memory filters are closely related solutions to a general least squares estimator problem. Closed form filter transfer functions are derived for a time invariant, steady state, fading memory filter. These can be applied in loop filter implementation of the Deep Space Network (DSN) Advanced Receiver carrier phase locked loop (PLL).
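
    The fading-memory idea can be illustrated with its simplest scalar form, an exponentially weighted recursive average; this is a sketch of the concept, not the DSN loop-filter design:

```python
import numpy as np

def fading_memory_estimate(zs, lam=0.9):
    """Recursive exponentially weighted average: each new measurement is
    blended with the running estimate, so a measurement k steps old carries
    weight proportional to lam**k (the 'fading memory')."""
    est = zs[0]
    for z in zs[1:]:
        est = lam * est + (1 - lam) * z
    return est

# demo: the estimate tracks a level change instead of averaging over all history
rng = np.random.default_rng(5)
steady = 1.0 + 0.01 * rng.standard_normal(200)
jump = 5.0 + 0.01 * rng.standard_normal(50)
est_all = fading_memory_estimate(np.concatenate([steady, jump]))
```

    A uniformly weighted average of the same record would sit near 1.8; the fading-memory estimate has essentially forgotten the old level after 50 samples.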

  13. Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2011-07-01

    A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescence imaging and simple image processing techniques. In particular, we apply simple image thresholding, blob filtering, and image subtraction to either a 545 nm or a 575 nm image in order to distinguish our desired Thai jasmine rice breed from others. Other key advantages include no waste product and a fast identification time. In our demonstration, UVC light is used as the excitation source, a liquid crystal tunable optical filter serves as the wavelength selector, and a digital camera with 640 × 480 active pixels captures the desired spectral image. Eight Thai rice breeds of similar size and shape are tested. Our experimental proof of concept shows that, by suitably applying image thresholding, blob filtering, and image subtraction to the selected fluorescence image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at 545 and 575 nm, respectively. The measured identification time is a fast 25 ms, showing high potential for real-time applications.
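
    The thresholding and blob-filtering steps can be sketched as below; the synthetic image, threshold, and minimum area are illustrative, not the demonstration's values:

```python
import numpy as np
from scipy import ndimage

def identify_grains(img, thresh, min_area):
    """Threshold the fluorescence image, label connected blobs, and keep
    only blobs at or above a minimum area; returns the blob count and the
    cleaned mask (a sketch of the threshold / blob-filter steps)."""
    raw = img > thresh
    labels, n = ndimage.label(raw)                        # connected components
    sizes = ndimage.sum(raw, labels, range(1, n + 1))     # area of each blob
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return len(keep), np.isin(labels, keep)

img = np.zeros((100, 100))
img[10:20, 10:20] = 1.0      # grain 1 (area 100)
img[50:52, 50:52] = 1.0      # small speck (area 4) to be rejected
img[70:85, 30:40] = 1.0      # grain 2 (area 150)
count, mask = identify_grains(img, 0.5, 50)
```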

  14. JPEG2000-coded image error concealment exploiting convex sets projections.

    PubMed

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first class of errors is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the third is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that negated the advantages. This problem was overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
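
    The alternating projection idea can be sketched in a simplified form; here FFT coefficients stand in for wavelet coefficients, a fixed 3x3 smoother stands in for the adaptive LP filter, and the image and damage model are invented for the sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pocs_conceal(img_corrupt, known_mask, coeffs_known, n_iter=30):
    """Alternate the two operations the paper describes: low-pass filtering
    in the spatial domain, then restoring the trusted (uncorrupted)
    transform coefficients in the transform domain."""
    x = img_corrupt.copy()
    for _ in range(n_iter):
        x = uniform_filter(x, size=3)               # spatial-domain smoothing
        X = np.fft.fft2(x)
        X[known_mask] = coeffs_known[known_mask]    # restore trusted coefficients
        x = np.real(np.fft.ifft2(X))
    return x

# demo: damage the high-frequency coefficients of a smooth image
n = 32
yy, xx = np.mgrid[0:n, 0:n]
img = np.sin(2 * np.pi * xx / n) + 0.5 * np.cos(2 * np.pi * yy / n)
F = np.fft.fft2(img)
known = np.zeros((n, n), bool)
known[:4, :4] = known[:4, -4:] = known[-4:, :4] = known[-4:, -4:] = True
F_damaged = F.copy()
F_damaged[~known] += 20 * np.random.default_rng(6).standard_normal((n, n))[~known]
damaged = np.real(np.fft.ifft2(F_damaged))
restored = pocs_conceal(damaged, known, F)
```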

  15. Development of multiple source data processing for structural analysis at a regional scale. [digital remote sensing in geology

    NASA Technical Reports Server (NTRS)

    Carrere, Veronique

    1990-01-01

    Various image processing techniques developed for enhancement and extraction of linear features, of interest to the structural geologist, from digital remote sensing, geologic, and gravity data, are presented. These techniques include: (1) automatic detection of linear features and construction of rose diagrams from Landsat MSS data; (2) enhancement of principal structural directions using selective filters on Landsat MSS, Spacelab panchromatic, and HCMM NIR data; (3) directional filtering of Spacelab panchromatic data using Fast Fourier Transform; (4) detection of linear/elongated zones of high thermal gradient from thermal infrared data; and (5) extraction of strong gravimetric gradients from digitized Bouguer anomaly maps. Processing results can be compared to each other through the use of a geocoded database to evaluate the structural importance of each lineament according to its depth: superficial structures in the sedimentary cover, or deeper ones affecting the basement. These image processing techniques were successfully applied to achieve a better understanding of the transition between Provence and the Pyrenees structural blocks, in southeastern France, for an improved structural interpretation of the Mediterranean region.
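
    Directional filtering in the Fourier domain, as in step (3), can be sketched with a simple wedge mask; the wedge width and the stripe demo are illustrative assumptions:

```python
import numpy as np

def directional_filter(img, angle_deg, half_width_deg=15):
    """Keep only spatial frequencies whose orientation lies in a wedge
    around angle_deg (a minimal FFT wedge filter for enhancing lineaments
    of one structural direction)."""
    n, m = img.shape
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(m)[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0       # orientation of each bin
    d = np.abs(theta - angle_deg % 180.0)
    d = np.minimum(d, 180.0 - d)                         # wrap-around distance
    mask = (d <= half_width_deg) | ((fx == 0) & (fy == 0))  # keep wedge + DC
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# demo: vertical stripes carry horizontal-frequency content (theta ~ 0 deg)
n = 64
xx = np.arange(n)[None, :] * np.ones((n, 1))
stripes_v = np.sin(2 * np.pi * xx / 8)
out_keep = directional_filter(stripes_v, 0)    # wedge aligned: stripes pass
out_kill = directional_filter(stripes_v, 90)   # wedge orthogonal: stripes removed
```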

  16. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

    The Dante is an 18-channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
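
    The Monte Carlo parameter variation can be sketched generically; the weighted-sum "unfold" below is a stand-in for the real unfold algorithm, and all numbers are illustrative:

```python
import numpy as np

def mc_flux_uncertainty(volts, unfold, cal_sigma, n_trials=1000, seed=0):
    """Monte Carlo parameter variation: perturb each channel's voltage by
    its one-sigma calibration/unfold error, re-run the unfold, and take
    statistics over the resulting set of fluxes."""
    rng = np.random.default_rng(seed)
    fluxes = np.empty(n_trials)
    for i in range(n_trials):
        trial = volts * (1 + cal_sigma * rng.standard_normal(volts.size))
        fluxes[i] = unfold(trial)                 # one perturbed unfold
    return fluxes.mean(), fluxes.std()

# stand-in "unfold": total flux as a weighted sum over the 18 channels
weights = np.linspace(1.0, 2.0, 18)
unfold = lambda v: float(weights @ v)
volts = np.ones(18)
mean_flux, err_flux = mc_flux_uncertainty(volts, unfold, cal_sigma=0.05)
```

    The spread of the one thousand trial fluxes directly provides the error bar on the measurement.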

  17. Uncertainty Analysis Technique for OMEGA Dante Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M J; Widmann, K; Sorce, C

    2010-05-07

    The Dante is an 18-channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  18. Enhancement of Directional Ambiguity Removal Skill in Scatterometer Data Processing Using Planetary Boundary Layer Models

    NASA Technical Reports Server (NTRS)

    Kim, Young-Joon; Pak, Kyung S.; Dunbar, R. Scott; Hsiao, S. Vincent; Callahan, Philip S.

    2000-01-01

    Planetary boundary layer (PBL) models are utilized to enhance directional ambiguity removal skill in scatterometer data processing. The ambiguity in wind direction retrieved from scatterometer measurements is removed with the aid of physical directional information obtained from PBL models. This technique is based on the observation that sea level pressure is scalar and its field is more coherent than the corresponding wind. An initial wind field obtained from the scatterometer measurements is used to derive a pressure field with a PBL model. After filtering small-scale noise in the derived pressure field, a wind field is generated with an inverted PBL model. This derived wind information is then used to remove wind vector ambiguities in the scatterometer data. It is found that the ambiguity removal skill can be improved when the new technique is used properly in conjunction with the median filter being used for scatterometer wind dealiasing at JPL. The new technique is applied to regions of cyclone systems which are important for accurate weather prediction but where the errors of ambiguity removal are often large.

  19. Interface Engineering to Create a Strong Spin Filter Contact to Silicon

    NASA Astrophysics Data System (ADS)

    Caspers, C.; Gloskovskii, A.; Gorgoi, M.; Besson, C.; Luysberg, M.; Rushchanskii, K. Z.; Ležaić, M.; Fadley, C. S.; Drube, W.; Müller, M.

    2016-03-01

    Integrating epitaxial and ferromagnetic europium oxide (EuO) directly on silicon is a perfect route to enrich silicon nanotechnology with spin filter functionality. To date, the inherent chemical reactivity between EuO and Si has prevented heteroepitaxial integration without significant contamination of the interface with Eu silicides and Si oxides. We present a solution to this long-standing problem by applying two complementary passivation techniques to the reactive EuO/Si interface, without using any additional buffer layers: (i) in situ hydrogen passivation of Si (001) and (ii) the application of oxygen-protective Eu monolayers. By careful chemical depth profiling of the oxide-semiconductor interface via hard x-ray photoemission spectroscopy, we show how to systematically minimize both Eu silicide and Si oxide formation to the sub-monolayer regime, and how to ultimately interface-engineer chemically clean, heteroepitaxial, and ferromagnetic EuO/Si (001) in order to create a strong spin filter contact to silicon.

  20. 56Fe capture cross section experiments at the RPI LINAC Center

    NASA Astrophysics Data System (ADS)

    McDermott, Brian; Blain, Ezekiel; Thompson, Nicholas; Weltz, Adam; Youmans, Amanda; Danon, Yaron; Barry, Devin; Block, Robert; Daskalakis, Adam; Epping, Brian; Leinweber, Gregory; Rapp, Michael

    2017-09-01

    A new array of C6D6 detectors installed at the RPI LINAC Center has enabled the measurement of neutron capture cross sections above the 847 keV inelastic scattering threshold of 56Fe through the use of digital post-processing filters and pulse-integral discriminators, without sacrificing the statistical quality of data at lower incident neutron energies where such filtering is unnecessary. The C6D6 detectors were used to perform time-of-flight capture cross section measurements on a sample of 99.87% enriched iron-56. The total-energy method, combined with the pulse height weighting technique, was then applied to the raw data to determine the energy-dependent capture yield. Above the inelastic threshold, the data were analyzed with a pulse-integral filter to reveal the capture signal, extending the full data set to 2 MeV.

  1. Solar generated quasi-biennial geomagnetic variation

    NASA Technical Reports Server (NTRS)

    Sugiura, M.; Poros, D. J.

    1977-01-01

    The existence of highly correlated quasi-biennial variations in the geomagnetic field and in solar activity is demonstrated. The analysis uses a numerical filter technique applied to monthly averages of the geomagnetic horizontal component and of the Zurich relative sunspot number. Striking correlations are found between the quasi-biennial geomagnetic variations determined from several magnetic observatories located at widely different longitudes, indicating the worldwide nature of the variation. The correlation coefficient between the filtered Dst index and the filtered relative sunspot number is found to be -0.79, at a confidence level greater than 99%, with a time lag of 4 months, solar activity preceding the Dst variation. The correlation between the unfiltered Dst and sunspot-number data is also high, with a similar time lag. Such a time lag has not been discussed in the literature, and a further study is required to establish the mode of the sun-earth relationship that gives this time delay.
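
    The lagged-correlation step behind the -0.79 result can be illustrated with a small pure-Python sketch: correlate a filtered driver series (sunspot number) against a response series (Dst) delayed by 0 to 12 months and keep the lag with the largest-magnitude coefficient. The sinusoidal test series below are synthetic stand-ins, not the observatory data:

```python
import math
from statistics import mean, pstdev

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def best_lag(driver, response, max_lag=12):
    """Correlate response against driver delayed by 0..max_lag samples;
    return the (lag, r) pair with the largest |r|."""
    results = []
    for lag in range(max_lag + 1):
        n = len(response) - lag
        results.append((lag, pearson(driver[:n], response[lag:lag + n])))
    return max(results, key=lambda t: abs(t[1]))

# Synthetic quasi-biennial (26-month) driver; the response is anti-correlated
# with the driver and delayed by 4 months, mimicking Dst vs. sunspot number
driver = [math.sin(2 * math.pi * t / 26) for t in range(134)]
response = [-math.sin(2 * math.pi * (t - 4) / 26) for t in range(134)]
lag, r = best_lag(driver, response)
```
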

  2. On the use of distributed sensing in control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Ghosh, Dave

    1990-01-01

    Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The required computation of the distributed operators given the conventional Kalman filter and regulator is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.

  3. Applying machine-learning techniques to Twitter data for automatic hazard-event classification.

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Bee, E. J.; Diaz-Doce, D.; Poole, J., Sr.; Singh, A.

    2017-12-01

    The constant flow of information offered by tweets provides valuable information about all sorts of events at high temporal and spatial resolution. Over the past year we have been analyzing geological hazards/phenomena such as earthquakes, volcanic eruptions, landslides, floods or the aurora in real time, as part of the GeoSocial project, by geo-locating tweets filtered by keywords in a web map. However, not all the filtered tweets are related to hazard/phenomenon events. This work explores two classification techniques for automatic hazard-event categorization based on tweets about the "Aurora". First, tweets were filtered using aurora-related keywords, removing stop words and selecting the ones written in English. To classify the remaining tweets into "aurora-event" or "no-aurora-event" categories, we compared two state-of-the-art techniques: Support Vector Machine (SVM) and Deep Convolutional Neural Network (CNN) algorithms. Both approaches belong to the family of supervised learning algorithms, which make predictions based on a labelled training dataset. Therefore, we created a training dataset by labelling 1200 tweets with the two categories. The general form of SVM separates two classes by a function (kernel). We compared the performance of four different kernels (Linear Regression, Logistic Regression, Multinomial Naïve Bayes and Stochastic Gradient Descent) provided by the Scikit-Learn library, using our training dataset to build the SVM classifier. The results showed that Logistic Regression (LR) achieves the best accuracy (87%), so we selected the SVM-LR classifier to categorize a large collection of tweets using the "dispel4py" framework. Later, we developed a CNN classifier, where the first layer embeds words into low-dimensional vectors. The next layer performs convolutions over the embedded word vectors. Results from the convolutional layer are max-pooled into a long feature vector, which is classified using a softmax layer. The CNN's accuracy is lower (83%) than that of the SVM-LR, since the algorithm needs a bigger training dataset to increase its accuracy. We used the TensorFlow framework to apply the CNN classifier to the same collection of tweets. In future work we will modify both classifiers to work with other geo-hazards, use larger training datasets, and apply them in real time.
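
    As a concrete illustration of the supervised set-up described above, the sketch below implements one of the named techniques, a Multinomial Naïve Bayes text classifier with Laplace smoothing, in plain Python rather than Scikit-Learn. The six labelled "tweets" are invented toy examples, not the project's 1200-tweet training set:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace smoothing over bag-of-words."""
    classes = sorted(set(labels))
    priors, counts, totals, vocab = {}, {}, {}, set()
    for c in classes:
        class_docs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(class_docs) / len(docs))
        counts[c] = Counter(w for d in class_docs for w in d.split())
        totals[c] = sum(counts[c].values())
        vocab.update(counts[c])
    return priors, counts, totals, vocab

def predict_nb(model, doc):
    priors, counts, totals, vocab = model
    scores = {}
    for c in priors:
        scores[c] = priors[c] + sum(
            math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
            for w in doc.split())
    return max(scores, key=scores.get)

# Invented toy training tweets (the real study labelled 1200 of them)
docs = [
    "amazing aurora dancing over the lake tonight",
    "green aurora visible now look north",
    "aurora borealis forecast strong tonight",
    "my band aurora plays a show tonight",
    "new aurora makeup palette review",
    "watching the movie aurora again",
]
labels = ["aurora-event"] * 3 + ["no-aurora-event"] * 3
model = train_nb(docs, labels)
```
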

  4. An improved method to set significance thresholds for β diversity testing in microbial community comparisons.

    PubMed

    Gülay, Arda; Smets, Barth F

    2015-09-01

    Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.

  5. Energetic ion mass analysis using a radio-frequency quadrupole filter.

    PubMed

    Medley, S S

    1978-06-01

    In conventional applications of the radio-frequency quadrupole mass analyzer, the ion injection energy is usually limited to less than the order of 100 eV due to constraints on the dimensions and power supply of the device. However, requirements often arise, for example in fusion plasma ion diagnostics, for mass analysis of much more energetic ions. A technique easily adaptable to any conventional quadrupole analyzer which circumvents the limitation on injection energy is documented in this paper. Briefly, a retarding potential applied to the pole assembly is shown to facilitate mass analysis of multikiloelectron volt ions without altering the salient characteristics of either the quadrupole filter or the ion beam.

  6. Regenerative particulate filter development

    NASA Technical Reports Server (NTRS)

    Descamp, V. A.; Boex, M. W.; Hussey, M. W.; Larson, T. P.

    1972-01-01

    Development, design, and fabrication of a prototype filter regeneration unit for regenerating clean fluid particle filter elements by using a backflush/jet impingement technique are reported. Development tests were also conducted on a vortex particle separator designed for use in zero gravity environment. A maintainable filter was designed, fabricated and tested that allows filter element replacement without any leakage or spillage of system fluid. Also described are spacecraft fluid system design and filter maintenance techniques with respect to inflight maintenance for the space shuttle and space station.

  7. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses, namely reproducibility across trials, to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
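
    The idea of maximizing trial-to-trial reliability can be sketched as a generalized eigenproblem: find spatial weights whose projection maximizes the covariance of the trial average relative to the average single-trial covariance. This is a simplified stand-in for the paper's Fourier-domain Reliable Components Analysis, demonstrated on invented synthetic data:

```python
import numpy as np

def reliable_components(trials, n_comp=2):
    """trials: (n_trials, n_sensors, n_times). Returns sensor weights
    (n_sensors, n_comp) maximizing the Rayleigh quotient w'Rb w / w'Rw w,
    where Rb is the covariance of the trial average (the reliable part)
    and Rw is the mean single-trial covariance."""
    avg = trials.mean(axis=0)
    Rb = avg @ avg.T / avg.shape[1]
    Rw = np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)
    evals, evecs = np.linalg.eig(np.linalg.solve(Rw, Rb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_comp]]

# Synthetic SSVEP-like data: one reliable sinusoidal source mixed into
# four sensors, plus independent per-trial noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
source = np.sin(2 * np.pi * 8 * t)
mixing = np.array([1.0, 0.5, -0.5, 0.2])
trials = np.stack([np.outer(mixing, source)
                   + 0.5 * rng.standard_normal((4, 200)) for _ in range(20)])
W = reliable_components(trials, n_comp=1)
recovered = W[:, 0] @ trials.mean(axis=0)
r = np.corrcoef(recovered, source)[0, 1]
```

    The top component's time course correlates strongly with the planted source, mirroring the paper's observation that a few reliable components capture most of the trial-to-trial reproducibility.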

  8. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. 
The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
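
    A minimal sketch of the scheduling idea behind the piecewise linear Kalman filter, assuming a single scalar "health" parameter, hypothetical breakpoint/trim/gain numbers, and linear interpolation of a precomputed steady-state Kalman gain over the operating point:

```python
import bisect

# Hypothetical schedule: trim value and steady-state Kalman gain stored
# at each operating-point breakpoint (e.g., normalized engine speed)
BREAKPOINTS = [0.0, 0.5, 1.0]
TRIM = [100.0, 150.0, 220.0]
GAIN = [0.2, 0.35, 0.5]

def interp(x, xs, ys):
    """Piecewise linear interpolation over the schedule."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    f = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + f * (ys[i + 1] - ys[i])

def piecewise_kalman(measurements, op_points, x0=0.0):
    """Track a scalar health parameter as the deviation of the sensed
    value from the scheduled trim, using the scheduled gain."""
    x, estimates = x0, []
    for z, op in zip(measurements, op_points):
        residual = z - (interp(op, BREAKPOINTS, TRIM) + x)
        x = x + interp(op, BREAKPOINTS, GAIN) * residual
        estimates.append(x)
    return estimates

# Sensor reads 5 units above the scheduled trim at a fixed operating
# point of 0.25; the estimate converges to that deviation
estimates = piecewise_kalman([130.0] * 50, [0.25] * 50)
```
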

  9. Diesel particulate filter with zoned resistive heater

    DOEpatents

    Gonze, Eugene V [Pinckney, MI

    2011-03-08

    A diesel particulate filter assembly comprises a diesel particulate filter (DPF) and a heater assembly. The DPF filters a particulate from exhaust produced by an engine. The heater assembly has a first metallic layer that is applied to the DPF, a resistive layer that is applied to the first metallic layer, and a second metallic layer that is applied to the resistive layer. The second metallic layer is etched to form a plurality of zones.

  10. A novel coupling of noise reduction algorithms for particle flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.

    2016-09-15

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase-separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
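
    The hybrid idea can be sketched in a simplified, single-level form: truncate to the leading POD modes via an SVD, then soft-threshold the retained time coefficients in a one-level Haar wavelet basis. (The paper's WAVinPOD uses full wavelet decompositions; the sine-field data below are synthetic.)

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert (signal length must be even)."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2.0)
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2.0)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def pod_wavelet_filter(snapshots, rank, thresh):
    """POD truncation, then wavelet thresholding of the retained time
    coefficients. snapshots: (n_points, n_times)."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    coeffs = np.array([haar_denoise(s[i] * Vt[i], thresh) for i in range(rank)])
    return U[:, :rank] @ coeffs

# Synthetic noisy field: a smooth rank-one mode plus white noise
rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, np.pi, 32)),
                 np.sin(np.linspace(0, 4 * np.pi, 64)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
filtered = pod_wavelet_filter(noisy, rank=2, thresh=0.5)
err_noisy = np.linalg.norm(noisy - clean)
err_filtered = np.linalg.norm(filtered - clean)
```
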

  11. A Kalman Filter Implementation for Precision Improvement in Low-Cost GPS Positioning of Tractors

    PubMed Central

    Gomez-Gil, Jaime; Ruiz-Gonzalez, Ruben; Alonso-Garcia, Sergio; Gomez-Gil, Francisco Javier

    2013-01-01

    Low-cost GPS receivers provide geodetic positioning information using the NMEA protocol, usually with eight digits for latitude and nine digits for longitude. When these geodetic coordinates are converted into Cartesian coordinates, the positions fit in a quantization grid of some decimeters in size, the dimensions of which vary depending on the point of the terrestrial surface. The aim of this study is to reduce the quantization errors of some low-cost GPS receivers by using a Kalman filter. Kinematic tractor model equations were employed to particularize the filter, which was tuned by applying Monte Carlo techniques to eighteen straight trajectories, to select the covariance matrices that produced the lowest Root Mean Square Error in these trajectories. Filter performance was tested by using straight tractor paths, which were either simulated or real trajectories acquired by a GPS receiver. The results show that the filter can reduce the quantization error in distance by around 43%. Moreover, it reduces the standard deviation of the heading by 75%. Data suggest that the proposed filter can satisfactorily preprocess the low-cost GPS receiver data when used in an assistance guidance GPS system for tractors. It could also be useful to smooth tractor GPS trajectories that are sharpened when the tractor moves over rough terrain. PMID:24217355
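
    The tractor filter itself uses multi-state kinematic equations, but the core idea, a constant-velocity Kalman filter smoothing quantized position output, fits in a short sketch. The process/measurement noise values and the 0.25 m quantization grid below are illustrative assumptions, not the paper's Monte-Carlo-tuned covariances:

```python
import math

def kalman_track(zs, dt=1.0, q=0.0005, r=0.05):
    """Constant-velocity Kalman filter for one Cartesian coordinate.
    State: (position, velocity); q: process-noise intensity,
    r: measurement-noise variance (illustrative values)."""
    x, v = zs[0], 0.0
    P = [[r, 0.0], [0.0, 1.0]]
    Q = [[q * dt**3 / 3, q * dt**2 / 2],
         [q * dt**2 / 2, q * dt]]
    out = []
    for z in zs:
        # Predict with the constant-velocity model
        x = x + dt * v
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + Q[0][0]
        p01 = P[0][1] + dt * P[1][1] + Q[0][1]
        p10 = P[1][0] + dt * P[1][1] + Q[1][0]
        p11 = P[1][1] + Q[1][1]
        # Update with the (quantized) position measurement
        s_inn = p00 + r
        k0, k1 = p00 / s_inn, p10 / s_inn
        innovation = z - x
        x, v = x + k0 * innovation, v + k1 * innovation
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        out.append(x)
    return out

# Straight pass at 0.1 m per step; receiver output snaps to a 0.25 m grid
truth = [0.1 * t for t in range(120)]
zs = [math.floor(p / 0.25 + 0.5) * 0.25 for p in truth]
est = kalman_track(zs)
rmse = lambda a, b: math.sqrt(sum((u - w) ** 2 for u, w in zip(a, b)) / len(a))
rmse_raw = rmse(zs[60:], truth[60:])
rmse_filtered = rmse(est[60:], truth[60:])
```

    Once the gains settle, the filter tracks the straight pass with no lag while attenuating the high-frequency sawtooth error introduced by the quantization grid.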

  12. Improvement of sand filter and constructed wetland design using an environmental decision support system.

    PubMed

    Turon, Clàudia; Comas, Joaquim; Torrens, Antonina; Molle, Pascal; Poch, Manel

    2008-01-01

    With the aim of improving effluent quality of waste stabilization ponds, different designs of vertical flow constructed wetlands and intermittent sand filters were tested on an experimental full-scale plant within the framework of a European project. The information extracted from this study was completed and updated with heuristic and bibliographic knowledge. The data and knowledge acquired were difficult to integrate into mathematical models because they involve qualitative information and expert reasoning. Therefore, it was decided to develop an environmental decision support system (EDSS-Filter-Design) as a tool to integrate mathematical models and knowledge-based techniques. This paper describes the development of this support tool, emphasizing the collection of data and knowledge and representation of this information by means of mathematical equations and a rule-based system. The developed support tool provides the main design characteristics of filters: (i) required surface, (ii) media type, and (iii) media depth. These design recommendations are based on wastewater characteristics, applied load, and required treatment level data provided by the user. The results of the EDSS-Filter-Design provide appropriate and useful information and guidelines on how to design filters, according to the expert criteria. The encapsulation of the information into a decision support system reduces the design period and provides a feasible, reasoned, and positively evaluated proposal.

  13. Evaluating privacy-preserving record linkage using cryptographic long-term keys and multibit trees on large medical datasets.

    PubMed

    Brown, Adrian P; Borgs, Christian; Randall, Sean M; Schnell, Rainer

    2017-06-08

    Integrating medical data using databases from different sources by record linkage is a powerful technique increasingly used in medical research. Under many jurisdictions, unique personal identifiers needed for linking the records are unavailable. Since sensitive attributes, such as names, have to be used instead, privacy regulations usually demand encrypting these identifiers. The corresponding set of techniques for privacy-preserving record linkage (PPRL) has received widespread attention. One recent method is based on Bloom filters. Due to superior resilience against cryptographic attacks, composite Bloom filters (cryptographic long-term keys, CLKs) are considered best practice for privacy in PPRL. Real-world performance of these techniques using large-scale data is unknown up to now. Using a large subset of Australian hospital admission data, we tested the performance of an innovative PPRL technique (CLKs using multibit trees) against a gold-standard derived from clear-text probabilistic record linkage. Linkage time and linkage quality (recall, precision and F-measure) were evaluated. Clear text probabilistic linkage resulted in marginally higher precision and recall than CLKs. PPRL required more computing time but 5 million records could still be de-duplicated within one day. However, the PPRL approach required fine tuning of parameters. We argue that increased privacy of PPRL comes with the price of small losses in precision and recall and a large increase in computational burden and setup time. These costs seem to be acceptable in most applied settings, but they have to be considered in the decision to apply PPRL. Further research on the optimal automatic choice of parameters is needed.
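
    The building block behind CLKs, hashing q-grams of several identifier fields into one Bloom filter and comparing filters with a Dice coefficient, can be sketched as below. Real CLKs use keyed HMACs with a secret linkage key; the plain SHA-256/MD5 double hashing here is for illustration only:

```python
import hashlib

def bigrams(text):
    padded = " " + text.lower() + " "
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def clk(fields, m=1000, k=20):
    """Cryptographic long-term key: map the bigrams of every identifier
    field into a single m-bit Bloom filter via k double-hashed positions."""
    bits = [0] * m
    for field in fields:
        for gram in bigrams(field):
            h1 = int(hashlib.sha256(gram.encode()).hexdigest(), 16)
            h2 = int(hashlib.md5(gram.encode()).hexdigest(), 16)
            for i in range(k):
                bits[(h1 + i * h2) % m] = 1
    return bits

def dice(a, b):
    """Dice similarity of two bit vectors."""
    overlap = sum(x & y for x, y in zip(a, b))
    return 2.0 * overlap / (sum(a) + sum(b))

rec_a = clk(["john", "smith"])
rec_b = clk(["jon", "smith"])    # same person, typo in first name
rec_c = clk(["mary", "jones"])   # different person
```

    Because similar names share most of their bigrams, the typo variant stays close to the original in Dice similarity while an unrelated record does not, which is what makes approximate matching on encrypted identifiers possible.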

  14. Optical monitor for observing turbulent flow

    DOEpatents

    Albrecht, Georg F.; Moore, Thomas R.

    1992-01-01

    The present invention provides an apparatus and method for non-invasively monitoring turbulent fluid flows including anisotropic flows. The present invention uses an optical technique to filter out the rays travelling in a straight line, while transmitting rays with turbulence induced fluctuations in time. The output is two dimensional, and can provide data regarding the spectral intensity distribution, or a view of the turbulence in real time. The optical monitor of the present invention comprises a laser that produces a coherent output beam that is directed through a fluid flow, which phase-modulates the beam. The beam is applied to a temporal filter that filters out the rays in the beam that are straight, while substantially transmitting the fluctuating, turbulence-induced rays. The temporal filter includes a lens and a photorefractive crystal such as BaTiO.sub.3 that is positioned in the converging section of the beam near the focal plane. An imaging system is used to observe the filtered beam. The imaging system may take a photograph, or it may include a real time camera that is connected to a computer. The present invention may be used for many purposes including research and design in aeronautics, hydrodynamics, and combustion.

  15. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
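
    The frequency-sampling filters mentioned above can be sketched without the linear programming step: fix the desired magnitude at N equally spaced frequencies, impose linear phase, and inverse-DFT to obtain the taps. (The paper's LP formulation additionally optimizes the response between the samples; the 15-tap lowpass below is an invented example.)

```python
import cmath
import math

def frequency_sampling_fir(desired):
    """Frequency-sampling FIR design: impose linear phase on the desired
    magnitude samples and inverse-DFT them. len(desired) must be odd and
    desired[k] == desired[N - k] so the taps come out real and symmetric."""
    N = len(desired)
    H = [desired[k] * cmath.exp(-1j * math.pi * k * (N - 1) / N)
         for k in range(N)]
    return [sum(H[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

# 15-tap lowpass: unit response at the four lowest frequency samples
# (and their mirror images), zero elsewhere
desired = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
h = frequency_sampling_fir(desired)
response = lambda w: abs(sum(h[n] * cmath.exp(-1j * w * n)
                             for n in range(len(h))))
```

    By construction the response passes exactly through the desired values at the sampled frequencies, and the symmetric taps give linear phase.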

  16. Quantitative filter forensics for indoor particle sampling.

    PubMed

    Haaland, D; Siegel, J A

    2017-03-01

    Filter forensics is a promising indoor air investigation technique involving the analysis of dust which has collected on filters in central forced-air heating, ventilation, and air conditioning (HVAC) or portable systems to determine the presence of indoor particle-bound contaminants. In this study, we summarize past filter forensics research to explore what it reveals about the sampling technique and the indoor environment. There are 60 investigations in the literature that have used this sampling technique for a variety of biotic and abiotic contaminants. Many studies identified differences between contaminant concentrations in different buildings using this technique. Based on this literature review, we identified the lack of quantification as a gap in the past literature. Accordingly, we propose an approach to quantitatively link contaminants extracted from HVAC filter dust to time-averaged integrated air concentrations. This quantitative filter forensics approach has great potential to measure indoor air concentrations of a wide variety of particle-bound contaminants. Future studies directly comparing quantitative filter forensics to alternative sampling techniques are required to fully assess this approach, but analysis of past research suggests the enormous potential of this approach. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Reducing multi-sensor data to a single time course that reveals experimental effects

    PubMed Central

    2013-01-01

    Background Multi-sensor technologies such as EEG, MEG, and ECoG result in high-dimensional data sets. Given the high temporal resolution of such techniques, scientific questions very often focus on the time-course of an experimental effect. In many studies, researchers focus on a single sensor or the average over a subset of sensors covering a “region of interest” (ROI). However, single-sensor or ROI analyses ignore the fact that the spatial focus of activity is constantly changing, and fail to make full use of the information distributed over the sensor array. Methods We describe a technique that exploits the optimality and simplicity of matched spatial filters in order to reduce experimental effects in multivariate time series data to a single time course. Each (multi-sensor) time sample of each trial is replaced with its projection onto a spatial filter that is matched to an observed experimental effect, estimated from the remaining trials (Effect-Matched Spatial filtering, or EMS filtering). The resulting set of time courses (one per trial) can be used to reveal the temporal evolution of an experimental effect, which distinguishes this approach from techniques that reveal the temporal evolution of an anatomical source or region of interest. Results We illustrate the technique with data from a dual-task experiment and use it to track the temporal evolution of brain activity during the psychological refractory period. We demonstrate its effectiveness in separating the means of two experimental conditions, and in significantly improving the signal-to-noise ratio at the single-trial level. It is fast to compute and results in readily-interpretable time courses and topographies. The technique can be applied to any data-analysis question that can be posed independently at each sensor, and we provide one example, using linear regression, that highlights the versatility of the technique. 
Conclusion The approach described here combines established techniques in a way that strikes a balance between power, simplicity, speed of processing, and interpretability. We have used it to provide a direct view of parallel and serial processes in the human brain that previously could only be measured indirectly. An implementation of the technique in MatLab is freely available via the internet. PMID:24125590
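
    Stripped of the leave-one-trial-out estimation that the paper uses to avoid circularity, the core of EMS filtering is a projection of every trial onto the unit-norm difference between condition means, one spatial filter per time sample. The sensor data below are synthetic:

```python
import numpy as np

def ems_filter(trials_a, trials_b):
    """Project each trial onto the spatial pattern of the A-vs-B effect
    (difference of condition means), one unit-norm filter per time sample.
    Simplified: the published method estimates the filter leave-one-trial-out.
    trials_*: (n_trials, n_sensors, n_times) -> (n_trials, n_times)."""
    diff = trials_a.mean(axis=0) - trials_b.mean(axis=0)
    w = diff / np.linalg.norm(diff, axis=0, keepdims=True)
    project = lambda trials: np.einsum('kst,st->kt', trials, w)
    return project(trials_a), project(trials_b)

# Synthetic sensors: condition A carries a spatial pattern in samples 20-29
rng = np.random.default_rng(2)
pattern = np.array([1.0, -1.0, 0.5, 0.5, 0.0, 0.0, -0.5, 1.0])
pattern /= np.linalg.norm(pattern)
signal = np.zeros((8, 50))
signal[:, 20:30] = pattern[:, None]
a = signal + 0.5 * rng.standard_normal((30, 8, 50))
b = 0.5 * rng.standard_normal((30, 8, 50))
ya, yb = ems_filter(a, b)
effect = ya[:, 20:30].mean() - yb[:, 20:30].mean()
```

    The projected single-trial time courses separate the two conditions in the window where the effect was planted, which is the sense in which the method reduces a multi-sensor effect to a single time course.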

  18. Gaussian pre-filtering for uncertainty minimization in digital image correlation using numerically-designed speckle patterns

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo

    2015-03-01

    This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. 
This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
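The heart of the simulation procedure, generating a regular "designer" speckle pattern and applying Gaussian pre-filters of increasing standard deviation, can be sketched as follows; the pattern geometry and sigma values are illustrative, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regular_speckle(size=256, pitch=16, radius=4):
    """Synthetic 'designer' pattern: circular dots on a regular grid."""
    y, x = np.mgrid[:size, :size]
    yc = (y % pitch) - pitch / 2
    xc = (x % pitch) - pitch / 2
    return np.where(yc**2 + xc**2 <= radius**2, 0.0, 1.0)

def prefiltered_versions(img, sigmas=(0.0, 0.5, 1.0, 1.5, 2.0)):
    """Apply Gaussian pre-filters of increasing standard deviation."""
    return {s: (gaussian_filter(img, s) if s > 0 else img.copy())
            for s in sigmas}
```

In a full study each pre-filtered version would be passed to the DIC correlation engine and the resulting displacement uncertainty compared across sigma values.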

  19. Retrievable Inferior Vena Cava Filters in Trauma Patients: Prevalence and Management of Thrombus Within the Filter.

    PubMed

    Pan, Y; Zhao, J; Mei, J; Shao, M; Zhang, J; Wu, H

    2016-12-01

The incidence of thrombus within retrievable filters placed in trauma patients with confirmed DVT was investigated at the time of retrieval, and the optimal treatment for this clinical scenario was assessed. A technique termed "filter retrieval with manual negative pressure aspiration thrombectomy" for the management of filter thrombus was introduced and evaluated. Retrievable filters referred for retrieval between January 2008 and December 2015 were retrospectively reviewed to determine the incidence of filter thrombus on a pre-retrieval cavogram. The clinical outcomes of the different management strategies for thrombus within filters were recorded and analyzed. During the study, 764 patients with implanted Aegisy filters were referred for filter removal; thrombus within the filter was observed in 236 cases (134 male patients, mean age 50.2 years) on the initial pre-retrieval IVC venogram 12-39 days after insertion (mean 16.9 days). The incidence of infra-filter thrombus was 30.9%, and complete occlusion of the filter-bearing IVC was seen in 2.4% (18) of cases. Retrieval was attempted in all 121 cases with small clots using a regular snare and sheath technique, and was successful in 120. A total of 116 cases with massive thrombus or IVC occlusion by thrombus were treated by CDT and/or the new retrieval technique. Overall, 213 cases (90.3%) of thrombus in the filter were removed successfully without PE. A small thrombus within the filter can be safely removed without additional management. CDT for reduction of the clot burden in filters was effective and safe. Filter retrieval with manual negative pressure aspiration thrombectomy appears reasonable and valuable for the management of massive thrombus within filters in selected patients. Full assessment of the value and safety of this technique requires additional studies. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Space-Wise approach for airborne gravity data modelling

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful for understanding and mapping geological structures in a specific region. Considering this last application, airborne gravity observations are usually adopted because of the required accuracy and resolution. However, due to the relatively high acquisition velocity, atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, software to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered well known a priori, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and must be retrieved from the dataset itself. The presented solution is suited for airborne data analysis, allowing gravity observations to be filtered and gridded quickly and easily.
Some innovative theoretical aspects, focusing in particular on covariance modelling, are also presented. Finally, the quality of the procedure is evaluated by means of a test on real data, retrieving the gravitational signal with a predicted accuracy of about 0.4 mGal.
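A minimal sketch of an along-track Wiener filter in the frequency domain, assuming a simple Gaussian model for the signal PSD and white observation noise (in the paper both are estimated from the dataset itself, which is the harder part):

```python
import numpy as np

def wiener_lowpass(track, dt, corr_len, nsr):
    """Along-track Wiener filter W(f) = S(f) / (S(f) + N(f)).

    track:    1-D array of along-track gravity observations.
    dt:       sampling interval along the track.
    corr_len: correlation length of the assumed Gaussian signal PSD.
    nsr:      white-noise PSD level relative to the signal PSD peak.
    All model choices here are illustrative stand-ins for PSDs that
    would normally be estimated from the data.
    """
    f = np.fft.rfftfreq(track.size, d=dt)
    s = np.exp(-(f * corr_len) ** 2)      # Gaussian signal PSD model, peak 1
    w = s / (s + nsr)                     # Wiener gain per frequency
    return np.fft.irfft(np.fft.rfft(track) * w, n=track.size)
```

In the full procedure the filtered tracks would then be merged into a grid by least squares collocation.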

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

Purpose: Chang’s mathematical formulation is a common method of attenuation correction applied to reconstructed Jaszczak phantom images. Though Chang’s attenuation correction method has been used for 360° angle acquisition, its applicability to 180° angle acquisition remains in question, with one vendor’s camera software producing artifacts. The objective of this work is to ensure that Chang’s attenuation correction technique can be applied to reconstructed Jaszczak phantom images acquired in both 360° and 180° modes. Methods: The Jaszczak phantom, filled with 20 mCi of diluted Tc-99m, was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual-head gamma cameras, centered in both the lateral and axial directions. A total of 3 scans were done in 180° and 2 scans in 360° orbit acquisition modes. Thirty-two million counts were acquired in both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with a pre-reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang’s attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique in which the photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor. The inverse of the attenuation factor was used to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360° acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial.
Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis illustrated improved uniformity with the proposed algorithm compared to the camera software.
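The path-length-based correction described in Methods follows the usual first-order Chang form: for each pixel, average the attenuation term exp(-μL) over the projection angles, where L is the path length from the pixel to the object boundary, and take the reciprocal as the correction factor. A sketch for a uniform circular phantom (geometry and attenuation coefficient are illustrative, and the vendor and modified techniques differ in detail):

```python
import numpy as np

def chang_correction_map(nx, radius_pix, mu_per_pix, angles):
    """First-order Chang attenuation-correction factors for a uniform disk.

    For each pixel inside a disk of radius `radius_pix` (centred in an
    nx x nx grid), average exp(-mu * L) over the projection angles, where
    L is the distance from the pixel to the disk boundary along each
    angle; the correction factor is the reciprocal of that average.
    """
    c = (nx - 1) / 2.0
    y, x = np.mgrid[:nx, :nx]
    x = x - c
    y = y - c
    inside = x**2 + y**2 <= radius_pix**2
    atten = np.zeros((nx, nx))
    for th in angles:
        dx, dy = np.cos(th), np.sin(th)
        # distance from (x, y) to the circle boundary along (dx, dy):
        # positive root of |p + t d|^2 = R^2
        b = x * dx + y * dy
        disc = np.maximum(radius_pix**2 - (x**2 + y**2) + b**2, 0.0)
        L = -b + np.sqrt(disc)
        atten += np.exp(-mu_per_pix * L)
    atten /= len(angles)
    corr = np.zeros((nx, nx))
    corr[inside] = 1.0 / atten[inside]
    return corr
```

Multiplying a reconstructed slice by this map applies the correction; central pixels, which are attenuated most, receive the largest factors.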

  2. Wavelet Transform Based Filter to Remove the Notches from Signal Under Harmonic Polluted Environment

    NASA Astrophysics Data System (ADS)

    Das, Sukanta; Ranjan, Vikash

    2017-12-01

The work aims to eliminate the notches that appear in the synchronizing signal required for converter operation, caused by the switching of semiconductor devices connected to the system in a harmonic-polluted environment. The disturbances in the signal are suppressed by a novel wavelet-based filtering technique. In the proposed technique, the notches in the signal are detected and eliminated by a wavelet-based multi-rate filter using 'Daubechies 4' (db4) as the mother wavelet. The computational complexity of the adopted technique is much lower than that of conventional notch filtering techniques. The proposed technique is developed in MATLAB/Simulink and finally validated on a dSPACE-1103 interface. The recovered signal thus obtained is almost free of notches.
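As a rough illustration of the idea, the sketch below flags notch samples from an undecimated db4 detail band and repairs them by interpolation; this is a simplified stand-in for the paper's multi-rate filter bank, and the threshold and guard-band choices are our own:

```python
import numpy as np

# 8-tap Daubechies-4 (db4) decomposition low-pass filter; the high-pass
# (detail) filter is its quadrature mirror.
DB4_LO = np.array([0.23037781, 0.71484657, 0.63088077, -0.02798377,
                   -0.18703481, 0.03084138, 0.03288301, -0.01059740])
DB4_HI = DB4_LO[::-1] * np.array([1, -1, 1, -1, 1, -1, 1, -1])

def remove_notches(sig, k=8.0, guard=3):
    """Detect notch-like spikes in the db4 detail band and repair them
    by linear interpolation over the flagged samples."""
    d = np.convolve(sig, DB4_HI, mode='same')
    thresh = k * np.median(np.abs(d)) / 0.6745   # robust noise scale
    bad = np.abs(d) > thresh
    for i in np.flatnonzero(bad):                # widen by a guard band
        bad[max(0, i - guard):i + guard + 1] = True
    good = ~bad
    clean = sig.copy()
    clean[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), sig[good])
    return clean
```

For a slowly varying synchronizing signal the detail band is nearly zero except at the switching notches, which is what makes the detection cheap.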

  3. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequencies. In order to obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions; the detection performance is degraded if anomalies affect either model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor; the predicted state covariance matrix is then corrected in real time by this adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified on frequency jump simulations, frequency drift jump simulations and measured atomic clock data by using the chi-square test.
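A scalar sketch of the adaptive idea, with the normalized innovation serving both as the anomaly test statistic (a chi-square test on its square) and as the source of the adaptive factor; all parameter values are illustrative:

```python
import numpy as np

def adaptive_kalman_detect(y, q=1e-6, r=1e-4, c0=2.5):
    """Scalar adaptive Kalman filter over frequency readings y.

    The clock frequency is modelled as a random walk (process noise
    variance q, measurement noise variance r). When the normalised
    innovation exceeds c0, the sample is flagged as anomalous and the
    predicted covariance is inflated by the adaptive factor, so the
    filter re-converges quickly after a jump.
    """
    x, p = y[0], r
    flags = np.zeros(y.size, dtype=bool)
    for k in range(1, y.size):
        p_pred = p + q
        v = y[k] - x                      # innovation (prediction = x)
        s = p_pred + r                    # innovation variance
        t = abs(v) / np.sqrt(s)           # normalised innovation
        if t > c0:                        # chi-square-like test on t**2
            flags[k] = True
            p_pred *= (t / c0) ** 2       # adaptive factor inflates covariance
            s = p_pred + r
        g = p_pred / s                    # Kalman gain
        x = x + g * v
        p = (1.0 - g) * p_pred
    return flags
```

On a simulated frequency jump the flag fires at the jump epoch and the inflated covariance lets the estimate snap to the new level within a few samples.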

  4. Multichannel Networked Phasemeter Readout and Analysis

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

Netmeter software reads a data stream from up to 250 networked phasemeters, synchronizes the data, saves the reduced data to disk (after applying a low-pass filter), and provides a Web server interface for remote control. Unlike older phasemeter software that requires a special real-time operating system, this program can run on any general-purpose computer. It needs only about five percent of the CPU (central processing unit) to process 20 channels, and adds built-in data logging and network-based GUIs (graphical user interfaces) implemented in Scalable Vector Graphics (SVG). Netmeter runs on Linux and Windows. It displays the instantaneous displacements measured by several phasemeters at a user-selectable rate, up to 1 kHz. The program monitors the measure and reference channel frequencies. For ease of use, status levels in Netmeter are color coded: green for normal operation, yellow for network errors, and red for optical misalignment problems. Netmeter includes user-selectable filters up to 4 k samples, and user-selectable averaging windows (applied after filtering). Before filtering, the program saves raw data to disk using a burst-write technique.
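The reduce-then-log path (low-pass filter, then averaging) can be sketched with scipy; the filter order and cutoff here are our own illustrative choices, not Netmeter's:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def reduce_stream(samples, fs, cutoff_hz, avg_window):
    """Low-pass filter a phasemeter channel, then block-average it.

    fs: sample rate in Hz; cutoff_hz: low-pass cutoff; avg_window:
    number of filtered samples averaged into each logged value.
    """
    b, a = butter(4, cutoff_hz / (fs / 2))   # 4th-order Butterworth low-pass
    smooth = filtfilt(b, a, samples)         # zero-phase filtering
    n = (smooth.size // avg_window) * avg_window
    return smooth[:n].reshape(-1, avg_window).mean(axis=1)
```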

  5. A Comparison of Retrievability: Celect versus Option Filter.

    PubMed

    Ryu, Robert K; Desai, Kush; Karp, Jennifer; Gupta, Ramona; Evans, Alan Emerson; Rajeswaran, Shankar; Salem, Riad; Lewandowski, Robert J

    2015-06-01

    To compare the retrievability of 2 potentially retrievable inferior vena cava filter devices. A retrospective, institutional review board-approved study of Celect (Cook, Inc, Bloomington, Indiana) and Option (Rex Medical, Conshohocken, Pennsylvania) filters was conducted over a 33-month period at a single institution. Fluoroscopy time, significant filter tilt, use of adjunctive retrieval technique, and strut perforation in the inferior vena cava were recorded on retrieval. Fisher exact test and Mann-Whitney-Wilcoxon test were used for comparison. There were 99 Celect and 86 Option filters deployed. After an average of 2.09 months (range, 0.3-7.6 mo) and 1.94 months (range, 0.47-9.13 mo), respectively, 59% (n = 58) of patients with Celect filters and 74.7% (n = 65) of patients with Option filters presented for filter retrieval. Retrieval failure rates were 3.4% for Celect filters versus 7.7% for Option filters (P = .45). Median fluoroscopy retrieval times were 4.25 minutes for Celect filters versus 6 minutes for Option filters (P = .006). Adjunctive retrieval techniques were used in 5.4% of Celect filter retrievals versus 18.3% of Option filter retrievals (P = .045). The incidence of significant tilting was 8.9% for Celect filters versus 16.7% for Option filters (P = .27). The incidence of strut perforation was 43% for Celect filters versus 0% for Option filters (P < .0001). Retrieval rates for the Celect and Option filters were not significantly different. However, retrieval of the Option filter required a significantly increased amount of fluoroscopy time compared with the Celect filter, and there was a significantly greater usage of adjunctive retrieval techniques for the Option filter. The Celect filter had a significantly higher rate of strut perforation. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.

  6. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    PubMed Central

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency. PMID:29497372

  8. GNU Radio Sandia Utilities v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Jacob; Knee, Peter

This software adds a data handling module to the GNU Radio (GR) software defined radio (SDR) framework, as well as some general-purpose function blocks (filters, metadata control, etc.). The software is useful for processing bursty RF transmissions with GR, and serves as a base for applying SDR signal processing techniques to a whole burst of data at a time, as opposed to the streaming model around which GR is primarily designed.

  9. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, parametric spectrum estimation techniques have higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to their large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and to an actual motor; the results indicate that it retains the frequency accuracy of parametric spectrum estimation while being efficient enough for online detection.
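The second stage, least squares estimation of amplitudes and phases once the component frequencies are known, can be sketched as follows (in the paper the frequencies come from the min-norm step; here they are simply given):

```python
import numpy as np

def ls_amp_phase(sig, fs, freqs):
    """Least-squares amplitude/phase estimation for known frequencies.

    Builds a cosine/sine design matrix and solves it with the SVD-backed
    lstsq, matching the paper's second stage. Returns amplitudes and
    phases for the model x(t) = sum_i A_i cos(2*pi*f_i*t + phi_i).
    """
    t = np.arange(sig.size) / fs
    a = np.column_stack([f(2 * np.pi * fr * t)
                         for fr in freqs for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(a, sig, rcond=None)
    c, s = coef[0::2], coef[1::2]          # cosine and sine coefficients
    return np.hypot(c, s), np.arctan2(-s, c)
```

For BRB detection the interesting output is the amplitude of the fault sideband relative to the fundamental.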

  10. Retrieval of Tip-embedded Inferior Vena Cava Filters by Using the Endobronchial Forceps Technique: Experience at a Single Institution.

    PubMed

    Stavropoulos, S William; Ge, Benjamin H; Mondschein, Jeffrey I; Shlansky-Goldberg, Richard D; Sudheendra, Deepak; Trerotola, Scott O

    2015-06-01

    To evaluate the use of endobronchial forceps to retrieve tip-embedded inferior vena cava (IVC) filters. This institutional review board-approved, HIPAA-compliant retrospective study included 114 patients who presented with tip-embedded IVC filters for removal from January 2005 to April 2014. The included patients consisted of 77 women and 37 men with a mean age of 43 years (range, 18-79 years). Filters were identified as tip embedded by using rotational venography. Rigid bronchoscopy forceps were used to dissect the tip or hook of the filter from the wall of the IVC. The filter was then removed through the sheath by using the endobronchial forceps. Statistical analysis entailed calculating percentages, ranges, and means. The endobronchial forceps technique was used to successfully retrieve 109 of 114 (96%) tip-embedded IVC filters on an intention-to-treat basis. Five failures occurred in four patients in whom the technique was attempted but failed and one patient in whom retrieval was not attempted. Filters were in place for a mean of 465 days (range, 31-2976 days). The filters in this study included 10 Recovery, 33 G2, eight G2X, 11 Eclipse, one OptEase, six Option, 13 Günther Tulip, one ALN, and 31 Celect filters. Three minor complications and one major complication occurred, with no permanent sequelae. The endobronchial forceps technique can be safely used to remove tip-embedded IVC filters. © RSNA, 2014.

  11. High-speed railway signal trackside equipment patrol inspection system

    NASA Astrophysics Data System (ADS)

    Wu, Nan

    2018-03-01

The high-speed railway signal trackside equipment patrol inspection system comprehensively applies TDI (time delay integration), high-speed and highly responsive CMOS architecture, low-illumination photosensitive techniques, image data compression, machine vision, and so on. Installed on a high-speed railway inspection train, it achieves the collection, management and analysis of images of signal trackside equipment appearance while the train is running. The system automatically filters the signal trackside equipment images out of a large volume of background imagery, and identifies equipment changes by comparison with the original image data. Combining ledger data and train location information, the system accurately locates the trackside equipment, effectively guiding maintenance.

  12. Manufacture and calibration of optical supersmooth roughness artifacts for intercomparisons

    NASA Astrophysics Data System (ADS)

    Ringel, Gabriele A.; Kratz, Frank; Schmitt, Dirk-Roger; Mangelsdorf, Juergen; Creuzet, Francois; Garratt, John D.

    1995-09-01

Intercomparison roughness measurements have been carried out on supersmooth artifacts fabricated from BK7, fused silica, and Zerodur. The surface parameters were determined using the optical heterodyne profiler Z5500 (Zygo), a special prototype of the mechanical profiler Nanostep (Rank Taylor Hobson), and an atomic force microscope (Park Scientific Instruments) with an improved acquisition technique. The intercomparison was performed after the range of collected spatial wavelengths for each instrument was adjusted using digital filtering techniques. It is demonstrated for different roughness ranges that the applied superpolishing techniques yield supersmooth artifacts which can be used for further intercomparisons. More than 100 samples were investigated. Criteria were developed to select artifacts from the sample stock.

  13. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results than two existing techniques: nearest-neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with an optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
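A rough sketch of the estimation idea: smooth the image autocorrelation with a Savitzky-Golay filter, then extrapolate it to lag zero with a cubic spline to separate the noise-free signal variance from the noise variance. The lag range and filter settings below are our illustrative choices, not the paper's:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter

def estimate_snr(image):
    """SNR estimate from the lag-0 gap of the row autocorrelation.

    White noise adds its variance only at lag 0, so the difference
    between the measured r(0) and the value extrapolated from lags
    1..5 estimates the noise variance.
    """
    rows = image - image.mean(axis=1, keepdims=True)
    n = rows.shape[1]
    # biased autocorrelation averaged over rows, lags 0..5
    r = np.array([(rows[:, :n - k] * rows[:, k:]).sum() / rows.size
                  for k in range(6)])
    smooth = savgol_filter(r[1:], window_length=5, polyorder=2)
    r0_signal = CubicSpline(np.arange(1, 6), smooth)(0.0)
    noise_var = max(r[0] - r0_signal, 1e-12)
    return float(r0_signal / noise_var)        # SNR as a variance ratio
```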

  14. Vapor purification with self-cleaning filter

    DOEpatents

    Josephson, Gary B.; Heath, William O.; Aardahl, Christopher L.

    2003-12-09

    A vapor filtration device including a first electrode, a second electrode, and a filter between the first and second electrodes is disclosed. The filter is formed of dielectric material and the device is operated by applying a first electric potential between the electrodes to polarize the dielectric material such that upon passing a vapor stream through the filter, particles from the vapor stream are deposited onto the filter. After depositing the particles a second higher voltage is applied between the electrodes to form a nonthermal plasma around the filter to vaporize the collected particles thereby cleaning the filter. The filter can be a packed bed or serpentine filter mat, and an optional upstream corona wire can be utilized to charge airborne particles prior to their deposition on the filter.

  15. a Computer Simulation Study of Coherent Optical Fibre Communication Systems

    NASA Astrophysics Data System (ADS)

    Urey, Zafer

Available from UMI in association with The British Library. A computer simulation study of coherent optical fibre communication systems is presented in this thesis. The Wiener process is proposed as the simulation model of laser phase noise and verified to be a good one. This model is included in the simulation experiments along with the other noise sources (i.e. shot noise, thermal noise and laser intensity noise) and the models that represent the various waveform-processing blocks in a system, such as filtering and demodulation. A novel mixed semianalytical simulation procedure is designed and successfully applied to the estimation of bit error rates as low as 10^{-10}. In this technique the noise processes and the ISI effects at the decision time are characterized from simulation experiments, but the probability of error is calculated by numerically integrating the noise statistics over the error region using analytical expressions. Simulation of only 4096 bits is found to give estimates of BERs corresponding to received optical power within 1 dB of theoretical calculations using this approach. This number is very small compared with pure simulation techniques; hence, the technique proves very efficient in terms of computation time and memory requirements. Command-driven simulation software which runs on a DEC VAX computer under the UNIX operating system was written by the author, and a series of simulation experiments was carried out using it. In particular, the effects of IF filtering on the performance of PSK heterodyne receivers with synchronous demodulation are examined when both phase noise and shot noise are included in the simulations. The BER curves of this receiver are estimated for the first time for various cases of IF filtering using the mixed semianalytical approach.
At a power penalty of 1 dB, the IF linewidth requirement of this receiver with the matched filter is estimated to be less than 650 kHz at a modulation rate of 1 Gbps and a BER of 10^{-9}. The IF linewidth requirements for the other IF filtering cases are also estimated and are not found to differ much from the matched-filter case. It is therefore concluded that IF filtering has little effect on the reduction of phase noise in PSK heterodyne systems with synchronous demodulation.
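The mixed semianalytical step, taking noiseless decision values (with ISI) from simulation and integrating the Gaussian noise statistics analytically, can be sketched as:

```python
import numpy as np
from math import erfc, sqrt

def semianalytic_ber(decision_samples, bits, sigma):
    """Mixed semianalytic BER estimate.

    decision_samples: noiseless decision values from simulation (with
    ISI and filtering effects already included).
    bits: transmitted bits (0/1), deciding the correct sign.
    sigma: Gaussian noise standard deviation at the decision point.
    For each sample the exact error probability Q(v/sigma) is computed
    analytically, then averaged over the simulated samples.
    """
    signed = np.where(bits > 0, decision_samples, -decision_samples)
    return np.mean([0.5 * erfc(v / (sqrt(2) * sigma)) for v in signed])
```

This is why a few thousand simulated bits suffice at BERs of 10^{-9} and below: the deep tail is handled analytically rather than by counting rare errors.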

  16. Optical filter highlighting spectral features part II: quantitative measurements of cosmetic foundation and assessment of their spatial distributions under realistic facial conditions.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

We previously proposed a filter that could detect cosmetic foundations with high discrimination accuracy [Opt. Express 19, 6020 (2011)]. This study extends the filter's functionality to quantification of the amount of foundation, and applies the filter to the assessment of spatial distributions of foundation under realistic facial conditions. Human faces to which quantitatively controlled amounts of cosmetic foundation had been applied were measured using the filter. A calibration curve between the pixel values of the image and the amount of foundation was created. The optical filter was then applied to visualize spatial foundation distributions under realistic facial conditions, clearly indicating areas on the face where foundation remained even after cleansing. The results confirm that the proposed filter can visualize and nondestructively inspect foundation distributions.

  17. Measurement of Ambient Air Motion of D. I. Gasoline Spray by LIF-PIV

    NASA Astrophysics Data System (ADS)

    Yamakawa, Masahisa; Isshiki, Seiji; Yoshizaki, Takuo; Nishida, Keiya

Ambient air velocity distributions in and around a D. I. gasoline spray were measured using a combination of LIF and PIV techniques. A rhodamine and water solution was injected into ambient air to disperse the fine fluorescent liquid particles used as tracers. A fuel spray was injected into the fluorescent tracer cloud and was illuminated by an Nd:YAG laser light sheet (532 nm). The scattered light from the spray droplets and tracers was cut off by a high-pass filter (>560 nm). As the fluorescence (>600 nm) was transmitted through the high-pass filter, the tracer images were captured using a CCD camera and the ambient air velocity distribution could be obtained by PIV based on the images. This technique was applied to a D. I. gasoline spray. The ambient air flowed up around the spray and entered into the tail of the spray. Furthermore, the relative velocity between the spray and the ambient air was investigated.

  18. Application of the EM algorithm to radiographic images.

    PubMed

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior to unsharp mask filtering and global contrast enhancement for radiographic images containing objects smaller than 4 mm.
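The EM iteration for Poisson-degraded imagery is the Richardson-Lucy update; a 1-D sketch of the restoration-only use (no reconstruction), with an assumed known PSF:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30):
    """EM (Richardson-Lucy) restoration for Poisson-noise imagery.

    observed: degraded signal; psf: known, normalised point spread
    function. Each iteration multiplies the current estimate by the
    back-projected ratio of observed to re-blurred data, which is the
    EM update for the Poisson likelihood.
    """
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        reblurred = np.convolve(est, psf, mode='same')
        ratio = observed / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf_mirror, mode='same')
    return est
```

The multiplicative form keeps the estimate non-negative, one reason the algorithm suits count-limited radiographic data.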

  19. Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task

    NASA Technical Reports Server (NTRS)

    Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.

    1978-01-01

    Color enhancement techniques were applied to LACIE LANDSAT segments to determine whether such enhancement can assist analysts in crop identification. The procedure increased the color range by removing correlation between components: first, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower-order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.
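    The transform / variance-equalization / inverse-transform sequence described in this abstract is what is now commonly called a decorrelation stretch. A compact sketch, assuming an (N, bands) array of pixel values (the function and parameter names are illustrative, not from the report):

```python
import numpy as np

def decorrelation_stretch(pixels, target_sigma=50.0):
    """Decorrelation stretch: principal component transform, equalize
    component variances, then invert back to the original band space."""
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pcs = centered @ eigvecs                      # forward PC transform
    pcs *= target_sigma / np.sqrt(np.maximum(eigvals, 1e-12))  # equalize
    return pcs @ eigvecs.T + mean                 # inverse transform
```

After the stretch every band has the same variance and zero inter-band correlation, which is exactly the "increased color range" effect the abstract describes.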

  20. Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

    NASA Astrophysics Data System (ADS)

    Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

    In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are representations of the discontinuities of image intensity functions, and a good edge enhancement technique is essential for processing these discontinuities. The proposed work uses a new idea for edge enhancement based on hybridized smoothing filters: a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper analyzes these swarm intelligence techniques through the combination of hybrid filters generated by the algorithms for image edge enhancement.

  1. Denali, Tulip, and Option Inferior Vena Cava Filter Retrieval: A Single Center Experience.

    PubMed

    Ramaswamy, Raja S; Jun, Emily; van Beek, Darren; Mani, Naganathan; Salter, Amber; Kim, Seung K; Akinwande, Olaguoke

    2018-04-01

    To compare the technical success of filter retrieval in Denali, Tulip, and Option inferior vena cava filters. A retrospective analysis of Denali, Gunther Tulip, and Option IVC filters was conducted. Retrieval failure rates, fluoroscopy time, sedation time, use of advanced retrieval techniques, and filter-related complications that led to retrieval failure were recorded. There were 107 Denali, 43 Option, and 39 Tulip filters deployed and removed, with average dwell times of 93.5, 86.0, and 131 days, respectively. Retrieval failure rates were 0.9% for Denali, 11.6% for Option, and 5.1% for Tulip filters (Denali vs. Option p = 0.018; Denali vs. Tulip p = 0.159; Tulip vs. Option p = 0.045). Median fluoroscopy time for filter retrieval was 3.2 min for the Denali filter, 6.75 min for the Option filter, and 4.95 min for the Tulip filter (Denali vs. Option p < 0.01; Denali vs. Tulip p < 0.01; Tulip vs. Option p = 0.67). Advanced retrieval techniques were used in 0.9% of Denali filters, 21.1% of Option filters, and 10.8% of Tulip filters (Denali vs. Option p < 0.01; Denali vs. Tulip p < 0.01; Tulip vs. Option p < 0.01). Filter retrieval failure rates were significantly higher for the Option filter than for the Denali and Tulip filters. Retrieval of the Denali filter required significantly less fluoroscopy time and less frequent use of advanced retrieval techniques than the Option and Tulip filters. The findings of this study indicate easier retrieval of the Denali and Tulip IVC filters compared with the Option filter.

  2. An improved discriminative filter bank selection approach for motor imagery EEG signal classification using mutual information.

    PubMed

    Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko

    2017-12-28

    Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP depends to a great extent on the selection of the frequency bands. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels for effectively selecting the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band is introduced that covers the wide frequency band (7-30 Hz), and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands, and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using a support vector machine. The proposed method was evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I, and BCI Competition IV dataset IIb, and it outperformed all competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method improves motor imagery EEG signal classification.
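    The selection step — score each sub-band's features by mutual information with the class labels and keep the top-ranked filter banks — can be sketched with a simple histogram MI estimate. This is a toy stand-in under the assumption of one scalar feature per band; the study's actual pipeline extracts CSP feature vectors and then fuses LDA scores into an SVM:

```python
import numpy as np

def mutual_information(feature, labels, n_bins=10):
    """Histogram estimate of MI (bits) between a feature and class labels."""
    edges = np.histogram_bin_edges(feature, bins=n_bins)[1:-1]
    f_bins = np.digitize(feature, edges)
    mi = 0.0
    for c in np.unique(labels):
        for b in np.unique(f_bins):
            p_joint = np.mean((labels == c) & (f_bins == b))
            if p_joint > 0:
                p_c = np.mean(labels == c)
                p_b = np.mean(f_bins == b)
                mi += p_joint * np.log2(p_joint / (p_c * p_b))
    return mi

def select_top_bands(band_features, labels, k=4):
    """Rank sub-band features by MI with the labels; keep the top k banks."""
    scores = [mutual_information(f, labels) for f in band_features]
    return np.argsort(scores)[::-1][:k]
```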

  3. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, owing to the difficulty of combining them; no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques, namely kernel truncation, best N-term approximation, as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and can therefore draw upon their respective strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing run-time efficiency.
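    For reference, this is the brute-force filter whose kernel-size dependence the acceleration techniques above remove: every output sample is a normalized sum weighted by both spatial distance and intensity difference. A 1-D sketch (the paper itself decomposes the 2-D case into linear-time 3-D box filters, which is not shown here):

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force 1-D bilateral filter: O(radius) work per sample,
    i.e. cost grows with the kernel size."""
    out = np.empty_like(signal, dtype=float)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))   # spatial kernel
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        rng_w = np.exp(-(signal[idx] - signal[i])**2 / (2 * sigma_r**2))
        w = spatial * rng_w                            # joint weight
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

The range kernel is what makes the filter edge-preserving: samples across a large intensity jump receive negligible weight.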

  4. Tachykinin-induced nasal fluid secretion and plasma exudation in the rat: effects of peptidase inhibition.

    PubMed

    Lindell, E; Svensjö, M E; Malm, L; Petersson, G

    1995-05-01

    Substance P (SP) evokes fluid secretion and plasma extravasation when applied to the nasal mucosa of rats. SP and another tachykinin, neurokinin A (NKA), are degraded in vitro by neutral endopeptidase (NEP) and angiotensin-1-converting enzyme (ACE). In this study, NKA or SP was applied locally to the nasal mucosa of rats. Subsequent fluid secretion was measured by a filter paper technique. Plasma exudation was derived as the recovery of intravenously (i.v.) administered 125I-albumin from the fluid-containing filter papers. In order to inhibit enzymatic degradation of the tachykinins by NEP and ACE, the rats were treated with i.v. administered phosphoramidon or captopril, respectively, or their combination. SP evoked fluid secretion that was augmented by phosphoramidon and further enhanced by adding captopril. NKA evoked nasal fluid secretion less effectively than SP, and the effect was unaffected by peptidase inhibition. SP, but not NKA, evoked increased plasma exudation, but only after pre-treatment with phosphoramidon.

  5. Metallic artifact mitigation and organ-constrained tissue assignment for Monte Carlo calculations of permanent implant lung brachytherapy.

    PubMed

    Sutherland, J G H; Miksys, N; Furutani, K M; Thomson, R M

    2014-01-01

    To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour-constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for (125)I, (103)Pd, and (131)Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Metallic artifact mitigation techniques vary in their ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring the additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for (103)Pd seeds and smaller but still considerable differences for (131)Cs seeds. Despite yielding different CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.
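    One of the mitigation steps described above, replacing voxels that deviate strongly from their local median, can be sketched as follows. This is a simplified stand-in for the paper's 3D median filter: the neighborhood size and threshold are illustrative, and the second noise-reduction pass is omitted:

```python
import numpy as np

def median_artifact_filter(volume, threshold):
    """Replace each interior voxel that deviates from its 3x3x3
    neighborhood median by more than `threshold` with that median."""
    out = volume.copy()
    for i in range(1, volume.shape[0] - 1):
        for j in range(1, volume.shape[1] - 1):
            for k in range(1, volume.shape[2] - 1):
                block = volume[i-1:i+2, j-1:j+2, k-1:k+2]
                m = np.median(block)
                if abs(volume[i, j, k] - m) > threshold:
                    out[i, j, k] = m
    return out
```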

  6. Disturbance Accommodating Adaptive Control with Application to Wind Turbines

    NASA Technical Reports Server (NTRS)

    Frost, Susan

    2012-01-01

    Adaptive control techniques are well suited to applications that have unknown modeling parameters and poorly known operating conditions. Many physical systems experience external disturbances that are persistent or continually recurring. Flexible structures and systems with compliance between components often form a class of systems that fail to meet standard requirements for adaptive control. For these classes of systems, a residual mode filter can restore the ability of the adaptive controller to perform in a stable manner. New theory will be presented that enables adaptive control with accommodation of persistent disturbances using residual mode filters. After a short introduction to some of the control challenges of large utility-scale wind turbines, this theory will be applied to a high-fidelity simulation of a wind turbine.

  7. Angular filter refractometry analysis using simulated annealing.

    PubMed

    Angland, P; Haberberger, D; Ivancic, S T; Froula, D H

    2017-10-01

    Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on the minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
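    The iterative convergence described here follows the standard Metropolis annealing recipe: perturb the parameters, always accept improvements, and occasionally accept worse states at a temperature that decays over time. A generic sketch minimizing a χ² objective (the eight-parameter density profile and the synthetic-image comparison are specific to the paper and not reproduced; all tuning constants below are illustrative):

```python
import numpy as np

def anneal(chi2, x0, step=0.1, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Minimize chi2 by simulated annealing with Metropolis acceptance."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, dtype=float), chi2(x0)
    best_x, best_f = x.copy(), fx
    t = t0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.shape)   # random perturbation
        fc = chi2(cand)
        # accept improvements always, worse states with Boltzmann probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling                                      # cool the schedule
    return best_x, best_f
```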

  8. Seven-hour fluorescence in situ hybridization technique for enumeration of Enterobacteriaceae in food and environmental water sample.

    PubMed

    Ootsubo, M; Shimizu, T; Tanaka, R; Sawabe, T; Tajima, K; Ezura, Y

    2003-01-01

    A fluorescent in situ hybridization (FISH) technique using an Enterobacteriaceae-specific probe (probe D) to target 16S rRNA was improved in order to enumerate, within a single working day, Enterobacteriaceae present in food and environmental water samples. In order to minimize the time required for the FISH procedure, each step of FISH with probe D was re-evaluated using cultured Escherichia coli. Five minutes of ethanol treatment for cell fixation and hybridization were sufficient to visualize cultured E. coli, and FISH could be performed within 1 h. Because of the difficulties in detecting low levels of bacterial cells by FISH without cultivation, a FISH technique for detecting microcolonies on membrane filters was investigated to improve the bacterial detection limit. FISH with probe D following 6 h of cultivation to grow microcolonies on a 13 mm diameter membrane filter was performed, and whole Enterobacteriaceae microcolonies on the filter were then detected and enumerated by manual epifluorescence microscopic scanning at a magnification of ×100 in ca. 5 min. The total time for FISH with probe D following cultivation (FISHFC) was reduced to within 7 h. FISHFC can be applied to enumerate cultivable Enterobacteriaceae in food (above 100 cells g⁻¹) and environmental water samples (above 1 cell ml⁻¹). Cultivable Enterobacteriaceae in food and water samples were enumerated accurately within 7 h using the FISHFC method. A FISHFC method capable of evaluating Enterobacteriaceae contamination in food and environmental water within a single working day was developed.

  9. Symmetric Phase Only Filtering for Improved DPIV Data Processing

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2006-01-01

    The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase only filtering is a well known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction is not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT-based correlation processing with no significant computational performance penalty. An "automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal-to-noise results than traditional PIV processing. The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
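    The core operation, normalizing the cross-power spectrum before the inverse FFT, can be sketched as follows. This is a hedged reading of the SPOF idea, dividing the cross-spectrum by the square root of its magnitude, halfway between standard correlation (no normalization) and classic phase-only filtering (full magnitude normalization); the paper's exact filter and its "automatic" selection logic are not reproduced:

```python
import numpy as np

def spof_correlate(a, b, eps=1e-12):
    """FFT cross-correlation of two subregions with a symmetric
    phase-only style normalization of the cross-power spectrum."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    filtered = cross / np.sqrt(np.abs(cross) + eps)   # SPOF-style weighting
    return np.fft.fftshift(np.fft.ifft2(filtered).real)

def displacement(a, b):
    """Most probable particle displacement = offset of the correlation peak."""
    corr = spof_correlate(a, b)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2
```

Because the displacement information lives entirely in the phase of the cross-spectrum, down-weighting the magnitude sharpens the correlation peak without moving it.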

  10. Application of a modified complementary filtering technique for increased aircraft control system frequency bandwidth in high vibration environment

    NASA Technical Reports Server (NTRS)

    Garren, J. F., Jr.; Niessen, F. R.; Abbott, T. S.; Yenni, K. R.

    1977-01-01

    A modified complementary filtering technique for estimating aircraft roll rate was developed and flown in a research helicopter to determine whether higher gains could be achieved. Use of this technique did, in fact, permit a substantial increase in system frequency bandwidth because, in comparison with first-order filtering, it reduced both noise amplification and control limit-cycle tendencies.
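    A complementary filter blends two sensors through matched low- and high-pass paths so that their noise characteristics complement each other. The classic attitude-estimation form is sketched below; the flight system above used a modified version for roll rate, so the structure and gains here are generic illustrations, not the paper's implementation:

```python
def complementary_filter(rates, angle_meas, dt, tau=1.0):
    """Blend an integrated rate gyro (high-pass path) with a noisy
    angle measurement (low-pass path); tau sets the crossover."""
    alpha = tau / (tau + dt)
    est = angle_meas[0]
    estimates = []
    for rate, meas in zip(rates, angle_meas):
        # integrate the gyro, then pull gently toward the noisy measurement
        est = alpha * (est + rate * dt) + (1 - alpha) * meas
        estimates.append(est)
    return estimates
```

The gyro path supplies high-frequency content without amplifying measurement noise, which is precisely why such filters permit higher loop gains than first-order low-pass filtering.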

  11. Angular Multigrid Preconditioner for Krylov-Based Solution Techniques Applied to the Sn Equations with Highly Forward-Peaked Scattering

    NASA Astrophysics Data System (ADS)

    Turcksin, Bruno; Ragusa, Jean C.; Morel, Jim E.

    2012-01-01

    It is well known that the diffusion synthetic acceleration (DSA) methods for the Sn equations become ineffective in the Fokker-Planck forward-peaked scattering limit. In response to this deficiency, Morel and Manteuffel (1991) developed an angular multigrid method for the 1-D Sn equations. This method is very effective, costing roughly twice as much as DSA per source iteration, and yielding a maximum spectral radius of approximately 0.6 in the Fokker-Planck limit. Pautz, Adams, and Morel (PAM) (1999) later generalized the angular multigrid to 2-D, but it was found that the method was unstable with sufficiently forward-peaked mappings between the angular grids. The method was stabilized via a filtering technique based on diffusion operators, but this filtering also degraded the effectiveness of the overall scheme. The spectral radius was not bounded away from unity in the Fokker-Planck limit, although the method remained more effective than DSA. The purpose of this article is to recast the multidimensional PAM angular multigrid method without the filtering as an Sn preconditioner and use it in conjunction with the Generalized Minimal RESidual (GMRES) Krylov method. The approach ensures stability and our computational results demonstrate that it is also significantly more efficient than an analogous DSA-preconditioned Krylov method.

  12. Development of a hybrid image processing algorithm for automatic evaluation of intramuscular fat content in beef M. longissimus dorsi.

    PubMed

    Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul

    2008-12-01

    An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from structural features of intermuscular fat surrounding the muscle, a three-step image processing algorithm was developed: bilateral filtering for noise removal, kernel fuzzy c-means clustering (KFCM) for segmentation, and vector confidence connected and flood fill for IMF extraction. Bilateral filtering was first applied to reduce the noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image using the techniques of vector confidence connected and flood filling. The performance of the algorithm was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P<0.05). Five IMF features are very significantly correlated with the fat content (P<0.001), including the count densities of middle (CDMiddle) and large (CDLarge) fat particles, the area densities of middle and large fat particles, and the total fat area per unit LD area. The highest coefficient is 0.852 for CDLarge.

  13. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
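    The underdetermined tuner selection aside, the estimation machinery described above is a standard linear Kalman measurement update. A minimal sketch (the transformation from tuning parameters to the larger health-parameter vector is not shown, and all matrices here are illustrative):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update: x is the tuning-parameter
    estimate, P its covariance, z the sensed outputs, H the sensitivity
    matrix, and R the measurement-noise covariance."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # state update
    P_new = (np.eye(len(x)) - K @ H) @ P     # covariance update
    return x_new, P_new
```

In the paper's setting the dimension of x is deliberately chosen no larger than the number of sensors, which is what keeps the update well posed.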

  14. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstructions (IR) improve LDCT diagnostic quality, they significantly degrade CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice given the wide variety of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD's performance. This not only enhances robustness but also applicability in clinical workflows. Our solution consists of automatically applying a suitable pre-processing filter to the given image based on its characteristics. This requires the preparation of ground truth (GT) for choosing an appropriate filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe, and Asia. Though we demonstrate our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  15. Community Detection for Correlation Matrices

    NASA Astrophysics Data System (ADS)

    MacMahon, Mel; Garlaschelli, Diego

    2015-04-01

    A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.
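    The random-matrix-theory null model underlying this approach can be illustrated with the simplest version of the idea: eigenvalues of an empirical correlation matrix that fall below the Marchenko-Pastur upper edge are statistically indistinguishable from noise. A sketch of that filtering step (the paper goes further, building full community-detection null models from the same theory; this is only the ingredient):

```python
import numpy as np

def mp_filter(returns):
    """Marchenko-Pastur filtering of a correlation matrix: keep only
    eigenmodes above the pure-noise upper edge."""
    T, N = returns.shape                            # T observations, N series
    corr = np.corrcoef(returns, rowvar=False)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2           # MP edge for i.i.d. noise
    eigvals, eigvecs = np.linalg.eigh(corr)
    keep = eigvals > lam_max                        # "signal" modes
    filtered = (eigvecs[:, keep] * eigvals[keep]) @ eigvecs[:, keep].T
    return filtered, int(keep.sum())
```

For financial returns the largest surviving mode is typically the system-wide "market" mode, which the paper's redefined null models also remove before detecting communities.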

  16. Aircraft applications of fault detection and isolation techniques

    NASA Astrophysics Data System (ADS)

    Marcos Esteban, Andres

    In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H-infinity LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the same jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements of the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H-infinity LTI techniques to the integrated design for the longitudinal motion of the Boeing 747-100/200 model.

  17. Solution to the spectral filter problem of residual terrain modelling (RTM)

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon

    2018-06-01

    In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This in-equivalence, denoted as the spectral filter problem of the RTM technique, gives rise to two imperfections (errors). The first imperfection is unwanted low-frequency (LF) gravity signals, and the second imperfection is missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem. Our solutions are based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral domain gravity forward modelling that delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ~44 mGal (0.5 mGal RMS) for the HF correction and ~33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, consideration of the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, which translates into ~26% improvement. Over a second test area (Canada), our corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 5.65 to 5.30 mGal (~6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g., with valley fillings) were found to stand out more clearly in the RTM-reduced gravity measurements when the HF and LF corrections are taken into account. In summary, the new RTM filter corrections can be easily computed and applied to improve the spectral filter characteristics of the popular RTM approach. Benefits are expected, e.g., in the context of the development of future ultra-high-resolution global gravity models, smoothing of observed gravity data in mountainous terrain, and geophysical interpretations of RTM-reduced gravity measurements.

  18. Evaluation of multichannel Wiener filters applied to fine resolution passive microwave images of first-year sea ice

    NASA Technical Reports Server (NTRS)

    Full, William E.; Eppler, Duane T.

    1993-01-01

    The effectiveness of multichannel Wiener filters in improving images obtained with passive microwave systems was investigated by applying Wiener filters to passive microwave images of first-year sea ice. Four major parameters which define the filter were varied: the lag or pixel offset between the original and the desired scenes, the filter length, the number of lines in the filter, and the weight applied to the empirical correlation functions. The effect of each variable on image quality was assessed by visually comparing the results. It was found that the application of multichannel Wiener theory to passive microwave images of first-year sea ice resulted in visually sharper images with enhanced textural features and less high-frequency noise. However, Wiener filters induced a slight blocky grain in the image and could produce a type of ringing along scan lines traversing sharp intensity contrasts.
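    The underlying single-channel operation, attenuating each spatial frequency according to its estimated signal-to-noise ratio, can be sketched in the frequency domain. This scalar version is a stand-in only; the paper's multichannel filters additionally couple several input channels and the scan-line geometry:

```python
import numpy as np

def wiener_denoise(image, noise_power):
    """Frequency-domain Wiener filter: per-frequency gain S/(S+N),
    with the signal power S estimated from the image spectrum."""
    F = np.fft.fft2(image)
    power = np.abs(F) ** 2 / image.size              # per-bin power estimate
    signal_power = np.maximum(power - noise_power, 0.0)
    gain = signal_power / (signal_power + noise_power + 1e-12)
    return np.fft.ifft2(gain * F).real
```

Frequencies dominated by noise get a gain near zero while strong signal frequencies pass almost unchanged, which is the sharpening-with-less-noise behavior reported above.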

  19. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls between these two time scales. In recent years, more precise initialization techniques for coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by shifting the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, gives more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.

    Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. The ocean's memory, owing to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques for coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. 
Evaluating the whole ensemble via its ensemble mean, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by shifting the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the ensemble average during the model run and is called the ensemble dispersion filter, gives more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.

  20. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise is multiplicative in nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance, and reduces the detectability of targets. Hence the need to preprocess the images with suitable filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The filters developed are: the refined Lee filter, based on minimum mean square error (MMSE) estimation; an improved sigma filter with detection of strong scatterers, based on the coherency matrix, which detects the different scatterers in order to preserve the polarization signature and maintain structures necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the sum of squared coefficients (SSC) technique for improving the wavelet coefficients; and the Turbo filter, a combination of two complementary filters (the refined Lee filter and the SWT) in which each can boost the results of the other. The originality of our work lies in applying these methods to several types of images (amplitude, intensity, and complex, from satellite and airborne radar) and in optimizing the wavelet filtering by adding a parameter to the threshold calculation. This parameter controls the filtering effect to achieve a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band, and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges, and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
  21. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    NASA Astrophysics Data System (ADS)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    Microseismic signals are typically weak compared with the strong background noise. In order to detect weak signals in microseismic data effectively, we propose an approach based on mathematical morphology. We decompose the initial data into several morphological multiscale components. To detect weak signals, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from its morphological multiscale components. The non-stationary weighting operator is obtained by solving an inverse problem. The regularized non-stationary method can be understood as a non-stationary matching filter, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis: the algorithm framework, parameter selection, and computational issues of the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. The technique is then applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The results demonstrate that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. We also show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals, and we discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.

  22. Plenoptic particle image velocimetry with multiple plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2018-07-01

    Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single-camera configuration, which allows the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution through the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, including the volumetric calibration and the reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the 'cigar'-like elongation associated with the single-camera system. In addition, it is found that adding a third camera provides minimal improvement. 
Further metrics of the reconstruction quality are quantified in terms of the reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single- and two-camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally to a ring vortex, and comparisons are drawn among the four presented reconstruction algorithms: MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability; filtered refocusing is able to produce the desired structure, albeit with more noise and variability; and integral refocusing struggled to produce a coherent vortex ring.

  23. Boundary implications for frequency response of interval FIR and IIR filters

    NASA Technical Reports Server (NTRS)

    Bose, N. K.; Kim, K. D.

    1991-01-01

    It is shown that vertex implication results in parameter space apply to interval trigonometric polynomials. Subsequently, it is shown that the frequency responses of both interval FIR and IIR filters are bounded by the frequency responses of certain extreme filters. The results apply directly in the evaluation of properties of designed filters, especially because it is more realistic to bound the filter coefficients from above and below than to determine them with infinite precision, owing to finite arithmetic effects. Illustrative examples show how the extreme filters can be derived in any specific interval FIR or IIR filter design problem.

  24. Insights about data assimilation frameworks for integrating GRACE with hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Van Dijk, Albert I. J. M.; Döll, Petra; Schuh, Wolf-Dieter

    2016-04-01

    Improving the understanding of changes in the water cycle is a challenging objective that requires merging information from various disciplines. Debate exists over selecting an appropriate assimilation technique to integrate GRACE-derived terrestrial water storage changes (TWSC) into hydrological models in order to downscale and disaggregate GRACE TWSC, overcome model limitations, and improve monitoring and forecast skill. Yet the effect of the specific data assimilation technique, in conjunction with ill-conditioning, colored noise, the resolution mismatch between GRACE and the model, and other complications, is still unclear. Due to their simplicity, ensemble Kalman filters or smoothers (EnKF/S) are often applied. In this study, we show that modification of the filter approach might open new avenues to improve the integration process. 
In particular, we discuss an improved calibration and data assimilation (C/DA) framework (Schumacher et al., 2016), which is based on the EnKF and was extended by the square root analysis scheme (SQRA) and the singular evolutive interpolated Kalman (SEIK) filter. In addition, we discuss an off-line data blending approach (Van Dijk et al., 2014) that offers the chance to merge multi-model ensembles with GRACE observations. The investigations include: (i) a theoretical comparison focusing on similarities and differences in the conceptual formulation of the filter algorithms; (ii) a practical comparison, in which the approaches were applied to an ensemble of runs of the WaterGAP Global Hydrology Model (WGHM); and (iii) an impact assessment of the GRACE error structure on C/DA results. First, a synthetic experiment over the Mississippi River Basin (USA) was used to gain insights into the C/DA set-up before applying it to real data. The results indicated promising performance of the alternative methods; e.g., applying the SEIK algorithm improved the correlation coefficient and root mean square error (RMSE) of TWSC by 0.1 and 6 mm, respectively, with respect to the EnKF. We successfully transferred our framework to the Murray-Darling Basin (Australia), one of the largest and driest river basins in the world. Finally, we provide recommendations on an optimal C/DA strategy for real GRACE data integration. References: Schumacher M, Kusche J, Döll P (2016): A systematic impact assessment of GRACE error correlation on data assimilation in hydrological models. J Geod. Van Dijk AIJM, Renzullo LJ, Wada Y, Tregoning P (2014): A global water cycle reanalysis (2003-2012) merging satellite gravimetry and altimetry observations with a hydrological multi-model ensemble. Hydrol Earth Syst Sci.

  25. Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection

    PubMed Central

    Regeling, Bianca; Thies, Boris; Gerstner, Andreas O. H.; Westermann, Stephan; Müller, Nina A.; Bendix, Jörg; Laffers, Wiebke

    2016-01-01

    Hyperspectral imaging (HSI) is increasingly gaining acceptance in the medical field. Up until now, HSI has been used in conjunction with rigid endoscopy to detect cancer in vivo. The logical next step is to pair HSI with flexible endoscopy, since it improves access to hard-to-reach areas. While the flexible endoscope's fiber optic cables provide the advantage of flexibility, they also introduce an interfering honeycomb-like pattern onto images. Due to the substantial impact this pattern has on locating cancerous tissue, it must be removed before the HS data can be further processed; in doing so, the loss of information must be minimized to avoid suppressing small-area variations of pixel values. We have developed a system that uses flexible endoscopy to record HS cubes of the larynx and designed a special filtering technique to remove the honeycomb-like pattern with minimal loss of information. We have confirmed its feasibility by comparing it to conventional filtering techniques using an objective metric and by applying unsupervised and supervised classifications to raw and pre-processed HS cubes. Compared to conventional techniques, our method successfully removes the honeycomb-like pattern and considerably improves classification performance, while preserving image details. 
PMID: 27529255

  26. Estimating Spectra from Photometry

    NASA Astrophysics Data System (ADS)

    Kalmbach, J. Bryce; Connolly, Andrew J.

    2017-12-01

    Measuring the physical properties of galaxies, such as redshift, frequently requires the use of spectral energy distributions (SEDs). SED template sets are, however, often small in number and cover limited portions of photometric color space. Here we present a new method to estimate SEDs as a function of color from a small training set of template SEDs. We first cover the mathematical background behind the technique, then demonstrate our ability to reconstruct spectra based upon colors, and compare our results to other common interpolation and extrapolation methods. When the photometric filters and spectra overlap, we show that the error in the estimated spectra is reduced by more than 65% compared to the more commonly used techniques. We also show an expansion of the method to wavelengths beyond the range of the photometric filters. Finally, we demonstrate the usefulness of our technique by generating 50 additional SED templates from an original set of 10 and by applying the new set to photometric redshift estimation. We are able to reduce the photometric redshift standard deviation by at least 22.0% and the outlier-rejected bias by over 86.2% compared to the original set for z ≤ 3.

  27. Low spatial frequency characterization of holographic recording materials applied to correlation

    NASA Astrophysics Data System (ADS)

    Márquez, A.; Neipp, C.; Beléndez, A.; Campos, J.; Pascual, I.; Yzuel, M. J.; Fimia, A.

    2003-09-01

    Accurate recording of computer-generated holograms (CGHs) on a phase material is not a trivial task. The range of available phase materials is large, and their suitability depends on the fabrication technique chosen to produce the hologram. We are particularly interested in low-cost fabrication techniques easily available to any lab. In this work we present the results obtained with a wide variety of phase holographic recording materials, characterized at low spatial frequencies (≤ 32 lp mm-1), which is the range associated with the technique we use to produce the CGHs. We have considered bleached emulsion, silver halide sensitized gelatin (SHSG) and dichromated gelatin. Some interesting differences arise between the behaviour of these materials in the usual holographic range (> 1000 lp mm-1) and the low-frequency range intended for digital holography. The ultimate goal of this paper is to establish the suitability of different phase materials as media on which to generate correlation filters for optical pattern recognition. In all the materials considered, the phase filters generated ensure discrimination of the target in the recognition process. 
Taking into account all the experimental results, we can say that SHSG is the best material for generating phase CGHs with low spatial frequencies.

  28. Wavelet Analyses of F/A-18 Aeroelastic and Aeroservoelastic Flight Test Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    1997-01-01

    Time-frequency signal representations combined with subspace identification methods were used to analyze aeroelastic flight data from the F/A-18 Systems Research Aircraft (SRA) and aeroservoelastic data from the F/A-18 High Alpha Research Vehicle (HARV). The F/A-18 SRA data were produced by a wingtip excitation system that generated linear frequency chirps and logarithmic sweeps. HARV data were acquired from digital Schroeder-phased and sinc pulse excitation signals applied to actuator commands. Nondilated continuous Morlet wavelets implemented as a filter bank were chosen for the time-frequency analysis to eliminate the phase distortion that occurs with sliding-window discrete Fourier transform techniques. Wavelet coefficients were filtered to reduce the effects of noise and nonlinear distortions identically in all inputs and outputs. Cleaned reconstructed time-domain signals were used to compute improved transfer functions. Time- and frequency-domain subspace identification methods were applied to the enhanced reconstructed time-domain data and the improved transfer functions, respectively. Time-domain subspace identification performed poorly, even with the enhanced data, compared with frequency-domain techniques. 
A frequency-domain subspace method is shown to produce better results with the data processed using the Morlet time-frequency technique.

  29. Real-time vehicle noise cancellation techniques for gunshot acoustics

    NASA Astrophysics Data System (ADS)

    Ramos, Antonio L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald

    2012-06-01

    Acoustical sniper positioning systems rely on the detection and direction-of-arrival (DOA) estimation of the shockwave and the muzzle blast in order to provide an estimate of a potential sniper's location. Field tests have shown that detecting and estimating the DOA of the muzzle blast is a rather difficult task in the presence of background noise sources, e.g., vehicle noise, especially in long-range detection and over absorbing terrain. In our previous work, presented in the 2011 edition of this conference, we highlighted the importance of improving the SNR of gunshot signals prior to the detection and recognition stages, aiming at lowering the false alarm and missed-detection rates and thereby increasing the reliability of the system. This paper reports on real-time noise cancellation techniques, such as spectral subtraction and adaptive filtering, applied to gunshot signals. Our model assumes the background noise to be short-time stationary and uncorrelated with the impulsive gunshot signals. In practice, relatively long periods without signal occur and can be used to estimate the noise spectrum and its first- and second-order statistics, as required by the spectral subtraction and adaptive filtering techniques, respectively. 
The results presented in this work are supported by extensive simulations based on real data.

  30. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.

    2002-01-01

    The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale are presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of the TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration lengths. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. 
The technique is validated using available data sets and compared to other global rainfall products, such as the Global Precipitation Climatology Project (GPCP) IR product calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  31. Design of order statistics filters using feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu. S.; Bochkarev, V. V.

    2016-08-01

    In recent years significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics; the widely used median filter is the best-known order statistic filter. A generalized form of these filters can be presented based on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach to the synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used to select the initial weights of the neural network. The adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with asymmetric distribution functions. 
Different examples demonstrate the properties and performance of the presented approach.

  32. Monorail system for percutaneous repositioning of the Greenfield vena caval filter

    PubMed

    Guthaner, D. F.; Wyatt, J. O.; Mehigan, J. T.; Wright, A. M.; Breen, J. F.; Wexler, L.

    1990-09-01

    The authors describe a technique for removing or repositioning a malpositioned Greenfield inferior vena caval filter. A "monorail" system was used, in which a wire was passed from the femoral vein through the apical hole in the filter and out the internal jugular vein; the wire was held taut from above and below and thus facilitated repositioning or removal of the filter. The technique was used successfully in two cases.

  33. Recent results of nonlinear estimators applied to hereditary systems

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    An application of the extended Kalman filter to delayed systems to estimate the state and time delay is presented. 
Two nonlinear estimators are discussed and the results compared with those of the Kalman filter. For all the filters considered, the hereditary system was treated with the delay in the pure form and by using Pade approximations of the delay. A summary of the convergence properties of the filters studied is given. The results indicate that the linear filter applied to the delayed system performs inadequately while the nonlinear filters provide reasonable estimates of both the state and the parameters.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22679301-th-cd-evaluation-virtual-non-contrast-images-from-novel-split-filter-dual-energy-ct-technique','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22679301-th-cd-evaluation-virtual-non-contrast-images-from-novel-split-filter-dual-energy-ct-technique"><span>TH-CD-202-04: Evaluation of Virtual Non-Contrast Images From a Novel Split-Filter Dual-Energy CT Technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Huang, J; Szczykutowicz, T; Bayouth, J</p> <p></p> <p>Purpose: To compare the ability of two dual-energy CT techniques, a novel split-filter single-source technique of superior temporal resolution against an established sequential-scan technique, to remove iodine contrast from images with minimal impact on CT number accuracy. Methods: A phantom containing 8 tissue substitute materials and vials of varying iodine concentrations (1.7–20.1 mg I /mL) was imaged using a Siemens Edge CT scanner. Dual-energy virtual non-contrast (VNC) images were generated using the novel split-filter technique, in which a 120kVp spectrum is filtered by tin and gold to create high- and low-energy spectra with < 1 second temporal separation between the acquisition of low- and high-energy data. 
Additionally, VNC images were generated with the sequential-scan technique (80 and 140kVp) for comparison. CT number accuracy was evaluated for all materials at 15, 25, and 35mGy CTDIvol. Results: The spectral separation was greater for the sequential-scan technique than the split-filter technique with dual-energy ratios of 2.18 and 1.26, respectively. Both techniques successfully removed iodine contrast, resulting in mean CT numbers within 60HU of 0HU (split-filter) and 40HU of 0HU (sequential-scan) for all iodine concentrations. Additionally, for iodine vials of varying diameter (2–20 mm) with the same concentration (9.9 mg I /mL), the system accurately detected iodine for all sizes investigated. Both dual-energy techniques resulted in reduced CT numbers for bone materials (by >400HU for the densest bone). Increasing the imaging dose did not improve the CT number accuracy for bone in VNC images. Conclusion: VNC images from the split-filter technique successfully removed iodine contrast. These results demonstrate a potential for improving dose calculation accuracy and reducing patient imaging dose, while achieving superior temporal resolution in comparison to sequential scans. For both techniques, inaccuracies in CT numbers for bone materials necessitate consideration for radiation therapy treatment planning.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19730042210&hterms=attention+pictures&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dattention%2Bpictures','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19730042210&hterms=attention+pictures&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dattention%2Bpictures"><span>Video-signal improvement using comb filtering techniques.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Arndt, G. 
D.; Stuber, F. M.; Panneton, R. J.</p> <p>1973-01-01</p> <p>Significant improvement in the signal-to-noise performance of television signals has been obtained through the application of comb filtering techniques. This improvement is achieved by removing the inherent redundancy in the television signal through linear prediction and by utilizing the unique noise-rejection characteristics of the receiver comb filter. Theoretical and experimental results describe the signal-to-noise ratio and picture-quality improvement obtained through the use of baseband comb filters and the implementation of a comb network as the loop filter in a phase-lock-loop demodulator. Attention is given to the fact that noise becomes correlated when processed by the receiver comb filter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1952b0072D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1952b0072D"><span>Denoising and segmentation of retinal layers in optical coherence tomography images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dash, Puspita; Sigappi, A. N.</p> <p>2018-04-01</p> <p>Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnostics of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. Hence, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting retinal layers in OCT images. These techniques are used for segmentation of retinal layers in normal subjects as well as patients with Diabetic Macular Edema. 
The algorithm based on gradient information and shortest path search is applied to optimize the edge selection. In this paper, the four main layers of the retina are segmented, namely the Internal limiting membrane (ILM), Retinal pigment epithelium (RPE), Inner nuclear layer (INL) and Outer nuclear layer (ONL). The proposed method is applied to a database of OCT images from ten normal and twenty DME-affected patients and the results are found to be promising.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4791633','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4791633"><span>Interface Engineering to Create a Strong Spin Filter Contact to Silicon</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Caspers, C.; Gloskovskii, A.; Gorgoi, M.; Besson, C.; Luysberg, M.; Rushchanskii, K. Z.; Ležaić, M.; Fadley, C. S.; Drube, W.; Müller, M.</p> <p>2016-01-01</p> <p>Integrating epitaxial and ferromagnetic Europium Oxide (EuO) directly on silicon is a perfect route to enrich silicon nanotechnology with spin filter functionality. To date, the inherent chemical reactivity between EuO and Si has prevented a heteroepitaxial integration without significant contamination of the interface with Eu silicides and Si oxides. We present a solution to this long-standing problem by applying two complementary passivation techniques for the reactive EuO/Si interface: (i) an in situ hydrogen-Si (001) passivation and (ii) the application of oxygen-protective Eu monolayers–without using any additional buffer layers. 
By careful chemical depth profiling of the oxide-semiconductor interface via hard x-ray photoemission spectroscopy, we show how to systematically minimize both Eu silicide and Si oxide formation to the sub-monolayer regime–and how to ultimately interface-engineer chemically clean, heteroepitaxial and ferromagnetic EuO/Si (001) in order to create a strong spin filter contact to silicon. PMID:26975515</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018InvPr..34g5008I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018InvPr..34g5008I"><span>Ensemble-marginalized Kalman filter for linear time-dependent PDEs with noisy boundary conditions: application to heat transfer in building walls</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Iglesias, Marco; Sawlan, Zaid; Scavino, Marco; Tempone, Raúl; Wood, Christopher</p> <p>2018-07-01</p> <p>In this work, we present the ensemble-marginalized Kalman filter (EnMKF), a sequential algorithm analogous to our previously proposed approach (Ruggeri et al 2017 Bayesian Anal. 12 407–33, Iglesias et al 2018 Int. J. Heat Mass Transfer 116 417–31), for estimating the state and parameters of linear parabolic partial differential equations in initial-boundary value problems when the boundary data are noisy. We apply EnMKF to infer the thermal properties of building walls and to estimate the corresponding heat flux from real and synthetic data. Compared with a modified ensemble Kalman filter (EnKF) that is not marginalized, EnMKF reduces the bias error, avoids the collapse of the ensemble without needing to add inflation, and converges to the mean field posterior using a fraction of the ensemble size required by EnKF. 
According to our results, the marginalization technique in EnMKF is key to performance improvement with smaller ensembles at any fixed time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016FNL....1550017S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016FNL....1550017S"><span>A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen</p> <p>2016-06-01</p> <p>Electrocardiogram (ECG) signals might be affected by various artifacts and noises that have biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for removal of baseline noise from ECG is presented. Compared to other EMD-based methods, the novelty of this research is to determine the optimal number of decomposition levels for ECG BW de-noising using the mean power frequency (MPF), while also reducing processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with a cut-off frequency of 0.5 Hz and a wavelet approach are applied for comparison. Three performance indices, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), between pure and filtered signals have been utilized to qualify the presented techniques. 
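The Butterworth reference filter named here can be sketched with scipy; a minimal illustration under assumed parameters (the sampling rate and test signal are ours, for demonstration only):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_baseline_wander(ecg, fs, cutoff=0.5, order=5):
    """Zero-phase fifth-order Butterworth high-pass filter (BHPF):
    attenuates slow baseline wander below the cutoff while leaving
    the cardiac content of the signal intact."""
    b, a = butter(order, cutoff / (fs / 2), btype="highpass")
    return filtfilt(b, a, ecg)  # forward-backward pass -> no phase distortion

fs = 360.0                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
wander = 0.5 * np.sin(2 * np.pi * 0.1 * t)   # 0.1 Hz respiratory-like drift
ecg_like = np.sin(2 * np.pi * 10 * t)        # stand-in for cardiac content
cleaned = remove_baseline_wander(ecg_like + wander, fs)
```

The zero-phase (forward-backward) pass matters for ECG: a causal filter would shift and distort the QRS complexes.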
Results suggest that the EMD-based method outperforms the other filtering methods.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5497477','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5497477"><span>Absorption/Transmission Measurements of PSAP Particle-Laden Filters from the Biomass Burning Observation Project (BBOP) Field Campaign</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Presser, Cary; Nazarian, Ashot; Conny, Joseph M.; Chand, Duli; Sedlacek, Arthur; Hubbe, John M.</p> <p>2017-01-01</p> 
<p>Absorptivity measurements with a laser-heating approach, referred to as the laser-driven thermal reactor (LDTR), were carried out in the infrared and applied at ambient (laboratory) non-reacting conditions to particle-laden filters from a three-wavelength (visible) particle/soot absorption photometer (PSAP). The particles were obtained during the Biomass Burning Observation Project (BBOP) field campaign. The focus of this study was to determine the particle absorption coefficient from field-campaign filter samples using the LDTR approach, and compare results with other commercially available instrumentation (in this case with the PSAP, which has been compared with numerous other optical techniques). Advantages of the LDTR approach include 1) direct estimation of material absorption from temperature measurements (as opposed to resolving the difference between the measured reflection/scattering and transmission), 2) information on the filter optical properties, and 3) identification of the filter material effects on particle absorption (e.g., leading to particle absorption enhancement or shadowing). For measurements carried out under ambient conditions, the particle absorptivity is obtained with a thermocouple placed flush with the filter back surface and the laser probe beam impinging normal to the filter particle-laden surface. Thus, in principle one can employ a simple experimental arrangement to measure simultaneously both the transmissivity and absorptivity (at different discrete wavelengths) and ascertain the particle absorption coefficient. For this investigation, LDTR measurements were carried out with PSAP filters (pairs with both blank and exposed filters) from eight different days during the campaign, having relatively light but different particle loadings. The observed particles coating the filters were found to be carbonaceous (having broadband absorption characteristics). The LDTR absorption coefficient compared well with results from the PSAP. 
The analysis was also expanded to account for the filter fiber scattering on particle absorption in assessing particle absorption enhancement and shadowing effects. The results indicated that absorption enhancement effects were significant, and diminished with increased filter particle loading. PMID:28690360</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19780028919&hterms=sonar&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dsonar','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19780028919&hterms=sonar&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dsonar"><span>Computer image processing in marine resource exploration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.</p> <p>1976-01-01</p> <p>Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. 
The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..328a2011S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..328a2011S"><span>Wear Detection of Drill Bit by Image-based Technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul</p> <p>2018-03-01</p> <p>Image processing for computer vision plays an essential role in the manufacturing industries for tool condition monitoring. This study proposes a dependable direct measurement method to measure tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter the colour image and convert it to binary data. Then, the edge detection method was applied to characterize the edge of the drill bit. Using the cross-correlation method, the edges of the original and worn drill bits were correlated with each other. Cross-correlation graphs were able to detect the worn edge despite the small difference between the graphs. 
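The cross-correlation step can be sketched as follows; a minimal numpy illustration with synthetic edge profiles (all names and numbers are ours, not measured data):

```python
import numpy as np

def edge_similarity(profile_a, profile_b):
    """Peak of the normalized cross-correlation between two 1-D edge
    profiles; 1.0 means identical shape, lower values suggest wear."""
    a = (profile_a - profile_a.mean()) / (profile_a.std() * len(profile_a))
    b = (profile_b - profile_b.mean()) / profile_b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

x = np.linspace(0, 1, 200)
sharp = np.abs(x - 0.5)                                  # idealized sharp cutting edge
worn = np.abs(x - 0.5) + 0.05 * np.sin(8 * np.pi * x)    # chipped/worn edge
print(edge_similarity(sharp, sharp))   # identical profiles -> 1.0
print(edge_similarity(sharp, worn))    # worn profile -> below 1.0
```

Searching over all lags (mode="full") also tolerates small misalignments between the two images before comparing edge shapes.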
Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080008696','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20080008696"><span>Tunable resonator-based devices for producing variable delays and narrow spectral linewidths</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Savchenkov, Anatoliy (Inventor); Maleki, Lutfollah (Inventor); Matsko, Andrey B. (Inventor); Ilchenko, Vladimir (Inventor)</p> <p>2006-01-01</p> <p>Devices with two or more coupled resonators that produce narrow spectral responses, due to interference of signals transmitted through the resonators, are described, along with techniques for operating such devices to achieve certain operating characteristics. The devices may be optical devices where optical resonators such as whispering gallery mode resonators may be used. In one implementation, at least one of the coupled optical resonators is a tunable resonator and is tuned to change its resonance frequency to tune the spectral response of the device. 
The described devices and techniques may be applied in optical filters, optical delays, optical waveform generators, and other applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6060E..0QB','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6060E..0QB"><span>Focus-based filtering + clustering technique for power-law networks with small world phenomenon</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Boutin, François; Thièvre, Jérôme; Hascoët, Mountaz</p> <p>2006-01-01</p> <p>Realistic interaction networks usually present two main properties: a power-law degree distribution and a small world behavior. Few nodes are linked to many nodes and adjacent nodes are likely to share common neighbors. Moreover, the graph structure usually presents a dense core that is difficult to explore with classical filtering and clustering techniques. In this paper, we propose a new filtering technique accounting for a user-focus. This technique extracts a tree-like graph that also has a power-law degree distribution and small world behavior. The resulting structure is easily drawn with classical force-directed drawing algorithms. It is also quickly clustered and displayed as a multi-level silhouette tree (MuSi-Tree) from any user-focus. 
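One plausible reading of the focus-based extraction, reduced to its simplest form, is a breadth-first spanning tree rooted at the focus node; the sketch below is our simplification, not the authors' MuSi-Tree algorithm:

```python
from collections import deque

def focus_tree(adjacency, focus):
    """Extract a BFS spanning tree rooted at the user's focus node:
    every reachable node keeps exactly one edge, to its BFS parent,
    so a dense graph collapses into a tree-like view."""
    parent = {focus: None}
    queue = deque([focus])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, ()):
            if neighbor not in parent:      # first visit wins -> tree edge
                parent[neighbor] = node
                queue.append(neighbor)
    return parent

# Small graph with a hub "a" (power-law-ish) plus extra cross links
graph = {"a": ["b", "c", "d"], "b": ["a", "c"],
         "c": ["a", "b", "d"], "d": ["a", "c"]}
tree = focus_tree(graph, "a")   # cross links b-c and c-d are dropped
```

Changing the focus node yields a different tree over the same graph, which is the point of a focus-dependent view.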
We built a new graph filtering + clustering + drawing API and report a case study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AtmEn..38.3373L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AtmEn..38.3373L"><span>Multiwavelength absorbance of filter deposits for determination of environmental tobacco smoke and black carbon</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lawless, Phil A.; Rodes, Charles E.; Ensor, David S.</p> <p></p> <p>A multiwavelength optical absorption technique has been developed for Teflon filters used for personal exposure sampling with sufficient sensitivity to allow apportionments of environmental tobacco smoke and soot (black) carbon to be made. Measurements on blank filters show that the filter material itself contributes relatively little to the total absorbance and filters from the same lot have similar characteristics; this makes retrospective analysis of filters quite feasible. Using an integrating sphere radiometer and multiple wavelengths to provide specificity, the determination of tobacco smoke and carbon with reasonable accuracy is possible on filters not characterized before exposure. This technique provides a low cost, non-destructive exposure assessment alternative to both standard thermo-gravimetric elemental carbon evaluations on quartz filters and cotinine analyses from urine or saliva samples. 
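The apportionment step such a multiwavelength technique relies on can be sketched as linear unmixing: with known per-unit absorbance spectra for the two species, solve a small least-squares problem for their loadings. Both reference spectra below are invented for demonstration:

```python
import numpy as np

# Hypothetical per-unit absorbance of each species at four wavelengths
ets_spectrum = np.array([0.9, 0.6, 0.3, 0.1])   # tobacco smoke: falls with wavelength
bc_spectrum = np.array([0.5, 0.5, 0.5, 0.5])    # black carbon: nearly flat

def apportion(measured):
    """Least-squares split of a measured absorbance spectrum into
    ETS and black-carbon loadings."""
    A = np.column_stack([ets_spectrum, bc_spectrum])
    loadings, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return loadings  # [ets_amount, bc_amount]

measured = 2.0 * ets_spectrum + 3.0 * bc_spectrum   # synthetic mixture
print(apportion(measured))   # recovers the mixing amounts, ~[2.0, 3.0]
```

The unmixing only works because the two spectra have different shapes, which is why multiple wavelengths are needed for specificity.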
The method allows the same sample filter to be used for assessment of mass, carbon, and tobacco smoke without affecting the deposit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29063091','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29063091"><span>A baseline drift detrending technique for fast scan cyclic voltammetry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>DeWaele, Mark; Oh, Yoonbae; Park, Cheonho; Kang, Yu Min; Shin, Hojin; Blaha, Charles D; Bennet, Kevin E; Kim, In Young; Lee, Kendall H; Jang, Dong Pyo</p> <p>2017-11-06</p> <p>Fast scan cyclic voltammetry (FSCV) has been commonly used to measure extracellular neurotransmitter concentrations in the brain. Due to the unstable nature of the background currents inherent in FSCV measurements, analysis of FSCV data is limited to very short amounts of time using traditional background subtraction. In this paper, we propose the use of a zero-phase high pass filter (HPF) as the means to remove the background drift. Instead of the traditional method of low pass filtering across voltammograms to increase the signal to noise ratio, a HPF with a low cutoff frequency was applied to the temporal dataset at each voltage point to remove the background drift. As a result, the HPF utilizing cutoff frequencies between 0.001 Hz and 0.01 Hz could be effectively applied to a set of FSCV data to remove the drifting patterns while preserving the temporal kinetics of the phasic dopamine response recorded in vivo. 
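The per-voltage-point zero-phase filtering described here can be sketched with scipy; the filter order, sampling rate, and data below are our assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def hpf_detrend(voltammograms, fs, cutoff=0.005, order=2):
    """Remove slow background drift from FSCV data by zero-phase
    high-pass filtering along the time axis, applied independently at
    each voltage point (rows = scans over time, columns = voltages)."""
    b, a = butter(order, cutoff / (fs / 2), btype="highpass")
    return filtfilt(b, a, voltammograms, axis=0)  # zero phase preserves kinetics

# 10 scans/s for 60 s, 4 voltage points, constant background current
fs = 10.0
background = np.full((600, 4), 7.0)     # pure drift-free background
detrended = hpf_detrend(background, fs)  # constant component removed, ~0
```

The cutoff sits well below the time scale of phasic dopamine transients, so the drift is removed while the transients pass through.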
In addition, the HPF was found to be significantly more effective in reducing the drift than a removal method based on principal component analysis (unpaired t-test, p < 0.0001, t = 10.88) when applied to data collected from Tris buffer over 24 hours, although the principal component analysis method also reduced the background drift effectively. The HPF was also applied to 5 hours of FSCV in vivo data. Electrically evoked dopamine peaks, observed in the nucleus accumbens, were clearly visible even without background subtraction. This technique provides a new, simple, yet robust approach to analyse FSCV data with an unstable background.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1993SPIE.2094.1300W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1993SPIE.2094.1300W"><span>Wavelets for sign language translation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wilson, Beth J.; Anspach, Gretel</p> <p>1993-10-01</p> <p>Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. 
The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28332375','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28332375"><span>Quarter-Shifted Microincisional Sutureless Vitrectomy in Patients with a Glaucoma Drainage Implant or Filtering Bleb.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Song, Ji Hun; Jang, Seran; Cho, Eun Hyung; Ahn, Jaehong</p> <p>2017-05-01</p> <p>When vitrectomy is performed in eyes that have undergone glaucoma surgery, the site of sclerotomy often overlaps with the previous glaucoma operation site. This overlap can lead to serious complications such as postoperative hypotony, leakage, and/or infection. Our technique involves modifying the surgeon's position and placing the two sclerotomy sites 45° away from the original position, with an infusion cannula inserted infranasally to avoid damage to the glaucoma drainage implant or filtering bleb. The modified approach was applied to seven eyes with various indications. Vitrectomy was successfully completed, and there were no sclerotomy site complications, leakage, or hypotony in any case. Good intraocular pressure control was maintained throughout the postoperative course in all cases. 
© Copyright: Yonsei University College of Medicine 2017.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9914E..21C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9914E..21C"><span>Strategies on solar observation of Atacama Large Millimeter/submillimeter Array (ALMA) band-1 receiver</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chiong, Chau-Ching; Chiang, Po-Han; Hwang, Yuh-Jing; Huang, Yau-De</p> <p>2016-07-01</p> <p>ALMA, covering 35-950 GHz, is the largest existing telescope array in the world. Among the 10 receiver bands, Band-1, which covers 35-50 GHz, is the lowest. Due to its small dimension and its time-variant frequency-dependent gain characteristics, the current solar filter located above the cryostat cannot be applied to Band-1 for solar observation. Here we thus adopt new strategies to fulfill this goal. Thanks to the flexible dc biasing scheme of the HEMT-based amplifier in the Band-1 front-end, bias adjustment of the cryogenic low noise amplifier is investigated to accomplish solar observation without using a solar filter. A large power handling range can be achieved by the de-tuning bias technique with little degradation in system performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE10174E..0GP','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE10174E..0GP"><span>Metal-polymer nanocomposites for stretchable optics and plasmonics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Potenza, Marco A. 
C.; Minnai, Chloé; Milani, Paolo</p> <p>2016-12-01</p> <p>Stretchable and conformable optical devices open exciting prospects for the fabrication of systems incorporating diffracting and optical power in a single element, and of tunable plasmonic filters and absorbers. The use of nanocomposites, obtained by inserting metallic nanoparticles produced in the gas phase into polymeric matrices, makes it possible to fabricate cheap and simple stretchable optical elements able to withstand thousands of deformation and stretching cycles without any degradation of their optical properties. The nanocomposite-based reflective optical devices show excellent performance and stability compared to similar devices fabricated with standard techniques. The nanocomposite-based devices can therefore be applied to arbitrarily curved, non-optical-grade surfaces in order to achieve optical power and to minimize aberrations such as astigmatism. Examples discussed here include stretchable reflecting gratings, plasmonic filters tunable by mechanical stretching, and light absorbers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28039890','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28039890"><span>Simultaneous separation and determination of 15 organic UV filters in sunscreen cosmetics by HPLC-ESI-MS/MS.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Meng, X; Ma, Q; Bai, H; Wang, Z; Han, C; Wang, C</p> <p>2017-08-01</p> <p>A comprehensive methodology for the simultaneous determination of 15 multiclass organic UV filters in sunscreen cosmetics was developed using high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). Sunscreen cosmetics of various matrices, such as toning lotion, emulsion, cream and lipstick, were analysed. 
Ultrasound-assisted extraction (UAE) was utilized as the extraction technique for sample preparation. The 15 UV filters were chromatographically separated using two mobile-phase systems on an XBridge C18 analytical column (150 × 2.1 mm I.D., 3.5 μm particle size) and quantified using HPLC-ESI-MS/MS. The quantitation was performed using the external calibration method. The established method was validated in terms of linearity, sensitivity, specificity, accuracy, stability, intraday and interday precision, recovery and matrix effect. The method was also applied for the determination of UV filters in commercial sunscreen cosmetics. The experimental results demonstrated that the developed method was accurate, rapid and sensitive, and can be used for the analytical control of sunscreen cosmetics. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9413E..26N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9413E..26N"><span>Intensity transform and Wiener filter in measurement of blood flow in arteriography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nunes, Polyana F.; Franco, Marcelo L. N.; Filho, João. B. D.; Patrocínio, Ana C.</p> <p>2015-03-01</p> <p>Arteriography makes it possible to detect anomalies in blood vessels and diseases such as stroke, stenosis and bleeding, and is especially useful in the diagnosis of encephalic death in comatose individuals. Encephalic death can be diagnosed only when there is complete interruption of all brain functions, and hence of the blood stream. 
During the examination, interference from sources such as environmental factors, poorly maintained equipment and patient movement can directly affect the noise present in angiography images. Digital image processing techniques are therefore needed to minimize this noise and improve the pixel count. This paper proposes using a median filter, and an intensity-transform enhancement based on the sigmoid function together with the Wiener filter, to obtain less noisy images. Two filtering techniques were implemented to remove image noise: one using the median filter, the other using the Wiener filter together with the sigmoid function. Across 14 quantified tests, comprising 7 encephalic-death and 7 other cases, the technique that achieved the most satisfactory number of quantified pixels, while also presenting less noise, was the Wiener filter with the sigmoid function, used here with a 0.03 cutoff.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA619486','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA619486"><span>Applying Cooperative Localization to Swarm UAVS Using an Extended Kalman Filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2014-09-01</p> <p>computational power, communications bandwidth, and payload space. Therefore, techniques used to improve positional accuracy must account for these...Chapter 3, an implementation of CL using an EKF in a two-dimensional (2-D) space of motion is presented, and further modified to allow for CL in UAVs...known, two points are possible of which one will be far out in space and can be eliminated. Thus, the point on the surface has been determined. 
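The DTIC snippet above describes cooperative localization with an extended Kalman filter in a 2-D space of motion. A minimal single-vehicle EKF sketch, assuming a constant-velocity motion model and a range measurement to a known landmark (both illustrative choices, not the report's actual models):

```python
import numpy as np

def ekf_step(x, P, z, landmark, dt=1.0, q=0.01, r=0.1):
    """One EKF predict/update cycle for state x = [px, py, vx, vy]."""
    # Linear constant-velocity prediction.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    x_pred = F @ x
    P_pred = F @ P @ F.T + q * np.eye(4)

    # Nonlinear range measurement h(x) = ||p - landmark||, linearized via its Jacobian.
    dx, dy = x_pred[0] - landmark[0], x_pred[1] - landmark[1]
    rng = np.hypot(dx, dy)
    H = np.array([[dx / rng, dy / rng, 0.0, 0.0]])
    S = H @ P_pred @ H.T + r            # scalar innovation covariance
    K = P_pred @ H.T / S                # Kalman gain, shape (4, 1)
    x_new = x_pred + (K * (z - rng)).ravel()
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

After one update with a range shorter than predicted, the position estimate moves toward the landmark and the covariance along the measured direction shrinks.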
Over the</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26795617','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26795617"><span>IVC filter retrieval in adolescents: experience in a tertiary pediatric center.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guzman, Anthony K; Zahra, Mahmoud; Trerotola, Scott O; Raffini, Leslie J; Itkin, Maxim; Keller, Marc S; Cahill, Anne Marie</p> <p>2016-04-01</p> <p>Inferior vena cava (IVC) filters are commonly implanted with the intent to prevent life-threatening pulmonary embolism in at-risk patients with contraindications to anticoagulation. Various studies have reported increases in the rate of venous thromboembolism within the pediatric population. The utility and safety of IVC filters in children has not yet been fully defined. To describe the technique and adjunctive maneuvers of IVC filter removal in children, demonstrate its technical success and identify complications. A retrospective 10-year review was performed of 20 children (13 male, 7 female), mean age: 15.1 years (range: 12-19 years), who underwent IVC filter retrieval. Eleven of 20 (55%) were placed in our institution. Electronic medical records were reviewed for filter characteristics, retrieval technique, technical success and complications. The technical success rate was 100%. Placement indications included: deep venous thrombosis with a contraindication to anticoagulation (10/20, 50%), free-floating thrombus (4/20, 20%), post-trauma pulmonary embolism prophylaxis (3/20, 15%) and pre-thrombolysis pulmonary patient (1/20, 5%). The mean implantation period was 63 days (range: 20-270 days). Standard retrieval was performed in 17/20 patients (85%). 
Adjunctive techniques were performed in 3/20 patients (15%) and included the double-snare technique, balloon assistance and endobronchial forceps retrieval. Median procedure time was 60 min (range: 45-240 min). Pre-retrieval cavogram demonstrated filter tilt in 5/20 patients (25%) with a mean angle of 17° (range: 8-40). Pre-retrieval CT demonstrated strut wall penetration and tip embedment in one patient each. There were two procedure-related complications: IVC mural dissection noted on venography in one patient and snare catheter fracture requiring retrieval in one patient. There were no early or late complications. In children, IVC filter retrieval can be performed safely but may be challenging, especially in cases of filter tilt or embedding. Adjunctive techniques may increase filter retrieval rates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MeScT..28e5202R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MeScT..28e5202R"><span>Velocity interferometer signal de-noising using modified Wiener filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rav, Amit; Joshi, K. D.; Roy, Kallol; Kaushik, T. C.</p> <p>2017-05-01</p> <p>The accuracy and precision of the non-contact velocity interferometer system for any reflector (VISAR) depend not only on good optical design and a linear optical-to-electrical conversion system, but also on accurate and robust post-processing techniques. The performance of these techniques, such as the phase unwrapping algorithm, depends on the signal-to-noise ratio (SNR) of the recorded signal. In the present work, a novel method of improving the SNR of the recorded VISAR signal, based on knowledge of the noise characteristics of the signal conversion and recording system, is presented. 
The proposed method uses a modified Wiener filter, for which the signal power spectrum estimate is obtained using a spectral subtraction method (SSM), and the noise power spectrum estimate is obtained by averaging the recorded signal over the period when no target movement is expected. Since the noise power spectrum estimate is dynamic in nature and is obtained individually for each experimental record, the resulting improvement in signal quality is high. The proposed method is applied to simulated standard signals and is not only found to be better than the SSM, but is also less sensitive to the selection of the noise floor during signal power spectrum estimation. Finally, the proposed method is applied to the recorded experimental signal and an improvement in the SNR is reported.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19746797','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19746797"><span>Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Saito, Masatoshi</p> <p>2009-08-01</p> <p>Dual-energy computed tomography (DECT) has the potential for measuring electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can be used to precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, another DECT technique involving a balanced filter method using a conventional x-ray tube is described. 
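Returning to the VISAR de-noising record above: its spectral-subtraction Wiener filter can be sketched minimally as follows, assuming for simplicity that the quiet (no-motion) noise record has the same length as the signal (the paper instead averages over the no-motion period):

```python
import numpy as np

def ssm_wiener(signal, quiet_noise):
    """Wiener filter with a spectral-subtraction signal-power estimate.

    quiet_noise: a recording with no target motion, same length as signal
    (a simplifying assumption for this sketch).
    """
    X = np.fft.rfft(signal)
    N = np.abs(np.fft.rfft(quiet_noise)) ** 2      # noise power spectrum estimate
    S = np.maximum(np.abs(X) ** 2 - N, 0.0)        # spectral subtraction: clip at zero
    W = S / (S + N + 1e-12)                        # Wiener gain in [0, 1] per bin
    return np.fft.irfft(W * X, len(signal))
```

Bins dominated by noise get a gain near zero, while bins where the signal power clearly exceeds the noise estimate pass almost unchanged.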
For the spectral optimization of DECT using balanced filters, the author calculates beam-hardening error and air kerma required to achieve a desired noise level in electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimized parameters were applied to cases with different phantom diameters ranging from 5 to 50 cm for the calculations. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over a range of diameters of the water phantom. The beam-hardening errors were 1/10 or less as compared to those obtained by conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and the tube-loading efficiency, the present DECT using balanced filters would be significantly more effective in measuring the electron density than the conventional DECT. 
Nevertheless, further development of low-exposure imaging technology, as well as x-ray tubes with higher output, will be necessary to apply DECT coupled with the balanced filter method in clinical use.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28254082','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28254082"><span>ECG artifact cancellation in surface EMG signals by fractional order calculus application.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B</p> <p>2017-03-01</p> <p>New aspects of automatic electrocardiography artifact removal from surface electromyography signals by application of fractional order calculus in combination with linear and nonlinear moving-window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike-shaped artifacts. This artifact originates from electrical heart activity and is commonly present in surface electromyography signals recorded in proximity to the heart. For appropriate assessment of neuromuscular changes by means of surface electromyography, proper filtering of the electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals, applying fractional order calculus and a nonlinear median filter, is introduced. The proposed method is compared with the linear moving average filter, with and without prior application of fractional order calculus. 3D graphs for assessment of the window lengths of the filters, crest factors, root mean square differences, and fractional calculus orders (called WFC and WRC graphs) have been introduced. 
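The fractional-order-calculus-plus-median-filter combination described in the ECG artifact abstract above can be sketched as below. The Grünwald-Letnikov construction, the order 0.9 (the abstract's best-performing order), the memory length and the window width are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def gl_fractional_diff(x, alpha=0.9, memory=64):
    # Grünwald-Letnikov coefficients: c_0 = 1, c_k = c_{k-1} * (k - 1 - alpha) / k.
    c = np.ones(memory)
    for k in range(1, memory):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    # Causal fractional difference: y[n] = sum_k c[k] * x[n - k].
    return np.convolve(x, c)[:len(x)]

def median_filter(x, width=9):
    # Sliding-window median with edge padding (the nonlinear filter of the abstract).
    pad = width // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + width]) for i in range(len(x))])

def remove_ecg_artifact(emg, alpha=0.9, width=9):
    return median_filter(gl_fractional_diff(emg, alpha), width)
```

A useful sanity check: at alpha = 1 the Grünwald-Letnikov coefficients reduce to [1, -1, 0, ...], i.e. the ordinary first difference, while the median window suppresses isolated ECG-like spikes.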
For an appropriate quantitative evaluation of the filtering, a synthetic electrocardiography signal and an analogous semi-synthetic dataset were generated. Examples of noise removal in 10 able-bodied subjects and in one patient with muscular dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root mean square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional order calculus of order 0.9. A statistically significant (p < 0.001) greater reduction of ECG peaks was obtained with the median filter than with the moving average filter in cases where the amplitude of muscle contraction was low relative to the ECG spikes. The presented results suggest that the novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in surface electromyography signal envelopes recorded over trunk muscles. Copyright © 2017 Elsevier Ireland Ltd. 
All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19687563','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19687563"><span>A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD).</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Suzuki, Kenji</p> <p>2009-09-21</p> <p>Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. 
A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. 
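The conventional Hessian-based blob-enhancement filter that the MTANN abstract above contrasts itself with can be sketched generically: a bright sphere-like (blob) pixel has two negative Hessian eigenvalues, so each pixel is scored by their product when both are negative. This is a textbook-style illustration, not the paper's CAD scheme:

```python
import numpy as np

def blob_enhance(img):
    """Score bright blob-like structures in a 2-D image via Hessian eigenvalues."""
    gy, gx = np.gradient(img)          # first derivatives (axis 0, axis 1)
    hyy, hyx = np.gradient(gy)         # second derivatives of gy
    hxy, hxx = np.gradient(gx)         # second derivatives of gx
    # Eigenvalues of the 2x2 symmetric Hessian at each pixel.
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    # Both eigenvalues negative -> bright blob; score by their product.
    return np.where((l1 < 0) & (l2 < 0), l1 * l2, 0.0)
```

On a synthetic Gaussian bump the response peaks at the blob center, which is exactly the behavior the abstract notes breaks down for irregular, inhomogeneous real nodules.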
Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JGE....14..920C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JGE....14..920C"><span>Unscented Kalman filter assimilation of time-lapse self-potential data for monitoring solute transport</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong</p> <p>2017-08-01</p> <p>Monitoring the extent and evolution of contaminant plumes in local and regional groundwater systems from existing landfills is critical in contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we built a Kalman filtering cycle to conduct time-lapse data assimilation for monitoring solute transport, using a solute transport experiment on a bench-scale physical model. The data assimilation combines evolution modeling based on a random-walk model with observation correction based on the self-potential forward model. Monitoring self-potential data can thus be inverted by the data assimilation technique. As a result, we can reconstruct the dynamic process of the contaminant plume instead of using traditional frame-to-frame static inversion, which may cause inversion artifacts. The data assimilation inversion algorithm was evaluated on noise-added synthetic time-lapse self-potential data. The results of the numerical experiment demonstrate the validity, accuracy and noise tolerance of the dynamic inversion. 
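The unscented transform at the heart of the assimilation cycle above propagates deterministically chosen sigma points through a nonlinear map instead of linearizing it. A minimal sketch, using the basic Julier-Uhlmann sigma-point weighting (an assumption; the paper does not spell out its parameterization):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through nonlinearity f via 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)     # sigma-point spread
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])          # propagated sigma points
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear map the transform is exact: the recovered mean and covariance equal A·mean and A·cov·Aᵀ, which makes a convenient correctness check.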
To validate the proposed algorithm, we conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimics the real-world contaminant monitoring setup. The results of physical experiments support the idea that the data assimilation method is a potentially useful approach for characterizing the transport of contamination plumes using the unscented Kalman filter (UKF) data assimilation technique applied to field time-lapse self-potential data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910033293&hterms=Lower+class&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DLower%2Bclass','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910033293&hterms=Lower+class&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DLower%2Bclass"><span>A class of systolizable IIR digital filters and its design for proper scaling and minimum output roundoff noise</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lei, Shaw-Min; Yao, Kung</p> <p>1990-01-01</p> <p>A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. 
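The l2-norm scaling used in the IIR filter design abstract above can be illustrated on a generic direct-form filter (not the proposed systolizable structure): compute the impulse response seen at a node and scale so that its l2 norm is unity. The one-pole example is a hypothetical choice for demonstration:

```python
import numpy as np

def impulse_response(b, a, n=512):
    """Impulse response of a direct-form IIR filter; a[0] is assumed to be 1."""
    x = np.zeros(n)
    x[0] = 1.0
    y = np.zeros(n)
    for i in range(n):
        acc = sum(b[k] * x[i - k] for k in range(len(b)) if i - k >= 0)
        acc -= sum(a[k] * y[i - k] for k in range(1, len(a)) if i - k >= 0)
        y[i] = acc
    return y

# One-pole example: H(z) = 0.2 / (1 - 0.8 z^-1), so h[n] = 0.2 * 0.8^n.
h = impulse_response([0.2], [1.0, -0.8])
scale = 1.0 / np.sqrt(np.sum(h ** 2))   # factor that makes ||h||_2 = 1
```

Here ||h||₂² = 0.04 / (1 - 0.64) = 1/9, so the scaling factor is 3; after scaling, a unit-variance white input produces unit variance at the node, which is the sense in which the paper's internal signals are "scaled to unity".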
The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.epa.gov/air-emissions-monitoring-knowledge-base/monitoring-control-technique-fabric-filters','PESTICIDES'); return false;" href="https://www.epa.gov/air-emissions-monitoring-knowledge-base/monitoring-control-technique-fabric-filters"><span>Monitoring by Control Technique - Fabric Filters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>Stationary source emissions monitoring is required to demonstrate that a source is meeting the requirements in Federal or state rules. This page is about fabric filter control techniques used to reduce pollutant emissions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3792902','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3792902"><span>Remote Measurements of Heart and Respiration Rates for Telemedicine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Qian, Yi; Tsien, Joe Z.</p> <p>2013-01-01</p> <p>Non-contact and low-cost measurements of heart and respiration rates are highly desirable for telemedicine. Here, we describe a novel technique to extract the blood volume pulse and respiratory wave from single-channel images captured by a video camera under both day and night conditions. 
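As a toy illustration of the periodicity such video methods recover, a spectral peak in the physiological band of a single-channel brightness trace yields the pulse rate. This is far simpler than the delay-coordinate/ICA deconstruction the abstract describes; the band limits and sampling rate are illustrative assumptions:

```python
import numpy as np

def dominant_rate_hz(trace, fs):
    """Return the dominant frequency of a brightness trace within the heart-rate band."""
    trace = trace - np.mean(trace)                     # remove DC offset
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    # Restrict to a plausible human heart-rate band (0.7-3.0 Hz, i.e. 42-180 bpm).
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return freqs[band][np.argmax(spectrum[band])]
```

With a 30 fps camera and a 20 s trace, the frequency resolution is 0.05 Hz, i.e. 3 bpm, which is why longer observation windows give finer rate estimates.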
The principle of our technique is to uncover the temporal dynamics of heart beat and breathing rate through delay-coordinate transformation and independent component analysis-based deconstruction of the single-channel images. Our method further achieves robust elimination of false positives by applying ratio-variation probability-distribution filtering. Moreover, it enables a much-needed low-cost means for preventing sudden infant death syndrome in newborn infants and detecting stroke and heart attack in the elderly population in home environments. This noncontact method can also be applied to a variety of animal model organisms for biomedical research. PMID:24115996</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060048549','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060048549"><span>Relative Attitude Determination of Earth Orbiting Formations Using GPS Receivers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lightsey, E. Glenn</p> <p>2004-01-01</p> <p>Satellite formation missions require the precise determination of both the position and attitude of multiple vehicles to achieve the desired objectives. In order to support the mission requirements for these applications, it is necessary to develop techniques for representing and controlling the attitude of formations of vehicles. A generalized method for representing the attitude of a formation of vehicles has been developed. The representation may be applied to both absolute and relative formation attitude control problems. The technique is able to accommodate formations with an arbitrarily large number of vehicles. To demonstrate the formation attitude problem, the method is applied to the attitude determination of a simple leader-follower along-track orbit formation. 
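For the leader-follower example above, a relative attitude can be sketched with unit quaternions (scalar-last convention, an assumption of this sketch): q_rel = conj(q_L) * q_F, so that composing the leader attitude with q_rel recovers the follower attitude.

```python
import numpy as np

def q_mul(p, q):
    """Hamilton product of unit quaternions in scalar-last form [x, y, z, w]."""
    pv, pw = p[:3], p[3]
    qv, qw = q[:3], q[3]
    return np.concatenate([pw * qv + qw * pv + np.cross(pv, qv),
                           [pw * qw - pv @ qv]])

def q_conj(q):
    # Conjugate (inverse for unit quaternions): negate the vector part.
    return np.concatenate([-q[:3], q[3:]])

def relative_attitude(q_leader, q_follower):
    return q_mul(q_conj(q_leader), q_follower)
```

For two rotations about the same axis the relative attitude is simply the difference of the rotation angles, which makes an easy check.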
A multiplicative extended Kalman filter is employed to estimate vehicle attitude. In a simulation study using GPS receivers as the attitude sensors, the relative attitude between vehicles in the formation is determined 3 times more accurately than the absolute attitude.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1011580','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1011580"><span>Statistical, Graphical, and Learning Methods for Sensing, Surveillance, and Navigation Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2016-06-28</p> <p>harsh propagation environments. Conventional filtering techniques fail to provide satisfactory performance in many important nonlinear or non...Gaussian scenarios. In addition, there is a lack of a unified methodology for the design and analysis of different filtering techniques. To address...these problems, we have proposed a new filtering methodology called belief condensation (BC) DISTRIBUTION A: Distribution approved for public release</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/6391474-fundamentals-digital-filtering-applications-geophysical-prospecting-oil','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/6391474-fundamentals-digital-filtering-applications-geophysical-prospecting-oil"><span>Fundamentals of digital filtering with applications in geophysical prospecting for oil</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Mesko, A.</p> <p></p> <p>This book is a comprehensive work bringing together the important mathematical foundations and computing techniques for numerical filtering methods. 
The first two parts of the book introduce the techniques, fundamental theory and applications, while the third part treats specific applications in geophysical prospecting. Discussion is limited to linear filters, but takes in related fields such as correlational and spectral analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA129876','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA129876"><span>Modern Display Technologies for Airborne Applications.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1983-04-01</p> <p>the case of LED head-down direct view displays, this requires that special attention be paid to the optical filtering , the electrical drive/address...effectively attenuates the LED specular reflectance component, the colour and neutral density filtering attentuate the diffuse component and the... 
filter techniques are planned for use with video, multi- colour and advanced versions of numeric, alphanumeric and graphic displays; this technique</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1159951','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1159951"><span>Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A</p> <p>2014-10-14</p> <p>A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. 
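The spreading described in the filter-bank patent abstract above, one data symbol encoded on several evenly spaced subcarriers with per-subcarrier gains, can be sketched in baseband as follows. Subcarrier spacing, gains and block length are illustrative assumptions, and the pulse shaping and narrow-pulse carrier recovery of the patent are omitted:

```python
import numpy as np

def spread(symbol, gains, n=256):
    # Encode one data symbol on len(gains) evenly spaced subcarriers,
    # each weighted by its own gain (so each can sit below the noise floor).
    k = np.arange(1, len(gains) + 1)
    t = np.arange(n)
    carriers = np.exp(2j * np.pi * np.outer(k, t) / n)
    return symbol * (gains[:, None] * carriers).sum(axis=0)

def despread(signal, gains, n=256):
    # Correlate against the same gain-weighted subcarriers (a matched-filter bank);
    # subcarrier orthogonality over the block makes the cross terms vanish.
    k = np.arange(1, len(gains) + 1)
    t = np.arange(n)
    carriers = np.exp(-2j * np.pi * np.outer(k, t) / n)
    return (gains[:, None] * carriers * signal).sum() / (n * np.sum(gains ** 2))
```

Because the subcarriers are orthogonal over the block, despreading a spread symbol returns it exactly in the noiseless case.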
A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1131886','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1131886"><span>Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A</p> <p>2014-05-20</p> <p>A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. 
A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.epa.gov/air-emissions-monitoring-knowledge-base/monitoring-control-technique-electrified-filter-bed','PESTICIDES'); return false;" href="https://www.epa.gov/air-emissions-monitoring-knowledge-base/monitoring-control-technique-electrified-filter-bed"><span>Monitoring by Control Technique - Electrified Filter Bed</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>Stationary source emissions monitoring is required to demonstrate that a source is meeting the requirements in Federal or state rules. This page is about electrified filter bed control techniques used to reduce pollutant emissions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20165092','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20165092"><span>Real-time optical signal processors employing optical feedback: amplitude and phase control.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gallagher, N C</p> <p>1976-04-01</p> <p>The development of real-time coherent optical signal processors has increased the appeal of optical computing techniques in signal processing applications. A major limitation of these real-time systems is the fact that the optical processing material is generally of a phase-only type. The result is that the spatial filters synthesized with these systems must be either phase-only filters or amplitude-only filters. 
The main concern of this paper is the application of optical feedback techniques to obtain simultaneous and independent amplitude and phase control of the light passing through the system. It is shown that optical feedback techniques may be employed with phase-only spatial filters to obtain this amplitude and phase control. The feedback system with phase-only filters is compared with other feedback systems that employ combinations of phase-only and amplitude-only filters; it is found that the phase-only system is substantially more flexible than the other two systems investigated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvD..97d4039G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvD..97d4039G"><span>Deep neural networks to enable real-time multimessenger astrophysics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>George, Daniel; Huerta, E. A.</p> <p>2018-02-01</p> <p>Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, which are designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. 
Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering using whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques, achieves similar performance compared to matched filtering, while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggests that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. 
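The matched-filter baseline that Deep Filtering is compared against can be illustrated in a few lines. Everything below is a toy (the waveform, noise level, and threshold are invented and unrelated to LIGO pipelines):

```python
import numpy as np

np.random.seed(1)

# Toy matched-filter detection: a known waveform buried in white Gaussian
# noise is recovered by correlating the data against a unit-norm template.
n = 1024
t = np.linspace(0.0, 1.0, n)
template = np.sin(2.0 * np.pi * (20.0 + 30.0 * t) * t)   # toy chirp-like waveform
template /= np.linalg.norm(template)                     # unit-norm template

sigma = 0.1                                  # noise standard deviation (assumed)
data = template + sigma * np.random.randn(n)  # signal + noise

# For white noise, the matched-filter statistic is the inner product
# scaled by the noise level; here it concentrates near 1/sigma = 10.
snr = float(np.dot(data, template)) / sigma
print(snr > 5.0)
```

The matched filter is optimal among linear filters for Gaussian noise, which is exactly why the abstract uses it as the benchmark for the neural-network approach.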
In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22634785-su-method-improve-spatial-resolution-prompt-gamma-based-compton-imaging-proton-range-verification','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22634785-su-method-improve-spatial-resolution-prompt-gamma-based-compton-imaging-proton-range-verification"><span>SU-F-J-189: A Method to Improve the Spatial Resolution of Prompt Gamma Based Compton Imaging for Proton Range Verification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Draeger, E; Chen, H; Polf, J</p> <p></p> <p>Purpose: To test two new techniques, the distance-of-closest approach (DCA) and Compton line (CL) filters, developed as a means of improving the spatial resolution of Compton camera (CC) imaging. Methods: Gammas emitted from {sup 22}Na, {sup 137}Cs, and {sup 60}Co point sources were measured with a prototype 3-stage CC. The energy deposited and position of each interaction in each stage were recorded and used to calculate a “cone-of-origin” for each gamma that scattered twice in the CC. A DCA filter was developed which finds the shortest distance from the gamma’s cone-of-origin surface to the location of the gamma source. The DCA filter was applied to the data to determine the initial energy of the gamma and to remove “bad” interactions that only contribute noise to the image. Additionally, a CL filter, which removes gamma events that do not follow the theoretical predictions of the Compton scatter equation, was used to further remove “bad” interactions from the measured data. 
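The DCA idea lends itself to a compact geometric sketch. The helper below is a hypothetical illustration of the distance from a point to a cone surface, not the prototype CC code; the apex, axis, half-angles, tolerance, and units are all assumed:

```python
import numpy as np

# Hypothetical geometry helper: shortest distance from a point to the
# surface of an infinite cone (apex, unit axis, half-angle). The DCA
# filter keeps events whose cone-of-origin passes near the known source.
def dca_to_cone(apex, axis, half_angle, point):
    v = point - apex
    r = float(np.linalg.norm(v))
    if r == 0.0:
        return 0.0
    alpha = np.arccos(np.clip(np.dot(v, axis) / r, -1.0, 1.0))
    return r * float(np.sin(abs(alpha - half_angle)))

source = np.array([0.0, 0.0, 10.0])   # known source position (assumed units)
apex = np.array([0.0, 0.0, 0.0])
axis = np.array([0.0, 0.0, 1.0])      # cone axis pointing at the source

# An event whose cone surface grazes the source is kept; a wide cone whose
# surface misses the source by several units is rejected.
events = [(apex, axis, 0.02), (apex, axis, 0.5)]
tolerance = 0.5                        # DCA cut (assumed units)
kept = [e for e in events if dca_to_cone(*e, source) < tolerance]
print(len(kept))
```

This captures the filtering principle: events are discarded when the reconstructed cone surface cannot plausibly have originated at the known source position.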
Then images were reconstructed with raw, unfiltered data, DCA filtered data, and DCA+CL filtered data and the achievable image resolution of each dataset was compared. Results: Spatial resolutions of ∼2 mm, and better than 2 mm, were achievable with the DCA and DCA+CL filtered data, respectively, compared to > 5 mm for the raw, unfiltered data. Conclusion: In many special cases in medical imaging where information about the source position may be known, such as proton radiotherapy range verification, the application of the DCA and CL filters can result in considerable improvements in the achievable spatial resolutions of Compton imaging.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013apra.prop..117S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013apra.prop..117S"><span>Quasi-Optical Filter Development and Characterization for Far-IR Astronomical Applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stewart, Kenneth</p> <p></p> <p>Mid-infrared through microwave filters, beamsplitters, and polarizers are a crucial supporting technology for NASA’s space astronomy, astrophysics, and earth science programs. Building upon our successful production of mid-infrared, far-infrared, millimeter, and microwave bandpass and lowpass filters, we propose to investigate aspects of their optical performance that are still not well understood and have yet to be addressed by other researchers. Specifically, we wish to understand and mitigate unexplained high-frequency leaks found to degrade or invalidate spectroscopic data from flight instruments such as Herschel/PACS, SHARC II, GISMO, and ACT, but not predicted by numerical simulations. 
A complete understanding will improve accuracy and sensitivity, and will enable the mass and volume of cryogenic baffling to be appropriately matched to the physically achievable quasioptical filter response, thereby reducing the cost of future far-infrared missions. The development and experimental validation of this modeling capability will enable optimization of system performance as well as reduce risks to the schedule and end science products for all future space and suborbital missions that use quasioptical filters. The outcome of this work will be critical in achieving the exacting background-limited bolometric detector performance specifications of future far-infrared and submillimeter space instruments. This program will allow us to apply our unique in-house numerical simulation software and develop enhanced layer alignment, filter fabrication, and testing techniques for the first time to address these issues: (1) enhance filter performance, (2) simplify the optical architecture of future instruments by improving our understanding of high-frequency leaks, and (3) produce filters which minimize or eliminate these important effects. With our state-of-the-art modeling, fabrication, and testing facilities and expertise, established in previous projects, we are uniquely positioned to tackle this development.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70043513','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70043513"><span>Time-lapse analysis of methane quantity in Mary Lee group of coal seams using filter-based multiple-point geostatistical simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Karacan, C. 
Özgen; Olea, Ricardo A.</p> <p>2013-01-01</p> <p>The systematic approach presented in this paper marks the first time in the literature that history matching, training images (TIs) of gas-in-place (GIP), and filter simulations have been used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition. Furthermore, performing filter simulations using point-wise data and TIs could be used to predict methane quantity in coal seams subjected to degasification. During the course of the study, it was shown that the material balance of gas produced by wellbores and the GIP reductions in coal seams predicted using filter simulations compared very well, showing the success of filter simulations for continuous variables in this case study. Quantitative results from filter simulations of GIP within the studied area showed that GIP was reduced from an initial ∼73 Bcf (median) to ∼46 Bcf (2011), representing a 37% decrease and varying spatially through degasification. It is forecast that there will be an additional ∼2 Bcf reduction in methane quantity between 2011 and 2015. 
This study and its results showed that the applied methodology and techniques can be used to map GIP and its change within coal seams after degasification, which can further be used for ventilation design for methane control in coal mines.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050216398','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050216398"><span>Application of a Constant Gain Extended Kalman Filter for In-Flight Estimation of Aircraft Engine Performance Parameters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.</p> <p>2005-01-01</p> <p>An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. 
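A constant-gain Kalman update of the kind described can be sketched as follows; the state transition, measurement matrix, and fixed gain are invented for illustration and are not the engine model from the report:

```python
import numpy as np

# Toy constant-gain Kalman filter: a fixed gain K, precomputed offline,
# corrects an on-board model from the measured output so the unmeasured
# second state can be estimated. A, H, and K are invented, and the "plant"
# here shares the model dynamics exactly.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])     # state transition (assumed)
H = np.array([[1.0, 0.0]])      # only the first state is measured
K = np.array([[0.5],
              [0.2]])           # constant Kalman gain (assumed)

x_true = np.array([1.0, 2.0])   # second entry is the "non-measurable" one
x_hat = np.zeros(2)             # on-board estimate starts uninformed

for _ in range(200):
    x_true = A @ x_true
    z = H @ x_true                         # sensor measurement
    x_pred = A @ x_hat                     # model prediction
    x_hat = x_pred + K @ (z - H @ x_pred)  # constant-gain correction

# The estimation error contracts because (I - K H) A is stable here.
print(bool(np.allclose(x_hat, x_true, atol=1e-3)))
```

Using a single precomputed gain avoids propagating and inverting covariance matrices in flight, which is the computational appeal of the CGEKF design.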
The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007SPIE.6514E..2ZH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007SPIE.6514E..2ZH"><span>Detection of retinal nerve fiber layer defects in retinal fundus images using Gabor filtering</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hayashi, Yoshinori; Nakagawa, Toshiaki; Hatanaka, Yuji; Aoyama, Akira; Kakogawa, Masakatsu; Hara, Takeshi; Fujita, Hiroshi; Yamamoto, Tetsuya</p> <p>2007-03-01</p> <p>Retinal nerve fiber layer defect (NFLD) is one of the most important findings for the diagnosis of glaucoma reported by ophthalmologists. However, such changes could be overlooked, especially in mass screenings, because ophthalmologists have limited time to search for a number of different changes for the diagnosis of various diseases such as diabetes, hypertension and glaucoma. Therefore, the use of a computer-aided detection (CAD) system can improve the results of diagnosis. In this work, a technique for the detection of NFLDs in retinal fundus images is proposed. In the preprocessing step, blood vessels are "erased" from the original retinal fundus image by using morphological filtering. The preprocessed image is then transformed into a rectangular array. NFLD regions are observed as vertical dark bands in the transformed image. Gabor filtering is then applied to enhance the vertical dark bands. False positives (FPs) are reduced by a rule-based method which uses the information of the location and the width of each candidate region. The detected regions are back-transformed into the original configuration. 
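The vertical-band enhancement step can be sketched with a one-dimensional Gabor profile applied along image rows. The kernel parameters, image size, and band depth below are assumptions, not the paper's settings:

```python
import numpy as np

np.random.seed(0)

# Toy Gabor enhancement of a vertical dark band (the NFLD cue in the
# transformed image). Kernel size, wavelength, sigma, image size, and band
# depth are all assumed values.
def gabor_1d(size=15, wavelength=8.0, sigma=3.0):
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * x / wavelength)

rows, cols = 32, 64
image = np.ones((rows, cols))
image[:, 30:34] -= 0.5                     # synthetic vertical dark band
image += 0.02 * np.random.randn(rows, cols)

kernel = gabor_1d()
kernel -= kernel.mean()                    # zero-DC: flat regions give ~0

# Filter along rows; a dark band produces a strong negative response, so
# the negated column-averaged response is the "band evidence" profile.
response = np.array([np.convolve(row, kernel, mode="same") for row in image])
evidence = -response.mean(axis=0)
print(28 <= int(np.argmax(evidence)) <= 35)
```

Averaging the response down the columns exploits the fact that the band is vertical after the transform, which is what makes the Gabor orientation choice effective.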
In this preliminary study, 71% of NFLD regions are detected with an average of 3.2 FPs per image. In conclusion, we have developed a technique for the detection of NFLDs in retinal fundus images. Promising results have been obtained in this initial study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.347..207M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.347..207M"><span>A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Meldi, M.; Poux, A.</p> <p>2017-10-01</p> <p>A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observation in the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman Filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which can be quantified as an increase of 10%-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. 
The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these Data Assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29693787','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29693787"><span>Integrated strategy based on high-resolution 
mass spectrometry coupled with multiple data mining techniques for the metabolic profiling of Xanthoceras sorbifolia Bunge husks in rat plasma, urine, and feces.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rong, Weiwei; Guo, Sirui; Ding, Kewen; Yuan, Ziyue; Li, Qing; Bi, Kaishun</p> <p>2018-04-25</p> <p>An integrated strategy based on high-resolution mass spectrometry coupled with multiple data mining techniques was developed to screen the metabolites in rat biological fluids after the oral administration of Xanthoceras sorbifolia Bunge husks. Mass defect filtering, product ion filtering, and neutral loss filtering were applied to detect metabolites from the complex matrix. As a result, 55 metabolites were tentatively identified, among which 45 barrigenol-type triterpenoid metabolites were detected in the feces, and six flavonoid and four coumarin metabolites were detected in the urine. Moreover, eight prototype constituents in plasma, 36 in urine, and 23 in feces were also discovered. Due to the poor bioavailability of barrigenol-type triterpenoids, most of them were metabolized by intestinal flora. Phase I metabolic reactions such as deglycosylation, oxidation, demethylation, dehydrogenation, and internal hydrolysis were presumed to be their principal metabolic pathways. Coumarins were found in all the biosamples, whereas flavonoids were mainly in the urine. Unlike the saponins, they were mainly metabolized through phase II reactions such as glucuronidation and sulfonation, which allowed them to be eliminated more easily in urine. This work characterized the metabolic profile of X. sorbifolia husks for the first time and will be valuable for its further development. © 2018 WILEY-VCH Verlag GmbH & Co. 
KGaA, Weinheim.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22250741-metallic-artifact-mitigation-organ-constrained-tissue-assignment-monte-carlo-calculations-permanent-implant-lung-brachytherapy','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22250741-metallic-artifact-mitigation-organ-constrained-tissue-assignment-monte-carlo-calculations-permanent-implant-lung-brachytherapy"><span>Metallic artifact mitigation and organ-constrained tissue assignment for Monte Carlo calculations of permanent implant lung brachytherapy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Sutherland, J. G. H.; Miksys, N.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca</p> <p>2014-01-15</p> <p>Purpose: To investigate methods of generating accurate patient-specific computational phantoms for the Monte Carlo calculation of lung brachytherapy patient dose distributions. Methods: Four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour constrained and unconstrained global schemes are considered. 
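Of the four mitigation methods, the 3D median filter is the simplest to sketch. The window size and replacement threshold below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

# Toy 3D median-filter artifact mitigation: voxels that deviate strongly
# from their 3x3x3 neighbourhood median (metal-streak outliers) are
# replaced by that median; ordinary voxels are left untouched.
def median_replace(volume, threshold=500.0):
    out = volume.copy()
    padded = np.pad(volume, 1, mode="edge")
    for z in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            for x in range(volume.shape[2]):
                med = np.median(padded[z:z + 3, y:y + 3, x:x + 3])
                if abs(volume[z, y, x] - med) > threshold:
                    out[z, y, x] = med
    return out

ct = np.full((5, 5, 5), 40.0)   # uniform soft-tissue-like volume (HU)
ct[2, 2, 2] = 3000.0            # bright artifact voxel near a seed

cleaned = median_replace(ct)
print(cleaned[2, 2, 2])          # 40.0: outlier replaced by the local median
```

Conditioning the replacement on the deviation from the median is what preserves genuine tissue detail while removing isolated streak values, the trade-off the abstract discusses.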
Voxel mass densities are assigned based on voxel CT number or using the nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for {sup 125}I, {sup 103}Pd, and {sup 131}Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Results: Metallic artifact mitigation techniques vary in ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also results in increased doses in regions of interest due to a reduction in the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for {sup 103}Pd seeds and smallest but still considerable differences for {sup 131}Cs seeds. Conclusions: Despite producing differences in CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. 
Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic artifact corrected images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18323124','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18323124"><span>Sustainable colloidal-silver-impregnated ceramic filter for point-of-use water treatment.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Oyanedel-Craver, Vinka A; Smith, James A</p> <p>2008-02-01</p> <p>Cylindrical colloidal-silver-impregnated ceramic filters for household (point-of-use) water treatment were manufactured and tested for performance in the laboratory with respect to flow rate and bacteria transport. Filters were manufactured by combining clay-rich soil with water, grog (previously fired clay), and flour, pressing them into cylinders, and firing them at 900 degrees C for 8 h. The pore-size distribution of the resulting ceramic filters was quantified by mercury porosimetry. Colloidal silver was applied to filters in different quantities and ways (dipping and painting). Filters were also tested without any colloidal-silver application. Hydraulic conductivity of the filters was quantified using changing-head permeability tests. [3H]H2O water was used as a conservative tracer to quantify advection velocities and the coefficient of hydrodynamic dispersion. Escherichia coli (E. coli) was used to quantify bacterial transport through the filters. Hydraulic conductivity and pore-size distribution varied with filter composition; hydraulic conductivities were on the order of 10(-5) cm/s and more than 50% of the pores for each filter had diameters ranging from 0.02 to 15 microm. 
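The changing-head (falling-head) permeability test mentioned above yields hydraulic conductivity through a standard formula; the sketch below uses invented dimensions chosen only to land near the reported 10(-5) cm/s order of magnitude:

```python
import math

# Falling-head permeability: K = (a * L) / (A * t) * ln(h0 / h1).
# Every number below is invented for illustration; only the formula is standard.
a = 0.5        # standpipe cross-sectional area, cm^2 (assumed)
A = 50.0       # filter sample cross-sectional area, cm^2 (assumed)
L = 2.0        # flow-path length through the filter wall, cm (assumed)
t = 600.0      # elapsed time, s (assumed)
h0, h1 = 100.0, 70.0   # hydraulic head at start and end, cm (assumed)

K = (a * L) / (A * t) * math.log(h0 / h1)
print(f"{K:.2e} cm/s")
```

The head ratio enters logarithmically because the driving head decays exponentially during the test, which is what distinguishes the changing-head method from a constant-head measurement.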
The filters removed between 97.8% and 100% of the applied bacteria; colloidal-silver treatments improved filter performance, presumably by deactivation of bacteria. The quantity of colloidal silver applied per filter was more important to bacteria removal than the method of application. Silver concentrations in effluent filter water were initially greater than 0.1 mg/L, but dropped below this value after 200 min of continuous operation. These results indicate that colloidal-silver-impregnated ceramic filters, which can be made using primarily local materials and labor, show promise as an effective and sustainable point-of-use water treatment technology for the world's poorest communities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=182871','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=182871"><span>Development and application of new positively charged filters for recovery of bacteriophages from water.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Borrego, J J; Cornax, R; Preston, D R; Farrah, S R; McElhaney, B; Bitton, G</p> <p>1991-01-01</p> <p>Electronegative and electropositive filters were compared for the recovery of indigenous bacteriophages from water samples, using the VIRADEL technique. Fiber glass and diatomaceous earth filters displayed low adsorption and recovery, but an important increase of the adsorption percentage was observed when the filters were treated with cationic polymers (about 99% adsorption). A new methodology of virus elution was developed in this study, consisting of the slow passage of the eluent through the filter, thus increasing the contact time between eluent and virus adsorbed on the filters. 
The use of this technique allows a maximum recovery of 71.2% compared with 46.7% phage recovery obtained by the standard elution procedure. High percentages (over 83%) of phage adsorption were obtained with different filters from 1-liter aliquots of the samples, except for Virosorb 1-MDS filters (between 1.6 and 32% phage adsorption). Phage recovery by using the slow passing of the eluent depended on the filter type, with recovery ranging between 1.6% for Virosorb 1-MDS filters treated with polyethyleneimine and 103.2% for diatomaceous earth filters treated with 0.1% Nalco. PMID:2059044</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1180754','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1180754"><span>A tool for filtering information in complex systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.</p> <p>2005-01-01</p> <p>We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. 
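The correlation-based filtering can be illustrated with its simplest member, the minimum spanning tree that the planar filtered graphs preserve. The toy returns below are synthetic, and Kruskal's algorithm stands in for the paper's more general genus-controlled construction:

```python
import numpy as np

np.random.seed(2)

# Toy correlation filtering: map correlations to distances
# d_ij = sqrt(2 * (1 - rho_ij)) and keep only the minimum spanning tree,
# the hierarchical backbone that the planar filtered graphs extend.
n_assets, n_days = 6, 500
base = np.random.randn(n_days)                       # common market factor
returns = np.array([base + 0.8 * np.random.randn(n_days)
                    for _ in range(n_assets)])       # synthetic returns

rho = np.corrcoef(returns)
dist = np.sqrt(2.0 * (1.0 - rho))

# Kruskal's algorithm with a tiny union-find over all n*(n-1)/2 edges.
parent = list(range(n_assets))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

edges = sorted((dist[i, j], i, j)
               for i in range(n_assets) for j in range(i + 1, n_assets))
mst = []
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        mst.append((i, j))

print(len(mst))   # a spanning tree on n nodes has n - 1 = 5 edges
```

Relaxing the planarity constraint to higher genus, as the paper does, admits more edges (triangular loops, four-cliques) while still filtering the full correlation matrix.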
PMID:16027373</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16027373','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16027373"><span>A tool for filtering information in complex systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tumminello, M; Aste, T; Di Matteo, T; Mantegna, R N</p> <p>2005-07-26</p> <p>We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22256174','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22256174"><span>Frequency tracking and variable bandwidth for line noise filtering without a reference.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kelly, John W; Collinger, Jennifer L; Degenhart, Alan D; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei</p> <p>2011-01-01</p> <p>This paper presents a method for filtering line noise using an adaptive noise canceling (ANC) technique. 
This method effectively eliminates the sinusoidal contamination while achieving a narrower bandwidth than typical notch filters and without relying on the availability of a noise reference signal as ANC methods normally do. A sinusoidal reference is instead digitally generated and the filter efficiently tracks the power line frequency, which drifts around a known value. The filter's learning rate is also automatically adjusted to achieve faster and more accurate convergence and to control the filter's bandwidth. In this paper the focus of the discussion and the data will be electrocorticographic (ECoG) neural signals, but the presented technique is applicable to other recordings.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20160014025','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20160014025"><span>Comparison of Factorization-Based Filtering for Landing Navigation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>McCabe, James S.; Brown, Aaron J.; DeMars, Kyle J.; Carson, John M., III</p> <p>2017-01-01</p> <p>This paper develops and analyzes methods for fusing inertial navigation data with external data, such as data obtained from an altimeter and a star camera. The particular filtering techniques are based upon factorized forms of the Kalman filter, specifically the UDU and Cholesky factorizations. The factorized Kalman filters are utilized to ensure numerical stability of the navigation solution. Simulations are carried out to compare the performance of the different approaches along a lunar descent trajectory using inertial and external data sources. 
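A generic Cholesky-factor ("square-root") measurement update of the kind compared in the abstract above can be sketched with a QR pre-array/post-array step; this is a textbook form, not the authors' specific UDU or Cholesky implementation:

```python
import numpy as np

def sqrt_kf_update(x, S, H, sqrtR, z):
    """Cholesky-factor Kalman measurement update via a QR pre-array/post-array
    step. The covariance P = S @ S.T is never formed explicitly, which is
    what gives factorized filters their numerical robustness."""
    n, m = S.shape[0], H.shape[0]
    pre = np.block([[sqrtR,            H @ S],
                    [np.zeros((n, m)), S]])
    _, Rt = np.linalg.qr(pre.T)              # triangularize the pre-array
    post = Rt.T                              # lower-triangular post-array
    Sy = post[:m, :m]                        # factor of innovation covariance
    K = post[m:, :m] @ np.linalg.inv(Sy)     # Kalman gain
    S_new = post[m:, m:]                     # updated factor of P
    return x + K @ (z - H @ x), S_new
```

Because only the factor S is propagated, the reconstructed covariance S @ S.T stays symmetric positive semidefinite by construction, whereas the conventional update can lose these properties in floating point.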
It is found that the factorized forms improve upon conventional filtering techniques in terms of ensuring numerical stability for the investigated landing navigation scenario.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4165202','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4165202"><span>SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>Email is one of the most common communication methods between people on the Internet. However, the increase in email misuse/abuse has resulted in a growing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter caught about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. 
The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930022652','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930022652"><span>Design of coupled mace filters for optical pattern recognition using practical spatial light modulators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rajan, P. K.; Khan, Ajmal</p> <p>1993-01-01</p> <p>Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of an SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. 
This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25254240','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25254240"><span>SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Youn, Seongwook</p> <p>2014-01-01</p> <p>Email is one of the most common communication methods between people on the Internet. However, the increase in email misuse/abuse has resulted in a growing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter caught about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. 
The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26690154','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26690154"><span>An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu</p> <p>2015-12-04</p> <p>With the development of the vehicle industry, controlling stability has become more and more important. Techniques of evaluating vehicle stability are in high demand. As a common approach, GPS and INS sensors are used to measure vehicle stability parameters by fusing data from the two sensor systems. Although a Kalman filter requires prior knowledge of the model parameters, it is commonly used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and a real experiment are carried out to verify the advantages of this approach. 
The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18972657','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18972657"><span>Automatic detection of magnetic flux emergings in the solar atmosphere from full-disk magnetogram sequences.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fu, Gang; Shih, Frank Y; Wang, Haimin</p> <p>2008-11-01</p> <p>In this paper, we present a novel method to detect Emerging Flux Regions (EFRs) in the solar atmosphere from consecutive full-disk Michelson Doppler Imager (MDI) magnetogram sequences. To our knowledge, this is the first developed technique for automatically detecting EFRs. The method includes several steps. First, the projection distortion on the MDI magnetograms is corrected. Second, the bipolar regions are extracted by applying multiscale circular harmonic filters. Third, the extracted bipolar regions are traced in consecutive MDI frames by Kalman filter as candidate EFRs. Fourth, the properties, such as positive and negative magnetic fluxes and distance between two polarities, are measured in each frame. Finally, a feature vector is constructed for each bipolar region using the measured properties, and the Support Vector Machine (SVM) classifier is applied to distinguish EFRs from other regions. 
Experimental results show that the detection rate of EFRs is 96.4% and of non-EFRs is 98.0%, and the false alarm rate is 25.7%, based on all the available MDI magnetograms in 2001 and 2002.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JaJAP..57d6701A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JaJAP..57d6701A"><span>Development of a fountain detector for spectroscopy of secondary electrons in scanning electron microscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Agemura, Toshihide; Kimura, Takashi; Sekiguchi, Takashi</p> <p>2018-04-01</p> <p>The low-pass secondary electron (SE) detector, the so-called “fountain detector (FD)”, for scanning electron microscopy has high potential for application to the imaging of low-energy SEs. Low-energy SE imaging may be used for detecting the surface potential variations of a specimen. However, the detected SEs include a certain fraction of tertiary electrons (SE3s) because some of the high-energy backscattered electrons hit the grid to yield SE3s. We have overcome this difficulty by increasing the aperture ratio of the bias and ground grids and using the lock-in technique, in which the AC field with the DC offset was applied on the bias grid. The energy-filtered SE images of a 4H-SiC p-n junction show complex behavior according to the grid bias. These observations are clearly explained by the variations of Auger spectra across the p-n junction. 
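The lock-in detection step mentioned in this abstract can be imitated in software; the demodulation below is a generic digital lock-in with assumed sampling rate, reference frequency, and averaging time, not the instrument's hardware implementation:

```python
import numpy as np

def lock_in(signal, fs, f_ref, tau=0.1):
    """Software lock-in: multiply by quadrature references at f_ref and
    low-pass the mixed products with a moving average of ~tau seconds."""
    t = np.arange(len(signal)) / fs
    i_mix = signal * np.cos(2 * np.pi * f_ref * t)
    q_mix = signal * np.sin(2 * np.pi * f_ref * t)
    n = max(1, int(tau * fs))
    kernel = np.ones(n) / n
    I = np.convolve(i_mix, kernel, mode="same")
    Q = np.convolve(q_mix, kernel, mode="same")
    return 2.0 * np.hypot(I, Q)          # recovered amplitude at f_ref

fs = 10_000.0
t = np.arange(10_000) / fs
amp = lock_in(0.3 * np.sin(2 * np.pi * 100.0 * t), fs, f_ref=100.0)
```

Only the component synchronous with the reference survives the averaging, which is why modulating the grid bias isolates the signal of interest from unmodulated background such as SE3s.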
The filtered SE images taken with the FD can be applied to observing the surface potential variation of specimens.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013Metro..50..307U','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013Metro..50..307U"><span>Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ulrich, Thomas</p> <p>2013-08-01</p> <p>Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes, which include various types of kinematic behaviour, constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate to analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. 
This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour as well as the fusion among laser trackers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JOUC...17..118W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JOUC...17..118W"><span>Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei</p> <p>2018-02-01</p> <p>Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphical mapping. In this paper, an intelligent edge identification method is proposed and image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated by the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot manage the complicated uneven seafloor, and therefore a binarization method is proposed that is based on the difference between image pixel values; the appropriate threshold for image binarization is estimated according to the probability distribution of pixel value differences between two adjacent pixels in horizontal and vertical directions, respectively. 
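The threshold rule described above (estimating a binarization threshold from the distribution of differences between adjacent pixels) might be sketched as follows; the quantile criterion and the synthetic depth image are assumptions for illustration, not the authors' exact estimator:

```python
import numpy as np

def diff_threshold_binarize(img, q=0.9):
    """Mark edge pixels where the difference to the adjacent pixel
    (horizontal or vertical) exceeds a threshold estimated from the
    empirical distribution of all adjacent-pixel differences."""
    dh = np.abs(np.diff(img, axis=1))      # horizontal neighbour differences
    dv = np.abs(np.diff(img, axis=0))      # vertical neighbour differences
    t = np.quantile(np.concatenate([dh.ravel(), dv.ravel()]), q)
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, 1:] |= dh > t
    edges[1:, :] |= dv > t
    return edges

# A synthetic depth "step": the detected edge follows the discontinuity.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
edges = diff_threshold_binarize(img)
```

Because the threshold adapts to the observed difference distribution, flat but uneven seafloor produces few spurious edges while genuine breaks in depth are retained.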
Finally, an eight-neighborhood frame is adopted to thin the binary image, connect the intermittent edge, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids use of subjective judgment, and reduces time and labor costs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JMP....58f3517G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JMP....58f3517G"><span>Non-Markovian quantum feedback networks II: Controlled flows</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gough, John E.</p> <p>2017-06-01</p> <p>The concept of a controlled flow of a dynamical system, especially when the controlling process feeds information back about the system, is of central importance in control engineering. In this paper, we build on the ideas presented by Bouten and van Handel [Quantum Stochastics and Information: Statistics, Filtering and Control (World Scientific, 2008)] and develop a general theory of quantum feedback. We elucidate the relationship between the controlling processes, Z, and the measured processes, Y, and to this end we make a distinction between what we call the input picture and the output picture. We should note that the input-output relations for the noise fields have additional terms not present in the standard theory but that the relationship between the control processes and measured processes themselves is internally consistent—we do this for the two main cases of quadrature measurement and photon-counting measurement. The theory is general enough to include a modulating filter which post-processes the measurement readout Y before returning to the system. 
This opens up the prospect of applying very general engineering feedback control techniques to open quantum systems in a systematic manner, and we consider a number of specific modulating filter problems. Finally, we give a brief argument as to why most of the rules for making instantaneous feedback connections [J. Gough and M. R. James, Commun. Math. Phys. 287, 1109 (2009)] ought to apply for controlled dynamical networks as well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19720013397','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19720013397"><span>Contamination control through filtration of microorganisms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stabekis, P. D.; Lyle, R. G.</p> <p>1972-01-01</p> <p>A description is given of the various kinds of gas and liquid filters used in decontamination and sterilization procedures. Also discussed are filtration mechanisms, characteristics of filter materials, and the factors affecting filter performance. Summaries are included for filter testing and evaluation techniques and the possible application of the filters to spacecraft sterilization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900019196','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900019196"><span>Velocity filtering applied to optical flow calculations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barniv, Yair</p> <p>1990-01-01</p> <p>Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. 
Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow is expanded and experimental results are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFM.T33D2442K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFM.T33D2442K"><span>Magnetotelluric measurements across the southern Barberton greenstone belt, South Africa: data improving strategies and 2-D inversion results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kutter, S.; Chen, X.; Weckmann, U.</p> <p>2011-12-01</p> <p>Magnetotelluric (MT) measurements in areas with electromagnetic (EM) noise sources such as electric fences, power and railway lines pose severe challenges to the standard processing procedures. In order to significantly improve the data quality advanced filtering and processing techniques need to be applied. The presented 5-component MT data set from two field campaigns in 2009 and 2010 in the Barberton/Badplaas area, South Africa, was acquired within the framework of the German-South African geo-scientific research initiative Inkaba yeAfrica. Approximately 200 MT sites aligned along six profiles provide a good areal coverage of the southern part of the Barberton Greenstone Belt (BGB). Since it is one of the few remaining well-preserved geological formations from the Archean, it presents an ideal area to study the tectonic evolution and the role of plate tectonics on Early Earth. 
Comparing the electric properties, the surrounding high and low grade metamorphic rocks are characteristically resistive whereas mineralized shear zones are possible areas of higher electrical conductivity. Mapping their depth extension is a crucial step towards understanding the formation and the evolution of the BGB. Unfortunately, in the measurement area numerous noise sources were active, producing severe spikes and steps in the EM fields. These disturbances mainly affect long periods which are needed for resolving the deepest structures. The Remote Reference technique as well as two filtering techniques are applied to improve the data in different period ranges. Adjusting their parameters for each site is necessary to obtain the best possible results. The improved data set is used for two-dimensional inversion studies for the six profiles applying the RLM2DI algorithm by Rodi and Mackie (2001, implemented in WinGlink). In the models, areas with higher conductivity can be traced beneath known faults throughout the entire array along different profiles. 
Resistive zones seem to correlate well with plutonic intrusions.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJE...103.1776S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJE...103.1776S"><span>Miniaturized dielectric waveguide filters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sandhu, Muhammad Y.; Hunter, Ian C.</p> <p>2016-10-01</p> <p>Design techniques for a new class of integrated monolithic high-permittivity ceramic waveguide filters are presented. 
These filters enable a size reduction of 50% compared to air-filled transverse electromagnetic filters with the same unloaded Q-factor. Designs for Chebyshev and asymmetric generalised Chebyshev filters and a diplexer are presented, with experimental results for an 1800 MHz Chebyshev filter and a 1700 MHz generalised Chebyshev filter showing excellent agreement with theory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JHyd..560..127M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JHyd..560..127M"><span>Constraining the ensemble Kalman filter for improved streamflow forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James</p> <p>2018-05-01</p> <p>Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three-year period (2008-2010) was selected from the available data record (1976-2010). 
This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, which has a profound effect on filter performance. 
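A schematic of a mass-constrained stochastic EnKF analysis step is sketched below; clipping to bounds stands in for the paper's mass constraints, the flux constraints are omitted, and all dimensions and values are illustrative:

```python
import numpy as np

def enkf_update(X, H, R, y, rng, lower=0.0, upper=None):
    """Stochastic EnKF analysis step followed by a simple mass constraint
    (clipping states to physical bounds). X is (n_state, n_members)."""
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturbed observations, one realization per ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    Xa = X + K @ (Y - H @ X)
    return np.clip(Xa, lower, upper)               # enforce physical limits

rng = np.random.default_rng(2)
X = np.abs(rng.normal(loc=2.0, size=(2, 50)))      # positive prior ensemble
Xa = enkf_update(X, np.array([[1.0, 0.0]]), np.array([[0.1]]),
                 np.array([1.0]), rng)
```

A flux constraint would additionally bound the per-step change Xa - X, which is the tighter condition the paper finds most beneficial.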
We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040015228&hterms=heating+global&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dheating%2Bglobal','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040015228&hterms=heating+global&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dheating%2Bglobal"><span>A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming</p> <p>2003-01-01</p> <p>This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. 
The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22974243','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22974243"><span>Comparison of filtering methods for extracellular gastric slow wave recordings.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Paskaranandavadivel, Niranchan; O'Grady, Gregory; Du, Peng; Cheng, Leo K</p> <p>2013-01-01</p> <p>Extracellular recordings are used to define gastric slow wave propagation. Signal filtering is a key step in the analysis and interpretation of extracellular slow wave data; however, there is controversy and uncertainty regarding the appropriate filtering settings. This study investigated the effect of various standard filters on the morphology and measurement of extracellular gastric slow waves. Experimental extracellular gastric slow waves were recorded from the serosal surface of the stomach of pigs and humans. Four digital filters (finite impulse response, 0.05-1 Hz; Savitzky-Golay, 0-1.98 Hz; Bessel, 2-100 Hz; and Butterworth, 5-100 Hz) were applied to extracellular gastric slow wave signals to compare the changes temporally (morphology of the signal) and spectrally (signals in the frequency domain). 
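The band-limiting recommendation of this abstract (retain the dominant slow-wave frequency and its harmonics up to roughly 2 Hz) can be illustrated with a zero-phase Butterworth band-pass; the sampling rate and test signal are assumptions for the sketch:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30.0                                   # Hz; assumed sampling rate
# Pass the slow-wave fundamental (~3-5 cpm) plus its main harmonics (<~2 Hz).
sos = butter(4, [0.01, 2.0], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 120, 1 / fs)
slow_wave = np.sin(2 * np.pi * (3 / 60) * t)                 # 3 cpm fundamental
contaminated = slow_wave + 0.5 * np.sin(2 * np.pi * 10 * t)  # 10 Hz interference
filtered = sosfiltfilt(sos, contaminated)                    # zero-phase filtering
```

Cutting into the 0.05-2 Hz band, as some of the compared settings do, removes the fundamental or its harmonics and distorts the slow-wave morphology.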
The extracellular slow wave activity is represented in the frequency domain by a dominant frequency and its associated harmonics in diminishing power. Optimal filters apply cutoff frequencies consistent with the dominant slow wave frequency (3-5 cpm) and main harmonics (up to ≈ 2 Hz). Applying filters with cutoff frequencies above or below the dominant and harmonic frequencies was found to distort or eliminate slow wave signal content. Investigators must be cognizant of these optimal filtering practices when detecting, analyzing, and interpreting extracellular slow wave recordings. The use of frequency domain analysis is important for identifying the dominant and harmonics of the signal of interest. Capturing the dominant frequency and major harmonics of slow wave is crucial for accurate representation of slow wave activity in the time domain. Standardized filter settings should be determined. © 2012 Blackwell Publishing Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1992moca.conf...30P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1992moca.conf...30P"><span>Proceedings of the Conference on Moments and Signal</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Purdue, P.; Solomon, H.</p> <p>1992-09-01</p> <p>The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. 
It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is in the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is in the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than the Bussgang algorithms. However, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited to channels that have nonlinear distortions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28361357','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28361357"><span>Increasing accuracy of pulse transit time measurements by automated elimination of distorted photoplethysmography waves.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>van Velzen, Marit H N; Loeve, Arjo J; Niehof, Sjoerd P; Mik, Egbert G</p> <p>2017-11-01</p> <p>Photoplethysmography (PPG) is a widely available non-invasive optical technique to visualize pressure pulse waves (PWs).
Pulse transit time (PTT) is a physiological parameter that is often derived from calculations on ECG and PPG signals and is based on tightly defined characteristics of the PW shape. PPG signals are sensitive to artefacts. Coughing or movement of the subject can distort PW shapes so much that the PWs become unsuitable for further analysis. The aim of this study was to develop an algorithm that automatically and objectively eliminates unsuitable PWs. In order to develop a proper algorithm for eliminating unsuitable PWs, a literature study was conducted. Next, a '7Step PW-Filter' algorithm was developed that applies seven criteria to determine whether a PW matches the characteristics required to allow PTT calculation. To validate that the '7Step PW-Filter' eliminates all, and only, unsuitable PWs, its elimination results were compared to the outcome of manual elimination of unsuitable PWs. The '7Step PW-Filter' had a sensitivity of 96.3% and a specificity of 99.3%. The overall accuracy of the '7Step PW-Filter' for detection of unsuitable PWs was 99.3%.
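The abstract does not list the seven criteria, so the sketch below applies three generic, purely hypothetical plausibility checks (beat duration, systolic peak position, amplitude relative to the median) only to show the structure of such a criteria-based elimination filter:

```python
import numpy as np

def eliminate_distorted_pws(pws, fs):
    """Keep only pulse waves passing simple shape-plausibility checks.
    The actual '7Step PW-Filter' applies seven criteria; the three used
    here are illustrative stand-ins, not the published ones."""
    if not pws:
        return []
    amplitudes = np.array([np.ptp(p) for p in pws])
    median_amp = np.median(amplitudes)
    kept = []
    for p, amp in zip(pws, amplitudes):
        duration = len(p) / fs
        if not 0.3 <= duration <= 1.5:           # implausible beat length (s)
            continue
        if np.argmax(p) > 0.5 * len(p):          # systolic peak too late
            continue
        if not 0.5 * median_amp <= amp <= 2.0 * median_amp:  # amplitude outlier
            continue
        kept.append(p)
    return kept

fs = 100.0
t = np.linspace(0, 0.8, 80)
good = np.sin(np.pi * t / 0.8) ** 2 * np.exp(-2 * t)   # early-peaking pulse
flat = np.zeros(80)                                     # motion-artefact segment
kept = eliminate_distorted_pws([good, good, flat, good], fs)
```

Applied to the list above, the flat artefact segment fails the amplitude check and is dropped while the three plausible pulses survive, mirroring the automated-versus-manual comparison described in the abstract.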
Compared to manual elimination, using the '7Step PW-Filter' reduces PW elimination times from hours to minutes and helps to increase the validity, reliability and reproducibility of PTT data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27974668','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27974668"><span>Importance of Adjunct Delivery Techniques to Optimize Deployment Success of Distal Protection Filters During Vein Graft Intervention.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kaliyadan, Antony G; Chawla, Harnish; Fischman, David L; Ruggiero, Nicholas; Gannon, Michael; Walinsky, Paul; Savage, Michael P</p> <p>2017-02-01</p> <p>This study assessed the impact of adjunct delivery techniques on the deployment success of distal protection filters in saphenous vein grafts (SVGs). Despite their proven clinical benefit, distal protection devices are underutilized in SVG interventions. Deployment of distal protection filters can be technically challenging in the presence of complex anatomy. Techniques that facilitate the delivery success of these devices could potentially improve clinical outcomes and promote greater use of distal protection. Outcomes of 105 consecutive SVG interventions with attempted use of a FilterWire distal protection device (Boston Scientific) were reviewed. In patients in whom filter delivery initially failed, the success of attempted redeployment using adjunct delivery techniques was assessed. Two strategies were utilized sequentially: (1) a 0.014" moderate-stiffness hydrophilic guidewire was placed first to function as a parallel buddy wire to support subsequent FilterWire crossing; and (2) if the buddy-wire approach failed, predilation with a 2.0 mm balloon at low pressure was performed followed by reattempted filter delivery. 
The study population consisted of 80 men and 25 women aged 73 ± 10 years. Mean SVG age was 14 ± 6 years. Complex disease (American College of Cardiology/American Heart Association class B2 or C) was present in 92%. Initial delivery of the FilterWire was successful in 82/105 patients (78.1%). Of the 23 patients with initial failed delivery, 8 (35%) had successful deployment with a buddy wire alone, 7 (30%) had successful deployment with balloon predilation plus buddy wire, 4 (17%) had failed reattempt at deployment despite adjunct maneuvers, and in 4 (17%) no additional attempts at deployment were made at the operator's discretion. Deployment failure was reduced from 21.9% initially to 7.6% after use of adjunct delivery techniques (P<.01). No adverse events were observed with these measures. Deployment of distal protection devices can be technically difficult with complex SVG disease. Adjunct delivery techniques are important to optimize deployment success of distal protection filters during SVG intervention.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFM.P23A1234S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFM.P23A1234S"><span>CRISM Hyperspectral Data Filtering with Application to MSL Landing Site Selection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Seelos, F. P.; Parente, M.; Clark, T.; Morgan, F.; Barnouin-Jha, O. S.; McGovern, A.; Murchie, S. L.; Taylor, H.</p> <p>2009-12-01</p> <p>We report on the development and implementation of a custom filtering procedure for Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) IR hyperspectral data that is suitable for incorporation into the CRISM Reduced Data Record (RDR) calibration pipeline. 
Over the course of the Mars Reconnaissance Orbiter (MRO) Primary Science Phase (PSP) and the ongoing Extended Science Phase (ESP) CRISM has operated with an IR detector temperature between ~107 K and ~127 K. This ~20 K range in operational temperature has resulted in variable data quality, with observations acquired at higher detector temperatures exhibiting a marked increase in both systematic and stochastic noise. The CRISM filtering procedure consists of two main data processing capabilities. The primary systematic noise component in CRISM IR data appears as along-track, column-oriented striping. This is addressed by the robust derivation and application of an inter-column ratio correction frame. The correction frame is developed through the serial evaluation of band-specific column ratio statistics and so does not compromise the spectral fidelity of the image cube. The dominant CRISM IR stochastic noise components appear as isolated data spikes or column-oriented segments of variable length with erroneous data values. The non-systematic noise is identified and corrected through the application of an iterative-recursive kernel modeling procedure which employs a formal statistical outlier test as the iteration control and recursion termination criterion. This allows the filtering procedure to make a statistically supported determination between high-frequency (spatial/spectral) signal and high-frequency noise based on the information content of a given multidimensional data kernel. The governing statistical test also allows the kernel filtering procedure to be self-regulating and adaptive to the intrinsic noise level in the data. The CRISM IR filtering procedure is scheduled to be incorporated into the next augmentation of the CRISM IR calibration (version 3). The filtering algorithm will be applied to the I/F data (IF) delivered to the Planetary Data System (PDS), but the radiance on sensor data (RA) will remain unfiltered.
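The inter-column ratio idea, one robust gain per band and column derived from column statistics and divided out, can be sketched with NumPy on a synthetic cube; this is an illustrative toy, not the CRISM flight pipeline:

```python
import numpy as np

def destripe_columns(cube):
    """Remove along-track (column-oriented) striping from an image cube
    of shape (bands, rows, cols): estimate a robust per-(band, column)
    gain from column medians, normalise it by the scene level, and
    divide it out. Illustrative sketch only."""
    col_med = np.median(cube, axis=1, keepdims=True)    # (bands, 1, cols)
    scene = np.median(col_med, axis=2, keepdims=True)   # (bands, 1, 1)
    gain = col_med / scene                              # inter-column ratio frame
    return cube / gain

rng = np.random.default_rng(0)
flat = np.full((3, 64, 32), 100.0)                      # flat synthetic scene
stripes = rng.uniform(0.9, 1.1, size=(3, 1, 32))        # column gain errors
corrected = destripe_columns(flat * stripes)
```

Because the gain is estimated per band from that band's own column statistics, the correction does not mix information across wavelengths, which is the spectral-fidelity property the abstract emphasises.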
The development of CRISM hyperspectral analysis products in support of the Mars Science Laboratory (MSL) landing site selection process has motivated the advance of CRISM-specific data processing techniques. The quantitative results of the CRISM IR filtering procedure as applied to CRISM observations acquired in support of MSL landing site selection will be presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19720018151','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19720018151"><span>The heart sound preprocessor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chen, W. T.</p> <p>1972-01-01</p> <p>Technology developed for signal and data processing was applied to diagnostic techniques in the area of phonocardiography (pcg), the graphic recording of the sounds of the heart generated by the functioning of the aortic and ventricular valves. The relatively broad bandwidth of the PCG signal (20 to 2000 Hz) was reduced to less than 100 Hz by the use of a heart sound envelope. The process involves full-wave rectification of the PCG signal, envelope detection of the rectified wave, and low pass filtering of the resultant envelope.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19740056372&hterms=regeneration&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dregeneration','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19740056372&hterms=regeneration&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dregeneration"><span>Development of a filter regeneration system for advanced spacecraft fluid systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Behrend, A. 
F., Jr.; Descamp, V. A.</p> <p>1974-01-01</p> <p>The development of a filter regeneration system for efficiently cleaning fluid particulate filters is presented. Based on a backflush/jet impingement technique, the regeneration system demonstrated a cleaning efficiency of 98.7 to 100%. The operating principles and design features are discussed with emphasis on the primary system components that include a regenerable filter, vortex particle separator, and zero-g particle trap. Techniques and equipment used for ground and zero-g performance tests are described. Test results and conclusions, as well as possible areas for commercial application, are included.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20414352','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20414352"><span>Molecular surface mesh generation by filtering electron density map.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Giard, Joachim; Macq, Benoît</p> <p>2010-01-01</p> <p>Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the Van der Waals Surface or the Solvent Excluded Surface can be useful for several applications. We propose a fast and parameterizable algorithm giving good visual quality meshes representing molecular surfaces. It is obtained by isosurfacing a filtered electron density map. The density map is the result of the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier Transform of the density map.
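The pipeline described here, a maximum-of-Gaussians density map followed by an ideal low-pass filter in the Fourier domain, can be sketched with NumPy; the grid size, Gaussian width, and cutoff below are illustrative assumptions:

```python
import numpy as np

def density_map(atoms, shape, sigma=1.5):
    """Maximum of Gaussians centred on atom positions, on a 3-D grid."""
    grid = np.indices(shape).reshape(3, -1).T          # (N, 3) voxel coords
    dens = np.zeros(len(grid))
    for centre in atoms:
        d2 = np.sum((grid - centre) ** 2, axis=1)
        dens = np.maximum(dens, np.exp(-d2 / (2 * sigma ** 2)))
    return dens.reshape(shape)

def ideal_lowpass(volume, cutoff):
    """Zero all spatial frequencies above `cutoff` (cycles/voxel)."""
    spectrum = np.fft.fftn(volume)
    freqs = [np.fft.fftfreq(n) for n in volume.shape]
    fx, fy, fz = np.meshgrid(*freqs, indexing="ij")
    mask = np.sqrt(fx ** 2 + fy ** 2 + fz ** 2) <= cutoff
    return np.real(np.fft.ifftn(spectrum * mask))

vol = density_map(atoms=np.array([[8.0, 8.0, 8.0], [11.0, 8.0, 8.0]]),
                  shape=(16, 16, 16))
smooth = ideal_lowpass(vol, cutoff=0.25)
# Isosurfacing `smooth` (e.g. marching cubes at a chosen level) yields the mesh.
```

Lowering `cutoff` trades surface detail for smoothness, which is the parameterisation the abstract refers to.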
Applying the marching cubes algorithm on the inverse transform provides a mesh representation of the molecular surface.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhDT.......310S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhDT.......310S"><span>System health monitoring using multiple-model adaptive estimation techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sifford, Stanley Ryan</p> <p></p> <p>Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. 
Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012ISPAr39B7..317E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012ISPAr39B7..317E"><span>Cest Analysis: Automated Change Detection from Very-High Remote Sensing Images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.</p> <p>2012-08-01</p> <p>Fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for coordinating and planning relief efforts.
Through the availability of new satellites and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye) new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential for being a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment) with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan).
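The frequency-analysis step described above (FFT, band-pass mask, then differencing of the filtered images) can be sketched as follows; the mask radii and toy images are illustrative assumptions, not CEST's tuned values:

```python
import numpy as np

def bandpass_difference(img_a, img_b, low, high):
    """Frequency-domain change sketch: band-pass two co-registered images
    with the same annular mask in the FFT domain, then difference the
    filtered results to highlight changed structures."""
    def bandpass(img):
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
        fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
        r = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
        spectrum[(r < low) | (r > high)] = 0.0     # keep the band of interest
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return np.abs(bandpass(img_a) - bandpass(img_b))

before = np.zeros((64, 64))
after_img = before.copy()
after_img[20:30, 20:30] = 1.0                      # a new structure appears
change = bandpass_difference(before, after_img, low=0.02, high=0.3)
```

Suppressing the lowest frequencies removes illumination-level differences between acquisition dates, while the high cutoff discards sensor noise; what remains in `change` concentrates around the new structure's edges.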
CEST results are compared with a number of standard algorithms for automated change detection such as image differencing, image ratioing, principal component analysis, the delta cue technique, and post-classification change detection. The new combined method shows superior results, with accuracy improvements ranging from 15% to 45%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSV...418..184S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSV...418..184S"><span>Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shrivastava, Akash; Mohanty, A. R.</p> <p>2018-03-01</p> <p>This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response.
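At the core of such input-force estimation schemes is the linear Kalman filter's predict/update cycle. The sketch below uses a generic constant-velocity model rather than the paper's rotor-bearing state-space model, purely to show the recursion:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = A @ x                                   # predict state
    P = A @ P @ A.T + Q                         # predict covariance
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # correct with measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Track a constant-velocity motion from noisy position readings (assumed model).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])           # state: [position, velocity]
H = np.array([[1.0, 0.0]])                      # measure position only
Q = 1e-4 * np.eye(2)                            # process noise covariance
R = np.array([[0.25]])                          # measurement noise covariance
rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2)
true_pos = 0.0
for _ in range(200):
    true_pos += 2.0 * dt                        # true velocity = 2
    z = np.array([true_pos + rng.normal(0.0, 0.5)])
    x, P = kalman_step(x, P, z, A, H, Q, R)
```

The unmeasured velocity state is inferred through the model, which is analogous to how the paper's scheme infers the unmeasured unbalance force from limited response measurements.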
Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JInst..11C4019L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JInst..11C4019L"><span>New analysis methods to push the boundaries of diagnostic techniques in the environmental sciences</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lungaroni, M.; Murari, A.; Peluso, E.; Gelfusa, M.; Malizia, A.; Vega, J.; Talebzadeh, S.; Gaudio, P.</p> <p>2016-04-01</p> <p>In recent years, new and more sophisticated measurements have been at the basis of major progress in various disciplines related to the environment, such as remote sensing and thermonuclear fusion. To maximize the effectiveness of the measurements, new data analysis techniques are required. Initial data processing tasks, such as filtering and fitting, are of primary importance, since they can have a strong influence on the rest of the analysis. Although Support Vector Regression was devised and refined at the end of the 1990s, a systematic comparison with more traditional non-parametric regression methods has never been reported. In this paper, a series of systematic tests is described, which indicates how SVR is a very competitive method of non-parametric regression that can usefully complement and often outperform more consolidated approaches. The performance of Support Vector Regression as a method of filtering is investigated first, comparing it with the most popular alternative techniques.
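SVR itself requires a quadratic-programming solver, so as a stand-in here is a sketch of one of the traditional non-parametric regressors that such comparisons use as a baseline, a Nadaraya-Watson Gaussian-kernel smoother (the bandwidth and test signal are illustrative assumptions):

```python
import numpy as np

def kernel_smoother(x, y, x_eval, bandwidth=0.05):
    """Nadaraya-Watson kernel regression: each output is a Gaussian-
    weighted average of the observations, a classic non-parametric
    filter against which SVR is typically benchmarked."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, size=x.shape)  # noisy signal
y_smooth = kernel_smoother(x, y, x)
```

The bandwidth plays the role that the kernel width and regularisation constants play in SVR: too small and noise survives, too large and genuine structure (for Lidar data, real aerosol layers) is smoothed away.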
Then Support Vector Regression is applied to the problem of non-parametric regression to analyse Lidar surveys for environmental measurements of particulate matter due to wildfires. The proposed approach has given very positive results and provides new perspectives on the interpretation of the data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MNRAS.471.3323V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MNRAS.471.3323V"><span>Real-time colouring and filtering with graphics shaders</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.</p> <p>2017-11-01</p> <p>Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computations like filtering, smoothing and line-ratio algorithms can be integrated as part of the graphics pipeline.
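A transfer function in this volume-rendering sense maps voxel values to colour and opacity. The CPU-side NumPy sketch below mirrors what a fragment shader evaluates per sample; the control points are arbitrary illustrations, not any of the 12 published shaders:

```python
import numpy as np

def transfer_function(values, ctrl_x, ctrl_rgba):
    """Map scalar voxel values to RGBA by piecewise-linear interpolation
    between control points, the role a fragment shader plays per sample
    during volume rendering."""
    out = np.empty(values.shape + (4,))
    for c in range(4):                       # interpolate R, G, B, A separately
        out[..., c] = np.interp(values, ctrl_x, ctrl_rgba[:, c])
    return out

ctrl_x = np.array([0.0, 0.5, 1.0])
ctrl_rgba = np.array([[0.0, 0.0, 1.0, 0.0],   # low values: transparent blue
                      [0.0, 1.0, 0.0, 0.4],   # mid values: semi-opaque green
                      [1.0, 0.0, 0.0, 0.9]])  # high values: opaque red
vox = np.linspace(0.0, 1.0, 5)                # stand-in for voxel samples
rgba = transfer_function(vox, ctrl_x, ctrl_rgba)
```

Running this on the GPU instead (as a shading-language program) is what lets the paper's recolouring and filtering happen at interactive frame rates on full spectral cubes.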
We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19920000723&hterms=micro+bacteria&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dmicro%2Bbacteria','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19920000723&hterms=micro+bacteria&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dmicro%2Bbacteria"><span>Disinfecting Filters For Recirculated Air</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pilichi, Carmine A.</p> <p>1992-01-01</p> <p>Simple treatment disinfects air filters by killing bacteria, algae, fungi, mycobacteria, viruses, spores, and any other micro-organisms filters might harbor. Concept applied to reusable stainless-steel wire mesh filters and disposable air filters. Treatment used on filters in air-circulation systems in spacecraft, airplanes, other vehicles, and buildings to help prevent spread of colds, sore throats, and more-serious illnesses.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/3909605','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/3909605"><span>Comment on Vaknine, R. and Lorenz, W.J. 
Lateral filtering of medical ultrasonic B-scans before image generation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dickinson, R J</p> <p>1985-04-01</p> <p>In a recent paper, Vaknine and Lorenz discuss the merits of lateral deconvolution of demodulated B-scans. While this technique will decrease the lateral blurring of single discrete targets, such as the diaphragm in their figure 3, it is inappropriate to apply the method to the echoes arising from inhomogeneous structures such as soft tissue. In this latter case, the echoes from individual scatterers within the resolution cell of the transducer interfere to give random fluctuations in received echo amplitude termed speckle. Although this process can be modeled as a linear convolution similar to that of conventional image formation theory, demodulation is a nonlinear process which loses the all-important phase information, and prevents the subsequent restoration of the image by Wiener filtering, itself a linear process.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090025877','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090025877"><span>Advanced Signal Processing Techniques Applied to Terahertz Inspections on Aerospace Foams</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Trinh, Long Buu</p> <p>2009-01-01</p> <p>The space shuttle's external fuel tank is thermally insulated by closed-cell foams. However, natural voids composed of air and trapped gas are found as by-products when the foams are cured. Detection of foam voids and foam de-bonding is a formidable task owing to the small index of refraction contrast between foam and air (1.04:1).
In the presence of a denser binding matrix agent that bonds two different foam materials, time-differentiation of filtered terahertz signals can be employed to magnify information prior to the main substrate reflections. In the absence of a matrix binder, de-convolution of the filtered time differential terahertz signals is performed to reduce the masking effects of antenna ringing. The goal is simply to increase the probability of void detection through image enhancement and to determine the depth of the void.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008BAAA...51..335D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008BAAA...51..335D"><span>Distribución Espacial de Ancho Equivalente del Triplete del CaII a partir de Imágenes GMOS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Díaz, R. J.; Mast, D.</p> <p></p> <p>Using Gemini+GMOS imagery obtained through the filters i, z and CaT, we developed a technique for estimating the value of the Ca II triplet (CaT) equivalent width (EW). The map generated through arithmetic operations with the near infrared images was calibrated with long-slit spectra obtained with the REOSC spectrograph at CASLEO. We apply this technique to the study of the M 83 central region and present preliminary results on the spatial distribution of the EW(CaT) within a 40 × 40 arcsec area around the double nucleus of M 83, with a spatial resolution of 0.8 arcsec.
FULL TEXT IN SPANISH.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017OptCo.405..334A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017OptCo.405..334A"><span>Phase-step retrieval for tunable phase-shifting algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ayubi, Gastón A.; Duarte, Ignacio; Perciante, César D.; Flores, Jorge L.; Ferrari, José A.</p> <p>2017-12-01</p> <p>Phase-shifting (PS) is a well-known technique for phase retrieval in interferometry, with applications in deflectometry and 3D-profiling, which requires a series of intensity
measurements with certain phase-steps. Usually the phase-steps are evenly spaced, and knowledge of them is crucial for the phase retrieval. In this work we present a method to extract the phase-step between consecutive interferograms. We test the proposed technique with images corrupted by additive noise. The results were compared with other known methods. We also present experimental results showing the performance of the method when spatial filters are applied to the interferograms and the effect that they have on their relative phase-steps.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17688207','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17688207"><span>Spatially variant apodization for squinted synthetic aperture radar images.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo</p> <p>2007-08-01</p> <p>Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves sidelobe level and preserves resolution at the same time. This method implements a bidimensional finite impulse response filter with adaptive taps depending on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focusing on strip-map synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented.
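For reference to the phase-shifting entry above: with evenly spaced steps of π/2, the classic four-step formula recovers the wrapped phase exactly, which is the baseline that phase-step retrieval methods generalize to unknown or uneven steps:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Classic four-step phase-shifting retrieval with evenly spaced
    90-degree steps: I_k = A + B*cos(phi + k*pi/2), k = 0..3, so
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

x = np.linspace(0.0, 4 * np.pi, 256)
phi_true = np.angle(np.exp(1j * x))                     # wrapped test phase
frames = [1.0 + 0.5 * np.cos(phi_true + k * np.pi / 2)  # four interferograms
          for k in range(4)]
phi = four_step_phase(*frames)
```

When the actual steps deviate from π/2, for example after spatial filtering of the interferograms, this formula is biased, which is why extracting the real phase-step between consecutive interferograms matters.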
The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4029659','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4029659"><span>A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liang, Wei; Zhang, Yinlong; Tan, Jindong; Li, Yang</p> <p>2014-01-01</p> <p>This paper presents a novel approach to ECG signal filtering and classification. Unlike the traditional techniques which aim at collecting and processing the ECG signals with the patient being still, lying in bed in hospitals, our proposed algorithm is intentionally designed for monitoring and classifying the patient's ECG signals in the free-living environment. The patients are equipped with wearable ambulatory devices the whole day, which facilitates real-time heart attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave respectively in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. Then the corresponding interval features are selected and applied to categorize the ECG into normal type or abnormal type (PVC, APC) in the second HMM layer.
To verify the effectiveness of our algorithm on abnormal signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted, displayed and the corresponding classification outcomes are deduced and shown on the BSN screen. PMID:24681668</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1175646','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1175646"><span>Filter and method of fabricating</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Janney, Mark A.</p> <p>2006-02-14</p> <p>A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986MiJo...29..127M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986MiJo...29..127M"><span>Lumped element filters for electronic warfare systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Morgan, D.; Ragland, R.</p> <p>1986-02-01</p> <p>Future generations of electronic warfare (EW) systems must satisfy increasing demands, including a reduction in the size of the equipment. The present paper is concerned with lumped element filters which can make a significant contribution to the downsizing of advanced EW systems.
Lumped element filter design makes it possible to obtain very small package sizes by utilizing classical low frequency inductive and capacitive components which are small compared to the size of a wavelength. Cost-effective, temperature-stable devices can be obtained on the basis of new design techniques. Attention is given to aspects of design flexibility, an interdigital filter equivalent circuit diagram, conditions for which the use of lumped element filters can be recommended, construction techniques, a design example, and questions regarding the application of lumped element filters to EW processing systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28073586','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28073586"><span>Experimental comparison of point-of-use filters for drinking water ultrafiltration.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Totaro, M; Valentini, P; Casini, B; Miccoli, M; Costa, A L; Baggiani, A</p> <p>2017-06-01</p> <p>Waterborne pathogens such as Pseudomonas spp. and Legionella spp. may persist in hospital water networks despite chemical disinfection. Point-of-use filtration represents a physical control measure that can be applied in high-risk areas to contain the exposure to such pathogens. New technologies have enabled an extension of filters' lifetimes and have made available faucet hollow-fibre filters for water ultrafiltration. To compare point-of-use filters applied to cold water within their period of validity. Faucet hollow-fibre filters (filter A), shower hollow-fibre filters (filter B) and faucet membrane filters (filter C) were contaminated in two different sets of tests with standard bacterial strains (Pseudomonas aeruginosa DSM 939 and Brevundimonas diminuta ATCC 19146) and installed at points-of-use. 
Every day, from each faucet, 100 L of water was flushed. Before and after flushing, 250 mL of water was collected and analysed for microbiology. Filter C showed a high capacity for microbial retention; filter B released only low Brevundimonas spp. counts; filter A showed poor retention of both micro-organisms. Hollow-fibre filters did not show good micro-organism retention. All point-of-use filters require appropriate maintenance of structural parameters to ensure their efficiency. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19997343','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19997343"><span>The longitudinal offset technique for apodization of coupled resonator optical waveguide devices: concept and fabrication tolerance analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Doménech, José David; Muñoz, Pascual; Capmany, José</p> <p>2009-11-09</p> <p>In this paper, a novel technique to set the coupling constant between cells of a coupled resonator optical waveguide (CROW) device, in order to tailor the filter response, is presented. The technique is demonstrated by simulation assuming a racetrack ring resonator geometry. It consists of changing the effective length of the coupling section by applying a longitudinal offset between the resonators. On the contrary, conventional techniques are based on a transversal change of the distance between the ring resonators, in steps that are commonly below the current fabrication resolution step (nm scale), leading to strong restrictions in the designs.
The proposed longitudinal offset technique allows more precise control of the coupling and presents increased robustness against fabrication limitations, since the needed resolution step is two orders of magnitude higher. Both techniques are compared in terms of the transmission response of CROW devices, under finite fabrication resolution steps.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005PNAS..10210421T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005PNAS..10210421T"><span>A tool for filtering information in complex systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.</p> <p>2005-07-01</p> <p>We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.
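The minimum-spanning-tree backbone that the planar filtered graph extends can be sketched as follows. This is a generic illustration using the common correlation-to-distance mapping d = sqrt(2(1 − ρ)) and a plain Prim's algorithm, not the authors' code:

```python
import numpy as np

def mst_filter(corr):
    """Filter a correlation matrix down to its minimum spanning tree.

    Converts correlations to the metric distance d_ij = sqrt(2*(1 - rho_ij))
    often used for correlation-based graphs, then grows the tree with Prim's
    algorithm. Returns the N-1 retained edges as (i, j) pairs -- the backbone
    that a genus-0 planar filtered graph augments with loops and cliques.
    """
    n = corr.shape[0]
    dist = np.sqrt(2.0 * (1.0 - corr))
    in_tree = [0]          # start the tree from node 0
    edges = []
    while len(in_tree) < n:
        best = None        # cheapest edge crossing the tree boundary
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                if best is None or dist[i, j] < best[0]:
                    best = (dist[i, j], i, j)
        edges.append((best[1], best[2]))
        in_tree.append(best[2])
    return edges
```

With strongly correlated nodes mapped to short distances, the tree keeps the most representative links, mirroring the hierarchical skeleton described above.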
This paper was submitted directly (Track II) to the PNAS office. Abbreviations: MST, minimum spanning tree; PMFG, Planar Maximally Filtered Graph; r-clique, clique of r elements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20971689','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20971689"><span>Simple and rapid analytical method for detection of amino acids in blood using blood spot on filter paper, fast-GC/MS and isotope dilution technique.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kawana, Shuichi; Nakagawa, Katsuhiro; Hasegawa, Yuki; Yamaguchi, Seiji</p> <p>2010-11-15</p> <p>A simple and rapid method for quantitative analysis of amino acids, including valine (Val), leucine (Leu), isoleucine (Ile), methionine (Met) and phenylalanine (Phe), in whole blood has been developed using GC/MS. In this method, whole blood was collected using a filter paper technique, and a 1/8 in. blood spot punch was used for sample preparation. Amino acids were extracted from the sample, and the extracts were purified using cation-exchange resins. The isotope dilution method using ²H₈-Val, ²H₃-Leu, ²H₃-Met and ²H₅-Phe as internal standards was applied. Following propyl chloroformate derivatization, the derivatives were analyzed using fast-GC/MS. The extraction recoveries using these techniques ranged from 69.8% to 87.9%, and analysis time for each sample was approximately 26 min. Calibration curves at concentrations from 0.0 to 1666.7 μmol/l for Val, Leu, Ile and Phe and from 0.0 to 333.3 μmol/l for Met showed good linearity with regression coefficients = 1. The method detection limits for Val, Leu, Ile, Met and Phe were 24.2, 16.7, 8.7, 1.5 and 12.9 μmol/l, respectively.
This method was applied to blood spot samples obtained from patients with phenylketonuria (PKU), maple syrup urine disease (MSUD), hypermethionine and neonatal intrahepatic cholestasis caused by citrin deficiency (NICCD), and the analysis results showed that the concentrations of amino acids that characterize these diseases were increased. These results indicate that this method provides a simple and rapid procedure for precise determination of amino acids in whole blood. Copyright © 2010 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18072488','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18072488"><span>Dual-energy approach to contrast-enhanced mammography using the balanced filter method: spectral optimization and preliminary phantom measurement.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Saito, Masatoshi</p> <p>2007-11-01</p> <p>Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. 
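The clutter cancellation underlying any dual-energy approach can be illustrated with the standard weighted log-subtraction. The function below is a simplifying sketch with an assumed single weight, not the paper's balanced-filter spectral optimization:

```python
import numpy as np

def dual_energy_subtract(low, high, w):
    """Weighted log-subtraction for dual-energy contrast imaging (sketch).

    Subtracts weighted log-intensities of the low- and high-energy images;
    the weight w is tuned so that soft-tissue (glandular/adipose) contrast
    cancels while the iodinated contrast agent signal survives.
    """
    return np.log(high) - w * np.log(low)
```

As a toy check: if tissue attenuates the two beams as exp(-2t) and exp(-t), then w = 0.5 cancels the tissue term exactly, and any extra iodine attenuation with a different low/high ratio leaves a nonzero residual in the subtraction image.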
This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm² iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1257191','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1257191"><span>Methods and apparatuses using filter banks for multi-carrier spread spectrum signals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A.</p> <p>2016-06-14</p> <p>A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses.
A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006MNRAS.373..747P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006MNRAS.373..747P"><span>High dynamic range imaging by pupil single-mode filtering and remapping</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Perrin, G.; Lacour, S.; Woillez, J.; Thiébaut, É.</p> <p>2006-12-01</p> <p>Because of atmospheric turbulence, obtaining high angular resolution images with a high dynamic range is difficult even in the near-infrared domain of wavelengths. We propose a novel technique to overcome this issue. The fundamental idea is to apply techniques developed for long baseline interferometry to the case of a single-aperture telescope. The pupil of the telescope is broken down into coherent subapertures each feeding a single-mode fibre. A remapping of the exit pupil allows all subapertures to interfere non-redundantly. A diffraction-limited image with very high dynamic range is reconstructed from the fringe pattern analysis with aperture synthesis techniques, free of speckle noise. The performance of the technique is demonstrated with simulations in the visible range with an 8-m telescope.
Raw dynamic ranges of 1:10⁶ can be obtained in only a few tens of seconds of integration time for bright objects.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMEP41A0594G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMEP41A0594G"><span>Filtering Raw Terrestrial Laser Scanning Data for Efficient and Accurate Use in Geomorphologic Modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.</p> <p>2011-12-01</p> <p>Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation.
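The local-aggregation step of such a filter can be sketched with a simple cubic binning of the point cloud. The cell shape and the error model here are assumptions for illustration, not the authors' exact 3-D search algorithm:

```python
import numpy as np

def aggregate_points(points, cell):
    """Local aggregation of a TLS point cloud (illustrative sketch).

    Bins an (N, 3) array of points into cubic cells of edge length `cell`
    (which would, in the method summarized above, reflect the scanner's
    expected random error) and collapses each occupied cell to its centroid,
    removing redundancy and evening out the point density.
    """
    keys = np.floor(points / cell).astype(np.int64)
    out = {}
    for key, p in zip(map(tuple, keys), points):
        acc = out.setdefault(key, [np.zeros(3), 0])
        acc[0] += p   # running sum of coordinates in this cell
        acc[1] += 1   # point count in this cell
    return np.array([s / c for s, c in out.values()])
```

Two nearby points fall in the same cell and emerge as one centroid, while isolated points pass through unchanged; this is the mechanism behind the >90 percent volume reduction reported above.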
Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JKPS...72.1078H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JKPS...72.1078H"><span>Control of the Low-energy X-rays by Using MCNP5 and Numerical Analysis for a New Concept Intra-oral X-ray Imaging System</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huh, Jangyong; Ji, Yunseo; Lee, Rena</p> <p>2018-05-01</p> <p>An X-ray control algorithm to modulate the X-ray intensity distribution over the FOV (field of view) has been developed by using numerical analysis and MCNP5, a particle transport simulation code on the basis of the Monte Carlo method. X-rays, which are widely used in medical diagnostic imaging, should be controlled in order to maximize the performance of the X-ray imaging system. However, X-rays cannot be transported the way a liquid or a gas is conveyed through a physical form such as pipes.
In the present study, an X-ray control algorithm and technique to uniformize the X-ray intensity projected on the image sensor were developed using a flattening filter and a collimator in order to alleviate the anisotropy of the distribution of X-rays due to intrinsic features of the X-ray generator. The proposed method, which is combined with MCNP5 modeling and numerical analysis, aimed to optimize a flattening filter and a collimator for a uniform distribution of X-rays. Their size and shape were estimated from the method. The simulation and the experimental results both showed that the method yielded an intensity distribution over an X-ray field of 6×4 cm² at an SID (source to image-receptor distance) of 5 cm with a uniformity of more than 90% when the flattening filter and the collimator were mounted on the system. The proposed algorithm and technique are not confined to flattening filter development but can also be applied to other X-ray related research and development efforts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100006915','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100006915"><span>A Systematic Approach for Model-Based Aircraft Engine Performance Estimation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Simon, Donald L.; Garg, Sanjay</p> <p>2010-01-01</p> <p>A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications.
The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22167418-identifying-ionized-regions-noisy-redshifted-cm-data-sets','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22167418-identifying-ionized-regions-noisy-redshifted-cm-data-sets"><span>IDENTIFYING IONIZED REGIONS IN NOISY REDSHIFTED 21 cm DATA SETS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Malloy, Matthew; Lidz, Adam, E-mail: mattma@sas.upenn.edu</p> <p></p> <p>One of the most promising approaches for studying reionization is to use the redshifted 21 cm line. Early generations of redshifted 21 cm surveys will not, however, have the sensitivity to make detailed maps of the reionization process, and will instead focus on statistical measurements. Here, we show that it may nonetheless be possible to directly identify ionized regions in upcoming data sets by applying suitable filters to the noisy data. The locations of prominent minima in the filtered data correspond well with the positions of ionized regions. In particular, we corrupt semi-numeric simulations of the redshifted 21 cm signal during reionization with thermal noise at the level expected for a 500 antenna tile version of the Murchison Widefield Array (MWA), and mimic the degrading effects of foreground cleaning. Using a matched filter technique, we find that the MWA should be able to directly identify ionized regions despite the large thermal noise.
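The search for prominent minima in the filtered data can be sketched in one dimension with a top-hat template; the template shape and function name below are stand-ins for illustration, not the survey's actual bubble profile or pipeline:

```python
import numpy as np

def matched_filter_minima(signal, width):
    """Matched-filter search for an ionized region in a noisy 1-D skewer.

    Correlates the data with a normalized top-hat template of the given
    width (a stand-in for the ionized-bubble profile). Averaging over the
    template suppresses thermal noise by ~sqrt(width), so the most prominent
    minimum of the filtered signal marks the candidate ionized region.
    """
    template = np.ones(width) / width
    filtered = np.convolve(signal, template, mode="same")
    return int(np.argmin(filtered)), filtered
```

A 21 cm deficit (the ionized bubble) buried in per-pixel noise becomes the clear global minimum after filtering, which is the effect the abstract describes for the MWA mock data.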
In a plausible fiducial model in which ≈20% of the volume of the universe is neutral at z ≈ 7, we find that a 500-tile MWA may directly identify as many as ≈150 ionized regions in a 6 MHz portion of its survey volume and roughly determine the size of each of these regions. This may, in turn, allow interesting multi-wavelength follow-up observations, comparing galaxy properties inside and outside of ionized regions. We discuss how the optimal configuration of radio antenna tiles for detecting ionized regions with a matched filter technique differs from the optimal design for measuring power spectra. These considerations have potentially important implications for the design of future redshifted 21 cm surveys.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003SPIE.5146..127L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003SPIE.5146..127L"><span>Selection vector filter framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.</p> <p>2003-10-01</p> <p>We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on the robust order-statistic theory and the minimization of the weighted distance function to other input samples. The proposed method can be designed to perform a variety of filtering operations including previously developed filtering techniques such as vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters.
A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method has the required properties such as the capability of modelling the underlying system in the application at hand, the robustness with respect to errors in the model of the underlying system, the availability of the training procedure and finally, the simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhyA..390.4486Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhyA..390.4486Z"><span>Negative ratings play a positive role in information filtering</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zeng, Wei; Zhu, Yu-Xiao; Lü, Linyuan; Zhou, Tao</p> <p>2011-11-01</p> <p>The explosive growth of information calls for advanced information filtering techniques to solve the so-called information overload problem. A promising way is the recommender system which analyzes the historical records of users’ activities and accordingly provides personalized recommendations.
Most recommender systems can be represented by user-object bipartite networks where users can evaluate and vote for objects, and ratings such as “dislike” and “I hate it” are treated straightforwardly as negative factors or are completely ignored in traditional approaches. Applying a local diffusion algorithm on three benchmark data sets, MovieLens, Netflix and Amazon, our study arrives at a very surprising result, namely that negative ratings may play a positive role, especially for very sparse data sets. In-depth analysis at the microscopic level indicates that the negative ratings from less active users to less popular objects could probably have positive impacts on the recommendations, while the ones connecting active users and popular objects should mostly be treated negatively. We finally outline the significant relevance of our results to the two long-term challenges in information filtering: the sparsity problem and the cold-start problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4200768','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4200768"><span>Recovering Bioactive Compounds from Olive Oil Filter Cake by Advanced Extraction Techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lozano-Sánchez, Jesús; Castro-Puyana, María; Mendiola, Jose A.; Segura-Carretero, Antonio; Cifuentes, Alejandro; Ibáñez, Elena</p> <p>2014-01-01</p> <p>The potential of by-products generated during extra-virgin olive oil (EVOO) filtration as a natural source of phenolic compounds (with demonstrated bioactivity) has been evaluated using pressurized liquid extraction (PLE) and considering mixtures of two GRAS (generally recognized as safe) solvents (ethanol and water) at temperatures ranging from 40 to
175 °C. The extracts were characterized by high-performance liquid chromatography (HPLC) coupled to diode array detection (DAD) and electrospray time-of-flight mass spectrometry (HPLC-DAD-ESI-TOF/MS) to determine the phenolic composition of the filter cake. The best isolation procedure to extract the phenolic fraction from the filter cake was accomplished using ethanol and water (50:50, v/v) at 120 °C. The main phenolic compounds identified in the samples were characterized as phenolic alcohols or derivatives (hydroxytyrosol and its oxidation product), secoiridoids (decarboxymethylated and hydroxylated forms of oleuropein and ligstroside aglycones), flavones (luteolin and apigenin) and elenolic acid derivatives. The PLE extraction process can be applied to produce enriched extracts with applications as bioactive food ingredients, as well as nutraceuticals. PMID:25226536</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29952752','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29952752"><span>Online EEG artifact removal for BCI applications by adaptive spatial filtering.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guarnieri, Roberto; Marino, Marco; Barban, Federico; Ganzetti, Marco; Mantini, Dante</p> <p>2018-06-28</p> <p>The performance of brain-computer interfaces (BCIs) based on electroencephalography (EEG) data strongly depends on the effective attenuation of artifacts that are mixed in the recordings. To address this problem, we have developed a novel online EEG artifact removal method for BCI applications, which combines blind source separation (BSS) and regression (REG) analysis. The BSS-REG method relies on the availability of a calibration dataset of limited duration for the initialization of a spatial filter using BSS. Online artifact removal is implemented by dynamically adjusting the spatial filter in the actual experiment, based on a linear regression technique. Our results showed that the BSS-REG method is capable of attenuating different kinds of artifacts, including ocular and muscular, while preserving true neural activity. Thanks to its low computational requirements, BSS-REG can be applied to low-density as well as high-density EEG data. We argue that BSS-REG may enable the development of novel BCI applications requiring high-density recordings, such as source-based neurofeedback and closed-loop neuromodulation.
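The regression stage of a BSS-REG-style cleanup can be sketched as an ordinary least-squares fit. This is an illustrative reconstruction, not the authors' code; the function name and the assumption that artifact time courses are already available (e.g., from BSS on a calibration dataset) are ours:

```python
import numpy as np

def regress_out_artifacts(eeg, artifact_refs):
    """Remove artifact contributions from EEG by least-squares regression.

    eeg:           (n_channels, n_samples) raw recording
    artifact_refs: (n_refs, n_samples) artifact time courses, assumed to
                   come from BSS on a calibration dataset
    Returns the EEG with the best linear fit of the references removed.
    """
    # Solve eeg.T ~ artifact_refs.T @ w for the per-channel mixing weights w.
    w, *_ = np.linalg.lstsq(artifact_refs.T, eeg.T, rcond=None)
    return eeg - (artifact_refs.T @ w).T
```

Because the fit is linear, components of the EEG that are orthogonal to the artifact references pass through unchanged, which is the sense in which "true neural activity" is preserved.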
© 2018 IOP Publishing Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120016019','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120016019"><span>Adaptive Control of Linear Modal Systems Using Residual Mode Filters and a Simple Disturbance Estimator</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Balas, Mark; Frost, Susan</p> <p>2012-01-01</p> <p>Flexible structures containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown modeling parameters and poorly known operating conditions. In this paper, we focus on a direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend our adaptive control theory to accommodate troublesome modal subsystems of a plant that might inhibit the adaptive controller. In some cases the plant does not satisfy the requirements of Almost Strict Positive Realness. Instead, there may be a modal subsystem that inhibits this property. We present new results for our adaptive control theory, modifying the adaptive controller with a Residual Mode Filter (RMF) to compensate for the troublesome modal subsystem, or the Q modes. We then present the theory for adaptive controllers modified by RMFs, with attention to the issue of disturbances propagating through the Q modes.
We apply the theoretical results to a flexible structure example to illustrate the behavior with and without the residual mode filter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4063007','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4063007"><span>An Adaptive Scheme for Robot Localization and Mapping with Dynamically Configurable Inter-Beacon Range Measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal</p> <p>2014-01-01</p> <p>This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. 
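The range-only measurement update at the core of an EKF SLAM filter like the one above can be sketched as follows. This is a generic textbook EKF step for an assumed 2-D geometry with a known beacon, not the paper's PF-EKF implementation (which additionally initializes beacons with particle filters and manages inter-beacon measurements):

```python
import numpy as np

def ekf_range_update(x, P, beacon, z, r_var):
    """One extended-Kalman-filter update for a single range-only measurement.

    x: (2,) position estimate, P: (2, 2) covariance, beacon: (2,) known
    beacon position, z: measured range, r_var: range noise variance.
    """
    d = x - beacon
    pred = np.hypot(d[0], d[1])               # predicted range
    H = (d / pred).reshape(1, 2)              # Jacobian of the range model
    S = (H @ P @ H.T).item() + r_var          # innovation covariance
    K = (P @ H.T) / S                         # Kalman gain, shape (2, 1)
    x_new = x + K[:, 0] * (z - pred)
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new
```

A single range observation only constrains the radial direction toward the beacon, which is why schemes like the one above gather many measurements, including inter-beacon ones, to pin down positions.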
PMID:24776938</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24776938','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24776938"><span>An adaptive scheme for robot localization and mapping with dynamically configurable inter-beacon range measurements.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal</p> <p>2014-04-25</p> <p>This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. 
It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010JPRS...65..165P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010JPRS...65..165P"><span>Delineation and geometric modeling of road networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Poullis, Charalambos; You, Suya</p> <p></p> <p>In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width and orientation).
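The Gabor-filtering stage of such a perceptual-grouping pipeline can be illustrated with a small orientation-selection sketch. This is plain NumPy with hypothetical kernel parameters of our own choosing; the paper's actual filter bank and the subsequent tensor-voting step are considerably more involved:

```python
import numpy as np

def gabor_kernel(theta, ksize=15, sigma=3.0, lam=6.0):
    """Real-valued Gabor kernel at orientation theta (illustrative parameters)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def dominant_orientation(image, n_orients=8):
    """Return the filter-bank orientation with the strongest response energy."""
    thetas = [k * np.pi / n_orients for k in range(n_orients)]
    energies = []
    for th in thetas:
        k = gabor_kernel(th)
        # circular 2-D convolution via FFT, to keep the sketch dependency-free
        resp = np.fft.irfft2(np.fft.rfft2(image) * np.fft.rfft2(k, image.shape))
        energies.append(np.sum(resp**2))
    return thetas[int(np.argmax(energies))]
```

A per-pixel version of the same idea (keeping the response map rather than a summed energy) is what feeds the orientation-based segmentation described in the abstract.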
This information is then used for creating road segments and transforming them to their polygonal representations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010JEI....19b1105G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010JEI....19b1105G"><span>Adaptive color demosaicing and false color removal</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria</p> <p>2010-04-01</p> <p>Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides the interpolation technique to apply to each pixel, according to its neighborhood analysis. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions are then identified and low-pass filtered to eliminate some residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors.
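The edge-directed interpolation idea can be illustrated for a single missing green sample. This is a generic gradient-based sketch, not the proposed algorithm itself: interpolating along the direction with the smaller gradient follows edges rather than crossing them, which is what suppresses zipper artifacts:

```python
import numpy as np

def interpolate_green(bayer, row, col):
    """Edge-directed green interpolation at one red/blue Bayer site (sketch).

    Compares horizontal and vertical gradients of the green neighbours and
    averages along the direction with the smaller gradient.
    """
    left, right = bayer[row, col - 1], bayer[row, col + 1]
    up, down = bayer[row - 1, col], bayer[row + 1, col]
    grad_h, grad_v = abs(left - right), abs(up - down)
    if grad_h < grad_v:                  # smoother horizontally: follow that direction
        return (left + right) / 2.0
    if grad_v < grad_h:                  # smoother vertically
        return (up + down) / 2.0
    return (left + right + up + down) / 4.0   # flat region: plain 4-neighbour average
```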
The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012APS..DFDD14009T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012APS..DFDD14009T"><span>Global Temperature Measurement of Supercooled Water under Icing Conditions using Two-Color Luminescent Images and Multi-Band Filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tanaka, Mio; Morita, Katsuaki; Kimura, Shigeo; Sakaue, Hirotaka</p> <p>2012-11-01</p> <p>Icing occurs when a supercooled-water droplet collides with a surface, and can be seen in any cold area. Great attention is paid to aircraft icing. To understand the icing process on an aircraft, it is necessary to obtain temperature information for the supercooled water. A conventional technique, such as a thermocouple, is not viable, because the sensor itself becomes a collision surface that accumulates ice. We introduce dual-luminescent imaging to capture a global temperature distribution of supercooled water under icing conditions. It consists of two-color luminescent probes and a multi-band filter. One of the probes is sensitive to temperature and the other is independent of it. The latter is used to cancel the temperature-independent luminescence in the temperature-dependent image caused by uneven illumination and camera placement. The multi-band filter selects only the luminescent peaks of the probes to enhance the temperature sensitivity of the imaging system.
By applying the system, the time-resolved temperature information of a supercooled-water droplet is captured.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27858248','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27858248"><span>Morphology filter bank for extracting nodular and linear patterns in medical images.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki</p> <p>2017-04-01</p> <p>Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level. The synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. In a quantitative evaluation on a synthesized image, our proposed method outperforms a conventional method based on a Hessian matrix, often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels.
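A highly simplified version of the analysis-bank idea can be sketched with morphological openings: a disk structuring element keeps blob-like (nodular) structures, while line structuring elements at several orientations keep elongated (linear) ones. The element sizes and orientations here are our own choices, and this single-scale sketch omits the paper's multiresolution decomposition and perfect-reconstruction synthesis bank:

```python
import numpy as np
from scipy.ndimage import grey_opening

def separate_patterns(image, size=7):
    """Split an image into nodular and linear components (single-scale sketch)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    disk = (x**2 + y**2) <= half**2
    nodular = grey_opening(image, footprint=disk)      # survives only where a disk fits

    lines = [np.eye(size, dtype=bool),                 # 45 degrees
             np.flipud(np.eye(size, dtype=bool)),      # 135 degrees
             np.zeros((size, size), bool),
             np.zeros((size, size), bool)]
    lines[2][half, :] = True                           # horizontal
    lines[3][:, half] = True                           # vertical
    # a structure survives the max of line openings if a line fits in some direction
    linear = np.max([grey_opening(image, footprint=se) for se in lines], axis=0)
    return nodular, linear
```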
Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AIPC.1660g0111A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AIPC.1660g0111A"><span>The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida</p> <p>2015-05-01</p> <p>Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Given the need for prompt and accurate diagnosis of malaria, the current study proposes an unsupervised pixel segmentation approach based on clustering algorithms in order to obtain fully segmented red blood cells (RBCs) infected with malaria parasites, based on thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image in order to segment the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms have been proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image as well as to remove unwanted regions such as small background pixels.
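The fuzzy c-means step of such a cascaded clustering scheme can be sketched as follows. This is a standard FCM on 1-D intensities, not the authors' implementation, and the parameter choices (fuzzifier m, iteration count) are ours:

```python
import numpy as np

def fuzzy_c_means(data, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on 1-D intensity data (a minimal sketch).

    data: (n,) samples; returns (centers, memberships), where memberships
    is (c, n) with columns summing to one.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(data)))
    u /= u.sum(axis=0)                        # normalise initial memberships
    for _ in range(n_iter):
        w = u ** m
        centers = (w @ data) / w.sum(axis=1)  # membership-weighted means
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))             # closer centers get higher membership
        u /= u.sum(axis=0)
    return centers, u
```

In the cascaded scheme described above, the moving k-means output would seed this step rather than the random initialization used in the sketch.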
Finally, a seeded region growing area extraction algorithm has been applied in order to remove large unwanted regions that still appear on the image and that, due to their size, cannot be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the proposed cascaded clustering algorithm with the MKM and FCM clustering algorithms. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produced the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy values compared to the segmentation results provided by the MKM and FCM algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70188405','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70188405"><span>Destriping of Landsat MSS images by filtering techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Pan, Jeng-Jong; Chang, Chein-I</p> <p>1992-01-01</p> <p>The removal of striping noise encountered in Landsat Multispectral Scanner (MSS) images can generally be done by using frequency filtering techniques. Frequency domain filtering has, however, several problems, such as the storage limitation of data required for fast Fourier transforms, ringing artifacts appearing at high-intensity discontinuities, and edge effects between adjacent filtered data sets. One way of circumventing the above difficulties is to design a spatial filter to convolve with the images.
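A minimal illustration of such a stripe-suppressing spatial filter: a short FIR kernel whose zeros sit exactly at the known stripe frequencies of 1/6, 1/3, and 1/2 cycles per line. This is a simple zero-cascade sketch, not the linear-programming/Remez design used in the paper:

```python
import numpy as np

def stripe_notch_kernel():
    """1-D FIR kernel with exact zeros at the MSS stripe frequencies.

    Cascades conjugate zero pairs at 1/6, 1/3 and 1/2 cycles per line and
    normalises the DC gain to one; applied along the scan direction, it
    annihilates the periodic striping while passing the mean level.
    """
    h = np.array([1.0])
    for f in (1 / 6, 1 / 3, 1 / 2):
        zero_pair = np.array([1.0, -2.0 * np.cos(2 * np.pi * f), 1.0])
        h = np.convolve(h, zero_pair)   # place zeros at exp(+-2j*pi*f)
    return h / h.sum()                  # unit gain at DC
```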
Because it is known that the striping always appears at frequencies of 1/6, 1/3, and 1/2 cycles per line, it is possible to design a simple one-dimensional spatial filter to take advantage of this a priori knowledge to cope with the above problems. The desired filter is of the finite impulse response type, which can be designed by linear programming and Remez's exchange algorithm coupled with an adaptive technique. In addition, a four-step spatial filtering technique with an appropriate adaptive approach is also presented, which may be particularly useful for geometrically rectified MSS images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010lyot.confE..66S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010lyot.confE..66S"><span>Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Savransky, D.; Groff, T. D.; Kasdin, N. J.</p> <p>2010-10-01</p> <p>We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source.
We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19760039852&hterms=application+Fourier&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dapplication%2BFourier','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19760039852&hterms=application+Fourier&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dapplication%2BFourier"><span>Applications of charge-coupled device transversal filters to communication</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Buss, D. D.; Bailey, W. H.; Brodersen, R. W.; Hewes, C. R.; Tasch, A. F., Jr.</p> <p>1975-01-01</p> <p>The paper discusses the computational power of state-of-the-art charge-coupled device (CCD) transversal filters in communications applications. Some of the performance limitations of CCD transversal filters are discussed, with attention given to time delay and bandwidth, imperfect charge transfer efficiency, weighting coefficient error, noise, and linearity. The application of CCD transversal filters to matched filtering, spectral filtering, and Fourier analysis is examined.
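A transversal (FIR) matched filter of the kind such devices implement, and of the kind used for the planet-detection experiments above, can be sketched as a correlation with the time-reversed template: the tap weights of the transversal filter are the template samples in reverse order, and the output peaks where the template occurs:

```python
import numpy as np

def matched_filter(signal, template):
    """Transversal (FIR) matched filter output (sketch).

    Convolving with the time-reversed, normalised template is equivalent
    to sliding-correlation with the template itself.
    """
    taps = template[::-1] / np.linalg.norm(template)
    return np.convolve(signal, taps, mode='valid')

def detect(signal, template):
    """Index at which the template best matches the signal."""
    return int(np.argmax(matched_filter(signal, template)))
```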
Techniques for making programmable transversal filters are briefly outlined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23252633','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23252633"><span>Ring-oven based preconcentration technique for microanalysis: simultaneous determination of Na, Fe, and Cu in fuel ethanol by laser induced breakdown spectroscopy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cortez, Juliana; Pasquini, Celio</p> <p>2013-02-05</p> <p>The ring-oven technique, originally applied to classical qualitative analysis from the 1950s to the 1970s, is revisited to be used in a simple though highly efficient and green procedure for analyte preconcentration prior to its determination by the microanalytical techniques presently available. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center to a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, of about 2.0 cm diameter, is formed on the paper to contain most of the analytes originally present in the sample volume. Preconcentration coefficients of the analyte can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure have been evaluated to concentrate Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour, employing the microanalytical technique of laser induced breakdown spectroscopy (LIBS).
Detection limits of 0.7, 0.4, and 0.3 μg mL(-1) and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)%, for Na, Fe, and Cu, respectively, were obtained in fuel ethanol. It is possible to anticipate the application of the technique, coupled to modern microanalytical and multianalyte techniques, to several analytical problems requiring analyte preconcentration and/or sample stabilization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1185830-situ-strain-temperature-measurement-modelling-during-arc-welding','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1185830-situ-strain-temperature-measurement-modelling-during-arc-welding"><span>In situ strain and temperature measurement and modelling during arc welding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Chen, Jian; Yu, Xinghua; Miller, Roger G.; ...</p> <p>2014-12-26</p> <p>In this study, experiments and numerical models were applied to investigate the thermal and mechanical behaviours of materials adjacent to the weld pool during arc welding. In the experiment, a new high temperature strain measurement technique based on digital image correlation (DIC) was developed and applied to measure the in situ strain evolution. In contrast to the conventional DIC method that is vulnerable to the high temperature and intense arc light involved in fusion welding processes, the new technique utilised a special surface preparation method to produce high temperature sustaining speckle patterns required by the DIC algorithm as well as a unique optical illumination and filtering system to suppress the influence of the intense arc light. These efforts made it possible for the first time to measure in situ the strain field 1 mm away from the fusion line.
The temperature evolution in the weld and the adjacent regions was simultaneously monitored by an infrared camera. Finally, a thermal–mechanical finite element model was applied to substantiate the experimental measurements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.G33C..07B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.G33C..07B"><span>Independent Assessment of ITRF Site Velocities using GPS Imaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Blewitt, G.; Hammond, W. C.; Kreemer, C.; Altamimi, Z.</p> <p>2015-12-01</p> <p>The long-term stability of ITRF is critical to the most challenging scientific applications such as the slow variation of sea level, and of ice sheet loading in Greenland and Antarctica. In 2010, the National Research Council recommended aiming for stability at the level of 1 mm/decade in the ITRF origin and scale. This requires that the ITRF include many globally-distributed sites with motions that are predictable to within a few mm/decade, with a significant number of sites having collocated stations of multiple techniques. Quantifying the stability of ITRF stations can be useful to understand the stability of ITRF parameters, and to help the selection and weighting of ITRF stations. Here we apply a new suite of techniques for an independent assessment of ITRF site velocities. Our "GPS Imaging" suite is founded on the principle that, for the case of large numbers of data, the trend can be estimated objectively, automatically, robustly, and accurately by applying non-parametric techniques, which use quantile statistics (e.g., the median). At the foundation of GPS Imaging is the estimator "MIDAS" (Median Interannual Difference Adjusted for Skewness).
MIDAS estimates the velocity with a realistic error bar based on sub-sampling the coordinate time series. MIDAS is robust to step discontinuities, outliers, seasonality, and heteroscedasticity. Common-mode noise filters enhance regional- to continental-scale precision in MIDAS estimates, just as they do for standard estimation techniques. Secondly, in regions where there is sufficient spatial sampling, GPS Imaging uses MIDAS velocity estimates to generate a regionally-representative velocity map. For this we apply a median spatial filter to despeckle the maps. We use GPS Imaging to address two questions: (1) How well do the ITRF site velocities derived by parametric estimation agree with non-parametric techniques? (2) Are ITRF site velocities regionally representative? These questions aim to get a handle on (1) the accuracy of ITRF site velocities as a function of characteristics of contributing station data, such as number of step parameters and total time span; and (2) evidence of local processes affecting site velocity, which may impact site stability. 
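The core of the MIDAS estimator, as described above, can be sketched as the median of velocities formed from coordinate pairs one year apart; pairing at exactly one year cancels seasonal cycles, and the median is insensitive to steps and outliers. This sketch omits the skewness adjustment and error-bar computation of the full estimator:

```python
import numpy as np

def midas_velocity(t, y, tol=0.01):
    """MIDAS-style trend: median of one-year coordinate differences (sketch).

    t: epochs in decimal years; y: coordinate series. Forms all pairs
    separated by one year (within tol years) and returns the median rate.
    """
    t = np.asarray(t)
    y = np.asarray(y)
    dt = t[None, :] - t[:, None]
    i, j = np.nonzero(np.abs(dt - 1.0) < tol)   # pairs ~1 year apart
    rates = (y[j] - y[i]) / (t[j] - t[i])
    return float(np.median(rates))
```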
Such quantification can be used to rank stations in terms of the risk that they may pose to the stability of ITRF.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1434006-source-synchronous-filter-uncorrelated-receiver-traces-from-swept-frequency-seismic-source','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1434006-source-synchronous-filter-uncorrelated-receiver-traces-from-swept-frequency-seismic-source"><span>A source-synchronous filter for uncorrelated receiver traces from a swept-frequency seismic source</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Lord, Neal; Wang, Herbert; Fratta, Dante</p> <p>2016-09-01</p> <p>We have developed a novel algorithm to reduce noise in signals obtained from swept-frequency sources by removing out-of-band external noise sources and distortion caused by unwanted harmonics. The algorithm is designed to condition nonstationary signals for which traditional frequency-domain methods for removing noise have been less effective. The source synchronous filter (SSF) is a time-varying narrow band filter, which is synchronized with the frequency of the source signal at all times. Because the bandwidth of the filter needs to account for the source-to-receiver propagation delay and the sweep rate, SSF works best with slow sweep rates and moveout-adjusted waveforms to compensate for source-receiver delays. The SSF algorithm was applied to data collected during a field test at the University of California Santa Barbara's Garner Valley downhole array site in Southern California. At the site, a 45 kN shaker was mounted on top of a one-story structure and swept from 0 to 10 Hz and back over 60 s (producing useful seismic waves greater than 1.6 Hz).
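One way to realize a time-varying narrowband filter that tracks a swept source is to demodulate the trace by the known instantaneous source phase, low-pass the complex baseband, and remodulate. This is our own sketch of the idea (with a crude moving-average low-pass), not the authors' SSF algorithm:

```python
import numpy as np

def source_synchronous_filter(trace, phase, bw_taps=101):
    """Sketch of a source-synchronous filter for a swept-frequency source.

    trace: receiver samples; phase: instantaneous source phase (radians)
    at the same samples. Mixing by exp(-1j*phase) shifts the in-band
    component to DC, the moving average rejects everything off the sweep
    (out-of-band noise, harmonics), and remodulation restores the signal.
    """
    baseband = trace * np.exp(-1j * phase)             # demodulate to DC
    kernel = np.ones(bw_taps) / bw_taps                # crude low-pass stand-in
    smooth = np.convolve(baseband, kernel, mode='same')
    return 2.0 * np.real(smooth * np.exp(1j * phase))  # remodulate
```

The moving-average bandwidth plays the role of the SSF bandwidth discussed above: it must be wide enough to accommodate propagation delay and sweep rate, which is why slow sweeps and moveout-adjusted waveforms help.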
The seismic data were captured with small accelerometer and geophone arrays and with a distributed acoustic sensing array, a fiber-optic-based technique for the monitoring of elastic waves. The result of applying the SSF to the field data is a set of undistorted, uncorrelated traces that can be used in different applications, such as measuring phase velocities of surface waves or applying convolution operations with the encoder source function to obtain traveltimes. Lastly, the results from the SSF were used with a visual phase alignment tool to facilitate developing dispersion curves and as a prefilter to improve the interpretation of the data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29907413','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29907413"><span>Development and optimization of a solid-phase microextraction gas chromatography-tandem mass spectrometry methodology to analyse ultraviolet filters in beach sand.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vila, Marlene; Llompart, Maria; Garcia-Jares, Carmen; Homem, Vera; Dagnac, Thierry</p> <p>2018-06-06</p> <p>A methodology based on solid-phase microextraction (SPME) followed by gas chromatography-tandem mass spectrometry (GC-MS/MS) has been developed for the simultaneous analysis of eleven multiclass ultraviolet (UV) filters in beach sand. To the best of our knowledge, this is the first time that this extraction technique has been applied to the analysis of UV filters in sand samples, or indeed to any other kind of environmental solid sample. The main extraction parameters, such as the fibre coating, the amount of sample, the addition of salt, the volume of water added to the sand, and the temperature, were optimized.
An experimental design approach was implemented in order to find the most favourable conditions. The final conditions consisted of adding 1 mL of water to 1 g of sample followed by headspace SPME for 20 min at 100 °C, using PDMS/DVB as the fibre coating. The SPME-GC-MS/MS method was validated in terms of linearity, accuracy, limits of detection and quantification, and precision. Recovery studies were also performed at three concentration levels in real Atlantic and Mediterranean sand samples. The recoveries were generally above 85% and relative standard deviations below 11%. The limits of detection were at the pg g⁻¹ level. The validated methodology was successfully applied to the analysis of real sand samples collected from Atlantic Ocean beaches on the northwest coast of Spain and Portugal and in the Canary Islands (Spain), and from Mediterranean Sea beaches on Mallorca Island (Spain). The most frequently found UV filters were ethylhexyl salicylate (EHS), homosalate (HMS), 4-methylbenzylidene camphor (4MBC), 2-ethylhexyl methoxycinnamate (2EHMC) and octocrylene (OCR), with concentrations up to 670 ng g⁻¹. Copyright © 2018 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29519050','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29519050"><span>Full-color large-scaled computer-generated holograms using RGB color filters.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tsuchiyama, Yasuhiro; Matsushima, Kyoji</p> <p>2017-02-06</p> <p>A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28816067','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28816067"><span>Post-acquisition data mining techniques for LC-MS/MS-acquired data in drug metabolite identification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dhurjad, Pooja Sukhdev; Marothu, Vamsi Krishna; Rathod, Rajeshwari</p> <p>2017-08-01</p> <p>Metabolite identification is a crucial part of the drug discovery process.
LC-MS/MS-based metabolite identification has gained widespread use, but the data acquired by the LC-MS/MS instrument is complex, and thus the interpretation of data becomes troublesome. Fortunately, advancements in data mining techniques have simplified the process of data interpretation with improved mass accuracy and provide a potentially selective, sensitive, accurate and comprehensive way for metabolite identification. In this review, we have discussed the targeted (extracted ion chromatogram, mass defect filter, product ion filter, neutral loss filter and isotope pattern filter) and untargeted (control sample comparison, background subtraction and metabolomic approaches) post-acquisition data mining techniques, which facilitate the drug metabolite identification. We have also discussed the importance of integrated data mining strategy.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li class="active"><span>24</span></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> 
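Of the targeted techniques listed above, the mass defect filter is the easiest to sketch: metabolites usually retain a mass defect close to the parent drug's, so ions whose defect falls outside a small window can be discarded. The parent mass and the ±50 mDa window below are illustrative assumptions, not values from the review:

```python
def mass_defect_filter(mz_values, parent_mz, window_mda=50.0):
    """Keep m/z values whose mass defect (fractional mass) lies within
    +/- window_mda millidaltons of the parent compound's mass defect."""
    parent_defect = parent_mz - round(parent_mz)
    keep = []
    for mz in mz_values:
        defect = mz - round(mz)
        if abs(defect - parent_defect) * 1000.0 <= window_mda:
            keep.append(mz)
    return keep
```

In practice the window is often widened or tiered for phase II conjugates, whose defects shift further from the parent's.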
</div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820005272','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820005272"><span>Sensor failure detection system. [for the F100 turbofan engine]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.</p> <p>1981-01-01</p> <p>Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon techniques such as Kalman filtering, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum of squared residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguring of the normal-mode Kalman filter by eliminating the failed input to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique.
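The soft-failure detection step named above — a weighted sum of squared Kalman-filter residuals compared against a threshold — might be sketched as follows, assuming the residual sequence and its covariance are supplied by an upstream Kalman filter:

```python
import numpy as np

def wssr_alarm(residuals, cov, threshold):
    """Per-step weighted sum of squared residuals r^T V^-1 r; an alarm
    is raised whenever the statistic exceeds the threshold."""
    Vinv = np.linalg.inv(cov)
    # quadratic form for every time step t at once
    stats = np.einsum('ti,ij,tj->t', residuals, Vinv, residuals)
    return stats > threshold, stats
```

Under no-failure conditions the statistic is approximately chi-squared with as many degrees of freedom as there are residual channels, which is one way to pick the threshold; practical schemes usually also sum the statistic over a short sliding window to trade detection delay against false alarms.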
The advanced concept was shown to be viable for detecting, isolating, and accommodating sensor failures in gas turbine applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=recommender+AND+system&pg=4&id=EJ888516','ERIC'); return false;" href="https://eric.ed.gov/?q=recommender+AND+system&pg=4&id=EJ888516"><span>Automating "Word of Mouth" to Recommend Classes to Students: An Application of Social Information Filtering Algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Booker, Queen Esther</p> <p>2009-01-01</p> <p>An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21342295','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21342295"><span>Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rajab, Maher I</p> <p>2011-11-01</p> <p>Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas.
This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. 
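A minimal sketch of the combination the study found superior — Fourier low-pass pre-filtering followed by two-class k-means on pixel intensity — assuming a single-channel image; the fuzzy variant, hair modelling, and border-irregularity analysis are omitted:

```python
import numpy as np

def lowpass_kmeans_segment(img, cutoff=0.3, iters=20):
    """Fourier low-pass the image, then 2-class k-means on intensity.
    cutoff is the kept fraction of the frequency radius (0..1)."""
    # ideal elliptical low-pass mask in the centred frequency domain
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    mask = (xx**2 / (w / 2)**2 + yy**2 / (h / 2)**2) <= cutoff**2
    smooth = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    # two-cluster k-means on intensities (background vs lesion)
    c = np.array([smooth.min(), smooth.max()], dtype=float)
    for _ in range(iters):
        d = np.abs(smooth[..., None] - c)   # distance of each pixel to each centre
        lab = d.argmin(-1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = smooth[lab == k].mean()
    return lab
```

The low-pass stage suppresses hair and sensor noise before clustering, which is essentially why the paper reports the combined pipeline outperforming clustering alone.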
© 2011 John Wiley & Sons A/S.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013NIMPA.699..112O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013NIMPA.699..112O"><span>Development of signal processing system of avalanche photo diode for space observations by Astro-H</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ohno, M.; Goto, K.; Hanabata, Y.; Takahashi, H.; Fukazawa, Y.; Yoshino, M.; Saito, T.; Nakamori, T.; Kataoka, J.; Sasano, M.; Torii, S.; Uchiyama, H.; Nakazawa, K.; Watanabe, S.; Kokubun, M.; Ohta, M.; Sato, T.; Takahashi, T.; Tajima, H.</p> <p>2013-01-01</p> <p>Astro-H is the sixth Japanese X-ray space observatory, which will be launched in 2014. Two of the onboard instruments of Astro-H, the Hard X-ray Imager and the Soft Gamma-ray Detector, are surrounded by a large number of Bismuth Germanate (Bi4Ge3O12; BGO) scintillators. An optimum readout system for the scintillation light from these BGOs is essential to reduce background and achieve high performance in the main detectors, because most gamma-rays arriving from outside the main detectors' field of view, or produced by radio-isotopes created inside them through activation, can be eliminated by an anti-coincidence technique using the BGO signals. We adopt Avalanche Photo Diodes (APDs) as the light sensors for these BGO detectors, since their compactness and high quantum efficiency ease the design of such a large BGO detector system. For the APD signal processing, digital filters and trigger logic on a Field-Programmable Gate Array (FPGA) are used instead of discrete analog circuits, owing to the limited circuit implementation area on the spacecraft. For efficient observations, the digital filtering must achieve as low an anti-coincidence threshold as possible.
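The two FPGA filters described above differ mainly in tap count and coefficient precision. A sketch of the coefficient-quantization trade-off (the Hanning prototype below is an illustrative choice, not the flight filter):

```python
import numpy as np

def quantize_taps(taps, bits):
    """Quantize FIR coefficients to signed fixed point with `bits` bits,
    then map them back to floats for comparison with the prototype."""
    taps = np.asarray(taps, dtype=float)
    scale = 2 ** (bits - 1) - 1           # largest representable code
    peak = np.max(np.abs(taps))
    q = np.round(taps * scale / peak)     # integer codes
    return q / scale * peak

# a 16-tap low-pass prototype, normalized to unit DC gain
proto = np.hanning(16)
proto /= proto.sum()

# quick veto path: short filter, very coarse coefficients
coarse = quantize_taps(proto[:8] / proto[:8].sum(), 2)
# precise path: full length, 8-bit coefficients
fine = quantize_taps(proto, 8)
```

The 2-bit version collapses the taps onto a handful of levels (cheap in FPGA logic, crude in frequency response), while the 8-bit version reproduces the prototype almost exactly, mirroring the fast-veto versus offline-analysis split in the abstract.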
In addition, such anti-coincidence signals should be sent to the main detector within 5 μs to arrive in time to veto the A-D conversion. Considering this requirement and the constraint on FPGA logic size, we adopt two types of filter: an 8-tap filter with only 2-bit-precision coefficients and a 16-tap filter with 8-bit-precision coefficients. The former, simpler filter provides the anti-coincidence signal quickly in orbit, while the latter is used for detailed analysis after the data are down-linked.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10421E..1FS','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10421E..1FS"><span>Applying a particle filtering technique for canola crop growth stage estimation in Canada</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi</p> <p>2017-10-01</p> <p>Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field-level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on parameters extracted from SAR imagery.
The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1426655-alpha-air-sample-counting-efficiency-versus-dust-loading-evaluation-large-data-set','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1426655-alpha-air-sample-counting-efficiency-versus-dust-loading-evaluation-large-data-set"><span>Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.</p> <p></p> <p>Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. 
As is typically the case, dust-loading data are not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors to the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and with analytical laboratory results from digested filters. Finally, a linear fit is proposed for evenly-deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm⁻² to 1,000 mg cm⁻².</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1423237','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1423237"><span>Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava</p> <p></p> <p>Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments.
In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as the Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9999E..0HA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9999E..0HA"><span>Neural network retrievals of Karenia brevis harmful algal blooms in the West Florida Shelf (Conference Presentation)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ahmed, Samir; El-Habashi, Ahmed</p> <p>2016-10-01</p> <p>Effective detection and tracking of Karenia brevis Harmful Algal Blooms (KB HABs) that frequently plague the coasts and beaches of the West Florida Shelf (WFS) is important because of their negative impacts on ecology. They pose threats to fisheries and human health, and directly affect tourism and local economies. Detection and tracking capabilities are needed for use with the Visible Infrared Imaging Radiometer Suite (VIIRS) satellite, so that HABs monitoring capabilities, which previously relied on imagery from the Moderate Resolution Imaging Spectroradiometer Aqua (MODIS-A), can be extended to VIIRS.
Unfortunately, VIIRS, unlike its predecessor MODIS-A, does not have a 678 nm channel to detect chlorophyll fluorescence, which is used in the normalized fluorescence height (nFLH) algorithm, or in the Red Band Difference (RBD) algorithm. Both these techniques have demonstrated that the remote sensing reflectance signal from the MODIS-A fluorescence band (Rrs 678 nm) helps in effectively detecting and tracking KB HABs in the WFS. To overcome the lack of a fluorescence channel on VIIRS, the approach described here, bypasses the need for measurements at 678nm, and permits extension of KB HABs satellite monitoring to VIIRS. The essence of the approach is the application of a standard multiband neural network (NN) inversion algorithm, previously developed and reported by us, that takes VIIRS Rrs measurements at the 486, 551 and 671nm bands as inputs, and produces as output the related Inherent Optical Properties (IOPs), namely: absorption coefficients of phytoplankton (aph443) dissolved organic matter (ag) and non-algal particulates (adm) as well as the particulate backscatter coefficient, (bbp) all at 443nm. We next need to relate aph443 in the VIIRS NN retrieved image to equivalent KB HABs concentrations. To do this, we apply additional constraints, defined by (i) low backscatter manifested as a maximum Rrs551 value and (ii) a minimum [Chla] threshold (and hence an equivalent minimum aph443min value) that are both known to be associated with KB HABs in the WFS. These two constraining filter processes are applied sequentially to the VIIRS NN retrieved aph443 image. First an image is made of retrieved VIIRS Rrs551. A mask is then made of all pixels with Rrs551≥ Rrs551max, the maximum value known to be compatible with the existence KB HABs. This is applied, as a filter to the VIIRS NN retrieved aph443 image to exclude pixels with Rrs551≥ Rrs551max. The residual image will then only show aph443 values that comply with Rrs551≤ Rrs551max. 
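The sequential masking just described reduces to two thresholding passes over the retrieved aph443 image. A sketch with placeholder thresholds (the operational Rrs551max and aph443min values are not reproduced here):

```python
import numpy as np

def kb_hab_mask(aph443, rrs551, rrs551_max=0.007, aph443_min=0.02):
    """Sequentially apply the two KB HAB constraints: keep only pixels with
    low backscatter (rrs551 below rrs551_max) and sufficient phytoplankton
    absorption (aph443 above aph443_min); other pixels become NaN.
    The threshold defaults are placeholders, not the published values."""
    hab = np.where(rrs551 < rrs551_max, aph443, np.nan)  # low-backscatter filter
    hab = np.where(hab > aph443_min, hab, np.nan)        # minimum-absorption filter
    return hab
```

Because the two masks commute, the order of application does not change the final image, although the abstract describes them as sequential steps.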
Then, in a second filter process, all values of aph443 ≤ aph443min are eliminated. The residual image will now only show aph443 values that are compatible with both criteria for KB HABs, and are therefore representative of KB HABs. It will be shown that when both these filter conditions are applied to VIIRS NN aph443 retrievals, they can be used to effectively delineate and quantify KB HABs in the WFS. The KB HABs retrieved in this manner also show good correlations with in-situ KB HABs measurements as well as with nFLH retrievals and other techniques to which the same filtering criteria have been applied, confirming the viability of the approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4029942','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4029942"><span>An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Foussier, Jerome; Teichmann, Daniel; Jia, Jing; Misgeld, Berno; Leonhardt, Steffen</p> <p>2014-01-01</p> <p>Background Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, i.e. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. Methods We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance.
Sensor and system noise covariance matrices are automatically adapted to the aimed application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. Also, the performance, depending on the initialization of the filters, is analyzed using three different configurations ranging from best to worst case. Results Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects the adaptive filter obtains mean errors (standard deviations) of -0.2 min⁻¹ (0.3 min⁻¹) and -0.7 bpm (1.7 bpm) (compared to -0.2 min⁻¹ (0.4 min⁻¹) and 42.0 bpm (6.1 bpm) for the non-adaptive filter) for respiration and heart rate, respectively. In bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that adapting the Kalman filter continuously improves the ability to separate the desired signals from the raw sensor data. The average total computational time needed for the Kalman filters is under 25% of the total signal length rendering it possible to perform the filtering in real-time. Conclusions It is possible to measure in real-time heart and breathing rates using an adaptive Kalman filter approach. Adapting the Kalman filter matrices improves the estimation results and makes the filter universally deployable when measuring cardiorespiratory signals.
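A drastically reduced sketch of the adaptive idea: a scalar random-walk Kalman filter tracks the large, slowly varying baseline offset while its measurement-noise covariance is adapted from the innovation power, so the residual retains the cardiorespiratory band. The published system separates three channels and feeds back rate estimates, which this sketch omits:

```python
import numpy as np

def adaptive_offset_kalman(z, q=1e-4, r0=1.0, alpha=0.98):
    """Random-walk Kalman filter for the baseline offset of a 1-D signal.
    The measurement noise R is adapted from the innovation power with
    forgetting factor alpha; z - offset then retains the faster
    cardiorespiratory components."""
    x, p, r = float(z[0]), 1.0, r0
    offset = np.empty(len(z), dtype=float)
    for k, zk in enumerate(z):
        p = p + q                                # predict (random-walk state)
        innov = zk - x                           # innovation
        r = alpha * r + (1 - alpha) * innov**2   # adapt R to observed power
        kgain = p / (p + r)
        x = x + kgain * innov                    # update state
        p = (1 - kgain) * p
        offset[k] = x
    return offset
```

Adapting R makes the gain shrink automatically when the cardiorespiratory components dominate the innovations, which is the same mechanism that lets the full filter cope with differing subjects and sensor properties without retuning.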
PMID:24886253</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24886253','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24886253"><span>An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Foussier, Jerome; Teichmann, Daniel; Jia, Jing; Misgeld, Berno; Leonhardt, Steffen</p> <p>2014-05-09</p> <p>Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, i.e. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the aimed application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. Also, the performance, depending on the initialization of the filters, is analyzed using three different configurations ranging from best to worst case. Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. 
In the worst case for one of the subjects the adaptive filter obtains mean errors (standard deviations) of -0.2 min⁻¹ (0.3 min⁻¹) and -0.7 bpm (1.7 bpm) (compared to -0.2 min⁻¹ (0.4 min⁻¹) and 42.0 bpm (6.1 bpm) for the non-adaptive filter) for respiration and heart rate, respectively. In bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that adapting the Kalman filter continuously improves the ability to separate the desired signals from the raw sensor data. The average total computational time needed for the Kalman filters is under 25% of the total signal length, rendering it possible to perform the filtering in real-time. It is possible to measure heart and breathing rates in real-time using an adaptive Kalman filter approach. Adapting the Kalman filter matrices improves the estimation results and makes the filter universally deployable when measuring cardiorespiratory signals.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21707676','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21707676"><span>Detection of respiratory viruses on air filters from aircraft.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Korves, T M; Johnson, D; Jones, B W; Watson, J; Wolk, D M; Hwang, G M</p> <p>2011-09-01</p> <p>To evaluate the feasibility of identifying viruses from aircraft cabin air, we tested whether respiratory viruses trapped by commercial aircraft air filters can be extracted and detected using a multiplex PCR, bead-based assay. 
The ResPlex II assay was first tested for its ability to detect inactivated viruses applied to new filter material; all 18 applications of virus at a high concentration were detected. The ResPlex II assay was then used to test for 18 respiratory viruses on 48 used air filter samples from commercial aircraft. Three samples tested positive for viruses, and three viruses were detected: rhinovirus, influenza A and influenza B. For 33 of 48 samples, internal PCR controls performed suboptimally, suggesting a sample matrix effect. In some cases, influenza and rhinovirus RNA can be detected on aircraft air filters, even more than 10 days after the filters were removed from aircraft. With protocol modifications to overcome PCR inhibition, air filter sampling and the ResPlex II assay could be used to characterize viruses in aircraft cabin air. Information about viruses in aircraft could support public health measures to reduce disease transmission within aircraft and between cities. © The MITRE Corporation. 
Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1014900','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1014900"><span>Telemetry Modernization with Open Architecture Software-Defined Radio Technology</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2016-01-01</p> <p>digital (A/D) converters and separated into narrowband channels through digital down-conversion (DDC) techniques implemented in field-programmable... [Block diagram: wideband tuner → A/D → FPGA DDC filter banks → channels 1…n → recording and operations center]</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22156395-entrapment-guide-wire-inferior-vena-cava-filter-technique-removal','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22156395-entrapment-guide-wire-inferior-vena-cava-filter-technique-removal"><span>Entrapment of Guide Wire in an Inferior Vena Cava Filter: A Technique for Removal</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Abdel-Aal, Ahmed Kamel, E-mail: akamel@uabmc.edu; Saddekni, Souheil; Hamed, Maysoon Farouk</p> <p></p> <p>Entrapment of a central venous catheter (CVC) guide wire in an inferior vena cava (IVC) filter is a rare but reported complication during CVC placement. With the increasing use of vena cava filters (VCFs), this number will most likely continue to grow. The consequences of this complication can be serious, as continued traction upon the guide wire may result in filter dislodgement and migration, filter fracture, or injury to the IVC. 
We describe a case in which a J-tipped guide wire introduced through a left subclavian access without fluoroscopic guidance during CVC placement was entrapped at the apex of an IVC filter. We describe a technique that we used successfully in removing the entrapped wire through the left subclavian access site. We also present simple, useful recommendations to prevent this complication.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.6814E..0FS','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.6814E..0FS"><span>Rule-based fuzzy vector median filters for 3D phase contrast MRI segmentation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sundareswaran, Kartik S.; Frakes, David H.; Yoganathan, Ajit P.</p> <p>2008-02-01</p> <p>Recent technological advances have contributed to the advent of phase contrast magnetic resonance imaging (PCMRI) as standard practice in clinical environments. In particular, decreased scan times have made using the modality more feasible. PCMRI is now a common tool for flow quantification, and for more complex vector field analyses that target the early detection of problematic flow conditions. Segmentation is one component of this type of application that can impact the accuracy of the final product dramatically. Vascular segmentation, in general, is a long-standing problem that has received significant attention. Segmentation in the context of PCMRI data, however, has been explored less and can benefit from object-based image processing techniques that incorporate fluids-specific information. Here we present a fuzzy rule-based adaptive vector median filtering (FAVMF) algorithm that in combination with active contour modeling facilitates high-quality PCMRI segmentation while mitigating the effects of noise. 
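The core operation of any vector median filter, selecting the neighborhood vector with the smallest aggregate distance to its peers, can be sketched as a stand-alone helper (an illustrative version; `vector_median` and the sample neighborhood are hypothetical, and the paper's fuzzy rule-based weighting is omitted):

```python
import numpy as np

def vector_median(vectors):
    """Return the vector median of a set of d-dimensional vectors.

    The vector median is the input vector minimizing the sum of
    Euclidean distances to all other vectors in the set, the core
    operation of vector median filtering for vector-valued images.
    """
    v = np.asarray(vectors, dtype=float)                  # shape (n, d)
    # Pairwise distance matrix, shape (n, n).
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return v[np.argmin(d.sum(axis=1))]

# A 3x3 neighborhood of 2D velocity vectors with one noisy outlier:
# unlike component-wise (scalar) filtering, the output is always one
# of the measured vectors, and the outlier cannot be selected.
nbhd = [(1.0, 0.1), (0.9, 0.0), (1.1, 0.2),
        (1.0, 0.0), (9.0, -7.0), (1.0, 0.1),
        (0.8, 0.1), (1.0, 0.2), (1.1, 0.0)]
print(vector_median(nbhd))  # one of the inlier vectors, not the outlier
```

This directional robustness is one reason the abstract argues for vector-based rather than scalar filtering of PCMRI velocity fields.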
The FAVMF technique was tested on 111 synthetically generated PC MRI slices and on 15 patients with congenital heart disease. The results were compared to other multi-dimensional filters namely the adaptive vector median filter, the adaptive vector directional filter, and the scalar low pass filter commonly used in PC MRI applications. FAVMF significantly outperformed the standard filtering methods (p < 0.0001). Two conclusions can be drawn from these results: a) Filtering should be performed after vessel segmentation of PC MRI; b) Vector based filtering methods should be used instead of scalar techniques.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22121027-grid-artifact-reduction-direct-digital-radiography-detectors-based-rotated-stationary-grids-homomorphic-filtering','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22121027-grid-artifact-reduction-direct-digital-radiography-detectors-based-rotated-stationary-grids-homomorphic-filtering"><span>Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kim, Dong Sik; Lee, Sanggyun</p> <p>2013-06-15</p> <p>Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. 
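The multiplicative model and log-domain filtering can be sketched for a one-dimensional grid pattern (an illustrative toy with assumed parameters; `homomorphic_grid_suppress`, the Gaussian notch shape, and the synthetic image are hypothetical, not the authors' algorithm):

```python
import numpy as np

def homomorphic_grid_suppress(img, grid_freq, bw=0.02):
    """Suppress a multiplicative 1-D grid pattern via homomorphic filtering.

    Model: img = scene * grid. Taking the log makes the grid additive,
    a band-stop (notch) filter around the grid frequency removes it,
    and exp() returns to the multiplicative domain.
    """
    log_img = np.log(np.maximum(img, 1e-6))
    spec = np.fft.fft(log_img, axis=0)        # grid varies along rows
    f = np.fft.fftfreq(img.shape[0])
    # Gaussian notch centered on the grid frequency; ~1 elsewhere.
    notch = 1.0 - np.exp(-((np.abs(f) - grid_freq) ** 2) / (2 * bw ** 2))
    spec *= notch[:, None]
    return np.exp(np.real(np.fft.ifft(spec, axis=0)))

# Synthetic example: a flat scene multiplied by a 0.2 cycles/pixel grid.
n = 256
rows = np.arange(n)[:, None]
scene = np.full((n, n), 100.0)
grid = 1.0 + 0.3 * np.cos(2 * np.pi * 0.2 * rows)
cleaned = homomorphic_grid_suppress(scene * grid, grid_freq=0.2)
print(cleaned.std() < (scene * grid).std())  # grid ripple largely removed
```

Because the grid is multiplicative, filtering in the log domain needs a much narrower stop band than filtering the raw intensities, which echoes the abstract's point about narrow-bandwidth filters.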
To minimize the damage caused by the filters used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9651E..08R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9651E..08R"><span>Ultra-wide-band 3D microwave imaging scanner for the detection of concealed weapons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rezgui, Nacer-Ddine; Andrews, David A.; Bowring, Nicholas J.</p> <p>2015-10-01</p> <p>The threat of concealed weapons, explosives and contraband in footwear, bags and suitcases has led to the development of new devices, which can be deployed for security screening. 
To address known deficiencies of metal detectors and x-rays, a UWB 3D microwave imaging scanner using stepped-frequency FMCW in the K and Q bands, with a planar scanning geometry based on an x-y stage, has been developed to screen suspicious luggage and footwear. To obtain microwave images of the concealed weapons, the targets are placed above the platform and the single transceiver horn antenna attached to the x-y stage is moved mechanically to perform a raster scan, creating a 2D synthetic aperture array. The S11 reflection signal of the transmitted frequency sweep from the target is acquired by a VNA in synchronism with each position step. To filter clutter and noise from the raw data and to obtain the 2D and 3D microwave images of the concealed weapons or explosives, data processing techniques are applied to the acquired signals. These techniques include background subtraction, Inverse Fast Fourier Transform (IFFT), thresholding, filtering by gating and windowing, and deconvolution with the transfer function of the system using a reference target. To focus the 3D reconstructed microwave image of the target in range and across the x-y aperture without using focusing elements, 3D Synthetic Aperture Radar (SAR) techniques are applied to the post-processed data. The K and Q bands, between 15 and 40 GHz, show good transmission through clothing and dielectric materials found in luggage and footwear. 
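The heart of this processing chain, background subtraction, windowing, and an IFFT that maps stepped-frequency reflection data to down-range distance, can be sketched for a single point target (a hypothetical illustration; `range_profile` and all parameters are assumptions, not the authors' full SAR pipeline):

```python
import numpy as np

c = 3e8  # speed of light, m/s

def range_profile(s11, s11_background, f_start=15e9, f_stop=40e9):
    """Stepped-frequency reflection data -> down-range profile.

    Background subtraction removes static antenna/fixture reflections,
    a Hann window suppresses IFFT sidelobes, and the IFFT maps the
    frequency axis to range (zero-padded for finer range bins).
    """
    sig = (s11 - s11_background) * np.hanning(len(s11))
    profile = np.abs(np.fft.ifft(sig, n=8 * len(sig)))
    df = (f_stop - f_start) / (len(s11) - 1)
    r_max = c / (2 * df)                      # unambiguous range
    ranges = np.linspace(0, r_max, len(profile), endpoint=False)
    return ranges, profile

# Synthetic point target at 0.5 m (two-way phase 4*pi*f*R/c) plus a
# static background term that the subtraction removes.
freqs = np.linspace(15e9, 40e9, 501)
target = np.exp(-1j * 4 * np.pi * freqs * 0.5 / c)
background = 0.5 * np.ones_like(freqs, dtype=complex)
ranges, prof = range_profile(target + background, background)
print(round(float(ranges[np.argmax(prof)]), 2))  # peak near 0.5 m
```

Repeating this per x-y stage position and then back-propagating across the synthetic aperture is what the SAR focusing step adds on top.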
A description of the system, algorithms and some results with replica guns and a comparison of microwave images obtained by IFFT, 2D and 3D SAR techniques are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1011045','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1011045"><span>Ballistic Imaging and Scattering Measurements for Diesel Spray Combustion: Optical Development and Phenomenological Studies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2016-04-01</p> <p>The technique is demonstrated in a cell filled with polystyrene spheres in a water suspension. The impact of spatial filtering, temporal filtering, and scattering path length on image resolution is reported.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040140880&hterms=cluster&qs=N%3D0%26Ntk%3DTitle%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dcluster','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040140880&hterms=cluster&qs=N%3D0%26Ntk%3DTitle%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dcluster"><span>Distant Cluster Hunting. II; A Comparison of X-Ray and Optical Cluster Detection Techniques and Catalogs from the ROSAT Optical X-Ray Survey</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Donahue, Megan; Scharf, Caleb A.; Mack, Jennifer; Lee, Y. Paul; Postman, Marc; Rosati, Piero; Dickinson, Mark; Voit, G. Mark; Stocke, John T.</p> <p>2002-01-01</p> <p>We present and analyze the optical and X-ray catalogs of moderate-redshift cluster candidates from the ROSAT Optical X-Ray Survey (ROXS). The survey covers the sky area contained in the fields of view of 23 deep archival ROSAT PSPC pointings, 4.8 square degrees. The cross-correlated cluster catalogs were constructed by comparing two independent catalogs extracted from the optical and X-ray bandpasses, using a matched-filter technique for the optical data and a wavelet technique for the X-ray data. We cross-identified cluster candidates in each catalog. As reported in Paper I, the matched-filter technique found optical counterparts for at least 60% (26 out of 43) of the X-ray cluster candidates; the estimated redshifts from the matched-filter algorithm agree with at least 7 of 11 spectroscopic confirmations (Δz ≤ 0.10). The matched-filter technique, with an imaging sensitivity of mI ≈ 23, identified approximately 3 times the number of candidates (155 candidates, 142 with a detection confidence >3σ) found in the X-ray survey of nearly the same area. There are 57 X-ray candidates, 43 of which are unobscured by scattered light or bright stars in the optical images. Twenty-six of these have fairly secure optical counterparts. We find that the matched-filter algorithm, when applied to images with galaxy flux sensitivities of mI ≈ 23, is fairly well matched to discovering z ≤ 1 clusters detected by wavelets in ROSAT PSPC exposures of 8000-60,000 s. The difference in the spurious fractions between the optical and X-ray catalogs (30% and 10%, respectively) cannot account for the difference in source number. In Paper I, we compared the optical and X-ray cluster luminosity functions and found that they are consistent if the relationship between X-ray and optical luminosities is steep. Here, in Paper II, we present the cluster catalogs and a numerical simulation of the ROXS. 
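The matched-filter idea, correlating data against an assumed source profile to maximize detectability, can be sketched in one dimension (an illustrative toy, not the survey's 2-D galaxy-density filter; `matched_filter` and the synthetic data are hypothetical):

```python
import numpy as np

def matched_filter(data, template):
    """Correlate data with a zero-mean, unit-norm template.

    The matched filter maximizes signal-to-noise for a known profile in
    white noise; optical cluster finders apply a 2-D analogue (a
    projected galaxy-density profile) across position and redshift.
    """
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    return np.correlate(data, t, mode="same")

# A Gaussian "cluster" profile buried in noise at position 300.
rng = np.random.default_rng(1)
x = np.arange(1000)
profile = np.exp(-0.5 * ((x - 300) / 10.0) ** 2)
data = 0.5 * profile + rng.normal(0, 0.2, x.size)
tmpl = np.exp(-0.5 * (np.arange(-40, 41) / 10.0) ** 2)
score = matched_filter(data, tmpl)
print(int(np.argmax(score)))  # recovers the position, ~300
```

The wavelet detection used on the X-ray side plays a similar role but decomposes the image across scales instead of assuming one fixed profile.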
We also present color-magnitude plots for several of the cluster candidates, and examine the prominence of the red sequence in each. We find that the X-ray clusters in our survey do not all have a prominent red sequence. We conclude that while the red sequence may be a distinct feature in the color-magnitude plots for virialized massive clusters, it may be less distinct in lower mass clusters of galaxies at even moderate redshifts. Multiple, complementary methods of selecting and defining clusters may be essential, particularly at high redshift where all methods start to run into completeness limits, incomplete understanding of physical evolution, and projection effects.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014APS..DNP.GB142S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014APS..DNP.GB142S"><span>Investigating the Use of the Intel Xeon Phi for Event Reconstruction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sherman, Keegan; Gilfoyle, Gerard</p> <p>2014-09-01</p> <p>The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. 
The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. 
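The linear-algebra kernel mentioned, dense matrix multiplication, can be sketched with a scalar baseline checked against a vectorized path (numpy's BLAS-backed `@` stands in for the MIC's SIMD intrinsics; `matmul_naive` is a hypothetical illustration, not the authors' C++ code):

```python
import numpy as np

def matmul_naive(a, b):
    """Triple-loop matrix multiply: the scalar baseline that vectorized
    implementations (SIMD intrinsics, BLAS) are benchmarked against."""
    n, m, p = a.shape[0], a.shape[1], b.shape[1]
    c = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                c[i, j] += a[i, k] * b[k, j]
    return c

rng = np.random.default_rng(2)
a = rng.normal(size=(5, 5))
b = rng.normal(size=(5, 5))
# A vectorized path must agree with the scalar baseline bit-for-bit in
# structure (same result up to floating-point rounding).
print(np.allclose(matmul_naive(a, b), a @ b))  # True
```

The quoted 69x speedup comes from replacing the inner loops with wide SIMD operations across many cores; correctness checks like this one are the usual first step before benchmarking.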
Work supported by the University of Richmond and the US Department of Energy.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>